00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v23.11" build number 605 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3267 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.029 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.030 The recommended git tool is: git 00:00:00.030 using credential 00000000-0000-0000-0000-000000000002 00:00:00.032 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.046 Fetching changes from the remote Git repository 00:00:00.048 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.065 Using shallow fetch with depth 1 00:00:00.065 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.065 > git --version # timeout=10 00:00:00.091 > git --version # 'git version 2.39.2' 00:00:00.091 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.123 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.123 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.827 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.837 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.847 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD) 00:00:02.847 > git config core.sparsecheckout # timeout=10 00:00:02.858 > git read-tree -mu HEAD # timeout=10 00:00:02.873 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5 00:00:02.889 Commit message: "inventory: add WCP3 to free inventory" 00:00:02.889 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10 00:00:03.000 [Pipeline] Start of Pipeline 00:00:03.016 [Pipeline] library 00:00:03.018 Loading library shm_lib@master 00:00:03.018 Library shm_lib@master is cached. Copying from home. 00:00:03.033 [Pipeline] node 00:00:03.048 Running on WFP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:03.050 [Pipeline] { 00:00:03.060 [Pipeline] catchError 00:00:03.061 [Pipeline] { 00:00:03.072 [Pipeline] wrap 00:00:03.080 [Pipeline] { 00:00:03.087 [Pipeline] stage 00:00:03.088 [Pipeline] { (Prologue) 00:00:03.276 [Pipeline] sh 00:00:03.595 + logger -p user.info -t JENKINS-CI 00:00:03.615 [Pipeline] echo 00:00:03.617 Node: WFP8 00:00:03.624 [Pipeline] sh 00:00:03.918 [Pipeline] setCustomBuildProperty 00:00:03.929 [Pipeline] echo 00:00:03.931 Cleanup processes 00:00:03.936 [Pipeline] sh 00:00:04.226 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:04.226 2082101 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:04.237 [Pipeline] sh 00:00:04.519 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:04.519 ++ grep -v 'sudo pgrep' 00:00:04.519 ++ awk '{print $1}' 00:00:04.519 + sudo kill -9 00:00:04.519 + true 00:00:04.532 [Pipeline] cleanWs 00:00:04.541 [WS-CLEANUP] Deleting project workspace... 00:00:04.541 [WS-CLEANUP] Deferred wipeout is used... 
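The "Cleanup processes" step above greps for anything still running out of the workspace and force-kills it before the wipe. A minimal bash sketch of that logic, reconstructed from the trace (in this run the pgrep output contains only the pgrep invocation itself, so the kill has nothing to do and the trailing "true" keeps the stage green):

    #!/usr/bin/env bash
    # Kill stale SPDK processes left over from a previous run in this workspace.
    WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest
    # List matching processes, drop the pgrep line itself, keep only the PIDs.
    pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
    # With an empty PID list kill -9 fails; "|| true" mirrors the "+ true" fallback above.
    sudo kill -9 $pids || true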
00:00:04.547 [WS-CLEANUP] done 00:00:04.551 [Pipeline] setCustomBuildProperty 00:00:04.563 [Pipeline] sh 00:00:04.841 + sudo git config --global --replace-all safe.directory '*' 00:00:04.904 [Pipeline] httpRequest 00:00:04.928 [Pipeline] echo 00:00:04.929 Sorcerer 10.211.164.101 is alive 00:00:04.935 [Pipeline] httpRequest 00:00:04.939 HttpMethod: GET 00:00:04.939 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:04.940 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:04.958 Response Code: HTTP/1.1 200 OK 00:00:04.958 Success: Status code 200 is in the accepted range: 200,404 00:00:04.959 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:25.778 [Pipeline] sh 00:00:26.062 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:26.081 [Pipeline] httpRequest 00:00:26.104 [Pipeline] echo 00:00:26.107 Sorcerer 10.211.164.101 is alive 00:00:26.116 [Pipeline] httpRequest 00:00:26.121 HttpMethod: GET 00:00:26.122 URL: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:00:26.122 Sending request to url: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:00:26.141 Response Code: HTTP/1.1 200 OK 00:00:26.142 Success: Status code 200 is in the accepted range: 200,404 00:00:26.143 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:01:17.480 [Pipeline] sh 00:01:17.767 + tar --no-same-owner -xf spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:01:20.312 [Pipeline] sh 00:01:20.626 + git -C spdk log --oneline -n5 00:01:20.626 719d03c6a sock/uring: only register net impl if supported 00:01:20.626 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev 00:01:20.626 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO 00:01:20.626 6c7c1f57e accel: add sequence outstanding stat 00:01:20.626 3bc8e6a26 accel: add utility to put task 00:01:20.649 [Pipeline] withCredentials 00:01:20.659 > git --version # timeout=10 00:01:20.671 > git --version # 'git version 2.39.2' 00:01:20.687 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:20.690 [Pipeline] { 00:01:20.698 [Pipeline] retry 00:01:20.700 [Pipeline] { 00:01:20.719 [Pipeline] sh 00:01:21.005 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:01:21.017 [Pipeline] } 00:01:21.041 [Pipeline] // retry 00:01:21.046 [Pipeline] } 00:01:21.069 [Pipeline] // withCredentials 00:01:21.082 [Pipeline] httpRequest 00:01:21.102 [Pipeline] echo 00:01:21.104 Sorcerer 10.211.164.101 is alive 00:01:21.113 [Pipeline] httpRequest 00:01:21.119 HttpMethod: GET 00:01:21.119 URL: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:21.120 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:21.123 Response Code: HTTP/1.1 200 OK 00:01:21.123 Success: Status code 200 is in the accepted range: 200,404 00:01:21.124 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:29.638 [Pipeline] sh 00:01:29.922 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:31.310 [Pipeline] sh 00:01:31.591 + git -C dpdk log --oneline -n5 00:01:31.591 eeb0605f11 version: 23.11.0 00:01:31.591 238778122a doc: 
update release notes for 23.11 00:01:31.591 46aa6b3cfc doc: fix description of RSS features 00:01:31.591 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:31.591 7e421ae345 devtools: support skipping forbid rule check 00:01:31.602 [Pipeline] } 00:01:31.619 [Pipeline] // stage 00:01:31.629 [Pipeline] stage 00:01:31.631 [Pipeline] { (Prepare) 00:01:31.656 [Pipeline] writeFile 00:01:31.674 [Pipeline] sh 00:01:31.957 + logger -p user.info -t JENKINS-CI 00:01:31.975 [Pipeline] sh 00:01:32.264 + logger -p user.info -t JENKINS-CI 00:01:32.277 [Pipeline] sh 00:01:32.560 + cat autorun-spdk.conf 00:01:32.560 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:32.560 SPDK_TEST_NVMF=1 00:01:32.560 SPDK_TEST_NVME_CLI=1 00:01:32.560 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:32.560 SPDK_TEST_NVMF_NICS=e810 00:01:32.560 SPDK_TEST_VFIOUSER=1 00:01:32.560 SPDK_RUN_UBSAN=1 00:01:32.560 NET_TYPE=phy 00:01:32.561 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:32.561 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:32.568 RUN_NIGHTLY=1 00:01:32.573 [Pipeline] readFile 00:01:32.602 [Pipeline] withEnv 00:01:32.604 [Pipeline] { 00:01:32.619 [Pipeline] sh 00:01:32.904 + set -ex 00:01:32.904 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:32.904 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:32.904 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:32.904 ++ SPDK_TEST_NVMF=1 00:01:32.904 ++ SPDK_TEST_NVME_CLI=1 00:01:32.904 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:32.904 ++ SPDK_TEST_NVMF_NICS=e810 00:01:32.904 ++ SPDK_TEST_VFIOUSER=1 00:01:32.904 ++ SPDK_RUN_UBSAN=1 00:01:32.904 ++ NET_TYPE=phy 00:01:32.904 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:32.904 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:32.904 ++ RUN_NIGHTLY=1 00:01:32.904 + case $SPDK_TEST_NVMF_NICS in 00:01:32.904 + DRIVERS=ice 00:01:32.904 + [[ tcp == \r\d\m\a ]] 00:01:32.904 + [[ -n ice ]] 00:01:32.904 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:32.904 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:36.193 rmmod: ERROR: Module irdma is not currently loaded 00:01:36.193 rmmod: ERROR: Module i40iw is not currently loaded 00:01:36.193 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:36.193 + true 00:01:36.193 + for D in $DRIVERS 00:01:36.193 + sudo modprobe ice 00:01:36.193 + exit 0 00:01:36.202 [Pipeline] } 00:01:36.221 [Pipeline] // withEnv 00:01:36.227 [Pipeline] } 00:01:36.243 [Pipeline] // stage 00:01:36.253 [Pipeline] catchError 00:01:36.255 [Pipeline] { 00:01:36.270 [Pipeline] timeout 00:01:36.270 Timeout set to expire in 50 min 00:01:36.272 [Pipeline] { 00:01:36.287 [Pipeline] stage 00:01:36.289 [Pipeline] { (Tests) 00:01:36.304 [Pipeline] sh 00:01:36.625 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:36.625 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:36.625 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:36.625 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:36.625 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:36.625 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:36.625 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:36.625 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:36.625 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:36.625 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:36.625 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:36.625 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:36.625 + source /etc/os-release 00:01:36.625 ++ NAME='Fedora Linux' 00:01:36.625 ++ VERSION='38 (Cloud Edition)' 00:01:36.625 ++ ID=fedora 00:01:36.625 ++ VERSION_ID=38 00:01:36.625 ++ VERSION_CODENAME= 00:01:36.625 ++ PLATFORM_ID=platform:f38 00:01:36.625 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:36.625 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:36.625 ++ LOGO=fedora-logo-icon 00:01:36.625 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:36.625 ++ HOME_URL=https://fedoraproject.org/ 00:01:36.625 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:36.625 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:36.625 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:36.625 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:36.625 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:36.625 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:36.625 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:36.625 ++ SUPPORT_END=2024-05-14 00:01:36.625 ++ VARIANT='Cloud Edition' 00:01:36.625 ++ VARIANT_ID=cloud 00:01:36.625 + uname -a 00:01:36.625 Linux spdk-wfp-08 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:36.625 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:39.161 Hugepages 00:01:39.161 node hugesize free / total 00:01:39.161 node0 1048576kB 0 / 0 00:01:39.161 node0 2048kB 2048 / 2048 00:01:39.161 node1 1048576kB 0 / 0 00:01:39.161 node1 2048kB 0 / 0 00:01:39.161 00:01:39.161 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:39.161 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:39.161 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:01:39.161 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:01:39.161 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:39.161 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:01:39.162 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:39.162 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:39.162 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:39.162 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:01:39.162 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:39.162 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:39.162 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:39.162 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:39.162 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:39.162 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:39.162 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:39.162 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:39.162 + rm -f /tmp/spdk-ld-path 00:01:39.162 + source autorun-spdk.conf 00:01:39.162 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:39.162 ++ SPDK_TEST_NVMF=1 00:01:39.162 ++ SPDK_TEST_NVME_CLI=1 00:01:39.162 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:39.162 ++ SPDK_TEST_NVMF_NICS=e810 00:01:39.162 ++ SPDK_TEST_VFIOUSER=1 00:01:39.162 ++ SPDK_RUN_UBSAN=1 00:01:39.162 ++ NET_TYPE=phy 00:01:39.162 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:39.162 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:39.162 ++ RUN_NIGHTLY=1 00:01:39.162 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:39.162 + [[ -n '' ]] 00:01:39.162 + sudo git config --global --add 
safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:39.162 + for M in /var/spdk/build-*-manifest.txt 00:01:39.162 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:39.162 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:39.162 + for M in /var/spdk/build-*-manifest.txt 00:01:39.162 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:39.162 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:39.162 ++ uname 00:01:39.162 + [[ Linux == \L\i\n\u\x ]] 00:01:39.162 + sudo dmesg -T 00:01:39.162 + sudo dmesg --clear 00:01:39.421 + dmesg_pid=2083590 00:01:39.421 + [[ Fedora Linux == FreeBSD ]] 00:01:39.421 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:39.421 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:39.421 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:39.421 + [[ -x /usr/src/fio-static/fio ]] 00:01:39.421 + sudo dmesg -Tw 00:01:39.421 + export FIO_BIN=/usr/src/fio-static/fio 00:01:39.421 + FIO_BIN=/usr/src/fio-static/fio 00:01:39.421 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:39.421 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:39.421 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:39.421 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:39.421 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:39.421 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:39.421 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:39.421 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:39.421 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:39.421 Test configuration: 00:01:39.421 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:39.421 SPDK_TEST_NVMF=1 00:01:39.421 SPDK_TEST_NVME_CLI=1 00:01:39.421 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:39.421 SPDK_TEST_NVMF_NICS=e810 00:01:39.421 SPDK_TEST_VFIOUSER=1 00:01:39.421 SPDK_RUN_UBSAN=1 00:01:39.421 NET_TYPE=phy 00:01:39.421 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:39.421 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:39.421 RUN_NIGHTLY=1 10:10:24 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:39.421 10:10:24 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:39.421 10:10:24 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:39.421 10:10:24 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:39.421 10:10:24 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:39.421 10:10:24 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:39.421 10:10:24 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:39.421 10:10:24 -- paths/export.sh@5 -- $ export PATH 00:01:39.421 10:10:24 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:39.421 10:10:24 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:39.421 10:10:24 -- common/autobuild_common.sh@444 -- $ date +%s 00:01:39.421 10:10:24 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720944624.XXXXXX 00:01:39.421 10:10:24 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720944624.PyEWiO 00:01:39.421 10:10:24 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:01:39.421 10:10:24 -- common/autobuild_common.sh@450 -- $ '[' -n v23.11 ']' 00:01:39.421 10:10:24 -- common/autobuild_common.sh@451 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:39.421 10:10:24 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:01:39.422 10:10:24 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:39.422 10:10:24 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:39.422 10:10:24 -- common/autobuild_common.sh@460 -- $ get_config_params 00:01:39.422 10:10:24 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:39.422 10:10:24 -- common/autotest_common.sh@10 -- $ set +x 00:01:39.422 10:10:24 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:01:39.422 10:10:24 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:01:39.422 10:10:24 -- pm/common@17 -- $ local monitor 00:01:39.422 10:10:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:39.422 10:10:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:39.422 10:10:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:39.422 10:10:24 -- pm/common@21 -- $ date +%s 00:01:39.422 10:10:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:39.422 10:10:24 -- pm/common@21 -- $ date +%s 00:01:39.422 10:10:24 -- pm/common@25 -- $ sleep 1 00:01:39.422 10:10:24 -- pm/common@21 -- $ date +%s 00:01:39.422 10:10:24 -- pm/common@21 -- $ date +%s 00:01:39.422 10:10:24 -- pm/common@21 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720944624 00:01:39.422 10:10:24 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720944624 00:01:39.422 10:10:24 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720944624 00:01:39.422 10:10:24 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720944624 00:01:39.422 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720944624_collect-vmstat.pm.log 00:01:39.422 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720944624_collect-cpu-load.pm.log 00:01:39.422 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720944624_collect-cpu-temp.pm.log 00:01:39.422 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720944624_collect-bmc-pm.bmc.pm.log 00:01:40.361 10:10:25 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:01:40.361 10:10:25 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:40.361 10:10:25 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:40.361 10:10:25 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:40.361 10:10:25 -- spdk/autobuild.sh@16 -- $ date -u 00:01:40.361 Sun Jul 14 08:10:25 AM UTC 2024 00:01:40.361 10:10:25 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:40.361 v24.09-pre-202-g719d03c6a 00:01:40.361 10:10:25 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:40.361 10:10:25 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:40.361 10:10:25 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:40.361 10:10:25 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:40.361 10:10:25 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:40.361 10:10:25 -- common/autotest_common.sh@10 -- $ set +x 00:01:40.622 ************************************ 00:01:40.622 START TEST ubsan 00:01:40.622 ************************************ 00:01:40.622 10:10:25 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:40.622 using ubsan 00:01:40.622 00:01:40.622 real 0m0.000s 00:01:40.622 user 0m0.000s 00:01:40.622 sys 0m0.000s 00:01:40.622 10:10:25 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:40.622 10:10:25 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:40.622 ************************************ 00:01:40.622 END TEST ubsan 00:01:40.622 ************************************ 00:01:40.622 10:10:25 -- common/autotest_common.sh@1142 -- $ return 0 00:01:40.622 10:10:25 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:01:40.622 10:10:25 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:40.622 10:10:25 -- common/autobuild_common.sh@436 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:40.622 10:10:25 -- common/autotest_common.sh@1099 -- $ '[' 2 -le 1 ']' 00:01:40.622 10:10:25 -- common/autotest_common.sh@1105 -- $ 
xtrace_disable 00:01:40.622 10:10:25 -- common/autotest_common.sh@10 -- $ set +x 00:01:40.622 ************************************ 00:01:40.622 START TEST build_native_dpdk 00:01:40.622 ************************************ 00:01:40.622 10:10:25 build_native_dpdk -- common/autotest_common.sh@1123 -- $ _build_native_dpdk 00:01:40.622 10:10:25 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:40.622 10:10:25 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:40.622 10:10:25 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:40.622 10:10:25 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:01:40.622 10:10:25 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:40.622 10:10:25 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:40.622 10:10:25 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:40.622 10:10:25 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:40.622 10:10:25 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:40.622 10:10:25 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:40.622 10:10:25 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:40.622 10:10:25 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:40.622 10:10:25 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:40.622 10:10:25 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:40.622 10:10:25 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:40.622 10:10:25 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:40.622 10:10:25 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:40.622 10:10:25 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:01:40.622 10:10:25 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:40.622 10:10:25 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:01:40.622 eeb0605f11 version: 23.11.0 00:01:40.622 238778122a doc: update release notes for 23.11 00:01:40.622 46aa6b3cfc doc: fix description of RSS features 00:01:40.622 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:40.622 7e421ae345 devtools: support skipping forbid rule check 00:01:40.622 10:10:25 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:40.622 10:10:25 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:40.622 10:10:25 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:01:40.622 10:10:25 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:40.622 10:10:25 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:40.622 10:10:25 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:40.622 10:10:25 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:40.622 10:10:25 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:40.622 10:10:25 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:40.622 10:10:25 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:40.622 10:10:25 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:40.622 10:10:25 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:40.622 10:10:25 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:40.622 10:10:25 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:40.622 10:10:25 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:40.622 10:10:25 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:01:40.622 10:10:25 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:40.622 10:10:25 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:01:40.622 10:10:25 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:01:40.622 10:10:25 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:01:40.622 10:10:25 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:01:40.622 10:10:25 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:01:40.622 10:10:25 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:01:40.622 10:10:25 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:01:40.622 10:10:25 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:01:40.622 10:10:25 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:01:40.622 10:10:25 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:01:40.622 10:10:25 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:01:40.622 10:10:25 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:01:40.622 10:10:25 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:01:40.622 10:10:25 
build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:01:40.622 10:10:25 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:01:40.622 10:10:25 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:40.622 10:10:25 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 23 00:01:40.622 10:10:25 build_native_dpdk -- scripts/common.sh@350 -- $ local d=23 00:01:40.622 10:10:25 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:01:40.622 10:10:25 build_native_dpdk -- scripts/common.sh@352 -- $ echo 23 00:01:40.622 10:10:25 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=23 00:01:40.622 10:10:25 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:01:40.622 10:10:25 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:01:40.622 10:10:25 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:40.622 10:10:25 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:01:40.622 10:10:25 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:01:40.622 10:10:25 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:40.622 10:10:25 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:01:40.622 10:10:25 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:40.622 patching file config/rte_config.h 00:01:40.622 Hunk #1 succeeded at 60 (offset 1 line). 00:01:40.622 10:10:25 build_native_dpdk -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:01:40.622 10:10:25 build_native_dpdk -- common/autobuild_common.sh@178 -- $ uname -s 00:01:40.622 10:10:25 build_native_dpdk -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:01:40.622 10:10:25 build_native_dpdk -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:40.622 10:10:25 build_native_dpdk -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:44.820 The Meson build system 00:01:44.820 Version: 1.3.1 00:01:44.820 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:44.820 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:01:44.820 Build type: native build 00:01:44.820 Program cat found: YES (/usr/bin/cat) 00:01:44.820 Project name: DPDK 00:01:44.820 Project version: 23.11.0 00:01:44.820 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:44.820 C linker for the host machine: gcc ld.bfd 2.39-16 00:01:44.820 Host machine cpu family: x86_64 00:01:44.820 Host machine cpu: x86_64 00:01:44.820 Message: ## Building in Developer Mode ## 00:01:44.820 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:44.820 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:44.821 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:44.821 Program python3 found: YES (/usr/bin/python3) 00:01:44.821 Program cat found: YES (/usr/bin/cat) 00:01:44.821 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
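The "lt 23.11.0 21.11.0" call traced above is scripts/common.sh comparing the detected DPDK version against 21.11.0 component by component before the script patches rte_config.h and configures the build. A self-contained sketch of that comparison (illustrative function name, not the exact SPDK helper; like the trace it splits version components on '.', '-' and ':' via IFS):

    # Return 0 (true) if version $1 is strictly lower than version $2.
    version_lt() {
        local IFS=.-:            # split components on '.', '-' and ':'
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            local a=${v1[i]:-0} b=${v2[i]:-0}
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1                 # equal versions are not "less than"
    }
    # 23 > 21 in the first component, so this is false -- matching the "return 1" above,
    # which sends the script on to "patch -p1" and the meson configure step.
    version_lt 23.11.0 21.11.0 || echo "23.11.0 is not older than 21.11.0"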
00:01:44.821 Compiler for C supports arguments -march=native: YES 00:01:44.821 Checking for size of "void *" : 8 00:01:44.821 Checking for size of "void *" : 8 (cached) 00:01:44.821 Library m found: YES 00:01:44.821 Library numa found: YES 00:01:44.821 Has header "numaif.h" : YES 00:01:44.821 Library fdt found: NO 00:01:44.821 Library execinfo found: NO 00:01:44.821 Has header "execinfo.h" : YES 00:01:44.821 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:44.821 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:44.821 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:44.821 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:44.821 Run-time dependency openssl found: YES 3.0.9 00:01:44.821 Run-time dependency libpcap found: YES 1.10.4 00:01:44.821 Has header "pcap.h" with dependency libpcap: YES 00:01:44.821 Compiler for C supports arguments -Wcast-qual: YES 00:01:44.821 Compiler for C supports arguments -Wdeprecated: YES 00:01:44.821 Compiler for C supports arguments -Wformat: YES 00:01:44.821 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:44.821 Compiler for C supports arguments -Wformat-security: NO 00:01:44.821 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:44.821 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:44.821 Compiler for C supports arguments -Wnested-externs: YES 00:01:44.821 Compiler for C supports arguments -Wold-style-definition: YES 00:01:44.821 Compiler for C supports arguments -Wpointer-arith: YES 00:01:44.821 Compiler for C supports arguments -Wsign-compare: YES 00:01:44.821 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:44.821 Compiler for C supports arguments -Wundef: YES 00:01:44.821 Compiler for C supports arguments -Wwrite-strings: YES 00:01:44.821 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:44.821 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:44.821 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:44.821 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:44.821 Program objdump found: YES (/usr/bin/objdump) 00:01:44.821 Compiler for C supports arguments -mavx512f: YES 00:01:44.821 Checking if "AVX512 checking" compiles: YES 00:01:44.821 Fetching value of define "__SSE4_2__" : 1 00:01:44.821 Fetching value of define "__AES__" : 1 00:01:44.821 Fetching value of define "__AVX__" : 1 00:01:44.821 Fetching value of define "__AVX2__" : 1 00:01:44.821 Fetching value of define "__AVX512BW__" : 1 00:01:44.821 Fetching value of define "__AVX512CD__" : 1 00:01:44.821 Fetching value of define "__AVX512DQ__" : 1 00:01:44.821 Fetching value of define "__AVX512F__" : 1 00:01:44.821 Fetching value of define "__AVX512VL__" : 1 00:01:44.821 Fetching value of define "__PCLMUL__" : 1 00:01:44.821 Fetching value of define "__RDRND__" : 1 00:01:44.821 Fetching value of define "__RDSEED__" : 1 00:01:44.821 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:44.821 Fetching value of define "__znver1__" : (undefined) 00:01:44.821 Fetching value of define "__znver2__" : (undefined) 00:01:44.821 Fetching value of define "__znver3__" : (undefined) 00:01:44.821 Fetching value of define "__znver4__" : (undefined) 00:01:44.821 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:44.821 Message: lib/log: Defining dependency "log" 00:01:44.821 Message: lib/kvargs: Defining dependency "kvargs" 00:01:44.821 Message: lib/telemetry: Defining dependency 
"telemetry" 00:01:44.821 Checking for function "getentropy" : NO 00:01:44.821 Message: lib/eal: Defining dependency "eal" 00:01:44.821 Message: lib/ring: Defining dependency "ring" 00:01:44.821 Message: lib/rcu: Defining dependency "rcu" 00:01:44.821 Message: lib/mempool: Defining dependency "mempool" 00:01:44.821 Message: lib/mbuf: Defining dependency "mbuf" 00:01:44.821 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:44.821 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:44.821 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:44.821 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:44.821 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:44.821 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:44.821 Compiler for C supports arguments -mpclmul: YES 00:01:44.821 Compiler for C supports arguments -maes: YES 00:01:44.821 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:44.821 Compiler for C supports arguments -mavx512bw: YES 00:01:44.821 Compiler for C supports arguments -mavx512dq: YES 00:01:44.821 Compiler for C supports arguments -mavx512vl: YES 00:01:44.821 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:44.821 Compiler for C supports arguments -mavx2: YES 00:01:44.821 Compiler for C supports arguments -mavx: YES 00:01:44.821 Message: lib/net: Defining dependency "net" 00:01:44.821 Message: lib/meter: Defining dependency "meter" 00:01:44.821 Message: lib/ethdev: Defining dependency "ethdev" 00:01:44.821 Message: lib/pci: Defining dependency "pci" 00:01:44.821 Message: lib/cmdline: Defining dependency "cmdline" 00:01:44.821 Message: lib/metrics: Defining dependency "metrics" 00:01:44.821 Message: lib/hash: Defining dependency "hash" 00:01:44.821 Message: lib/timer: Defining dependency "timer" 00:01:44.821 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:44.821 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:44.821 Fetching value of define "__AVX512CD__" : 1 (cached) 00:01:44.821 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:44.821 Message: lib/acl: Defining dependency "acl" 00:01:44.821 Message: lib/bbdev: Defining dependency "bbdev" 00:01:44.821 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:44.821 Run-time dependency libelf found: YES 0.190 00:01:44.821 Message: lib/bpf: Defining dependency "bpf" 00:01:44.821 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:44.821 Message: lib/compressdev: Defining dependency "compressdev" 00:01:44.821 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:44.821 Message: lib/distributor: Defining dependency "distributor" 00:01:44.821 Message: lib/dmadev: Defining dependency "dmadev" 00:01:44.821 Message: lib/efd: Defining dependency "efd" 00:01:44.821 Message: lib/eventdev: Defining dependency "eventdev" 00:01:44.821 Message: lib/dispatcher: Defining dependency "dispatcher" 00:01:44.821 Message: lib/gpudev: Defining dependency "gpudev" 00:01:44.821 Message: lib/gro: Defining dependency "gro" 00:01:44.821 Message: lib/gso: Defining dependency "gso" 00:01:44.821 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:44.821 Message: lib/jobstats: Defining dependency "jobstats" 00:01:44.821 Message: lib/latencystats: Defining dependency "latencystats" 00:01:44.821 Message: lib/lpm: Defining dependency "lpm" 00:01:44.821 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:44.821 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:44.821 Fetching value of define "__AVX512IFMA__" : 
(undefined) 00:01:44.821 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:44.821 Message: lib/member: Defining dependency "member" 00:01:44.821 Message: lib/pcapng: Defining dependency "pcapng" 00:01:44.821 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:44.821 Message: lib/power: Defining dependency "power" 00:01:44.821 Message: lib/rawdev: Defining dependency "rawdev" 00:01:44.821 Message: lib/regexdev: Defining dependency "regexdev" 00:01:44.821 Message: lib/mldev: Defining dependency "mldev" 00:01:44.821 Message: lib/rib: Defining dependency "rib" 00:01:44.821 Message: lib/reorder: Defining dependency "reorder" 00:01:44.821 Message: lib/sched: Defining dependency "sched" 00:01:44.821 Message: lib/security: Defining dependency "security" 00:01:44.821 Message: lib/stack: Defining dependency "stack" 00:01:44.821 Has header "linux/userfaultfd.h" : YES 00:01:44.821 Has header "linux/vduse.h" : YES 00:01:44.821 Message: lib/vhost: Defining dependency "vhost" 00:01:44.821 Message: lib/ipsec: Defining dependency "ipsec" 00:01:44.821 Message: lib/pdcp: Defining dependency "pdcp" 00:01:44.821 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:44.821 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:44.821 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:44.821 Message: lib/fib: Defining dependency "fib" 00:01:44.821 Message: lib/port: Defining dependency "port" 00:01:44.821 Message: lib/pdump: Defining dependency "pdump" 00:01:44.821 Message: lib/table: Defining dependency "table" 00:01:44.821 Message: lib/pipeline: Defining dependency "pipeline" 00:01:44.821 Message: lib/graph: Defining dependency "graph" 00:01:44.821 Message: lib/node: Defining dependency "node" 00:01:44.821 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:46.200 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:46.200 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:46.200 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:46.200 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:46.201 Compiler for C supports arguments -Wno-unused-value: YES 00:01:46.201 Compiler for C supports arguments -Wno-format: YES 00:01:46.201 Compiler for C supports arguments -Wno-format-security: YES 00:01:46.201 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:46.201 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:46.201 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:46.201 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:46.201 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:46.201 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:46.201 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:46.201 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:46.201 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:46.201 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:46.201 Has header "sys/epoll.h" : YES 00:01:46.201 Program doxygen found: YES (/usr/bin/doxygen) 00:01:46.201 Configuring doxy-api-html.conf using configuration 00:01:46.201 Configuring doxy-api-man.conf using configuration 00:01:46.201 Program mandb found: YES (/usr/bin/mandb) 00:01:46.201 Program sphinx-build found: NO 00:01:46.201 Configuring rte_build_config.h using configuration 00:01:46.201 Message: 00:01:46.201 ================= 00:01:46.201 Applications Enabled 00:01:46.201 
================= 00:01:46.201 00:01:46.201 apps: 00:01:46.201 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:01:46.201 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:01:46.201 test-pmd, test-regex, test-sad, test-security-perf, 00:01:46.201 00:01:46.201 Message: 00:01:46.201 ================= 00:01:46.201 Libraries Enabled 00:01:46.201 ================= 00:01:46.201 00:01:46.201 libs: 00:01:46.201 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:46.201 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:01:46.201 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:01:46.201 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:01:46.201 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:01:46.201 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:01:46.201 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:01:46.201 00:01:46.201 00:01:46.201 Message: 00:01:46.201 =============== 00:01:46.201 Drivers Enabled 00:01:46.201 =============== 00:01:46.201 00:01:46.201 common: 00:01:46.201 00:01:46.201 bus: 00:01:46.201 pci, vdev, 00:01:46.201 mempool: 00:01:46.201 ring, 00:01:46.201 dma: 00:01:46.201 00:01:46.201 net: 00:01:46.201 i40e, 00:01:46.201 raw: 00:01:46.201 00:01:46.201 crypto: 00:01:46.201 00:01:46.201 compress: 00:01:46.201 00:01:46.201 regex: 00:01:46.201 00:01:46.201 ml: 00:01:46.201 00:01:46.201 vdpa: 00:01:46.201 00:01:46.201 event: 00:01:46.201 00:01:46.201 baseband: 00:01:46.201 00:01:46.201 gpu: 00:01:46.201 00:01:46.201 00:01:46.201 Message: 00:01:46.201 ================= 00:01:46.201 Content Skipped 00:01:46.201 ================= 00:01:46.201 00:01:46.201 apps: 00:01:46.201 00:01:46.201 libs: 00:01:46.201 00:01:46.201 drivers: 00:01:46.201 common/cpt: not in enabled drivers build config 00:01:46.201 common/dpaax: not in enabled drivers build config 00:01:46.201 common/iavf: not in enabled drivers build config 00:01:46.201 common/idpf: not in enabled drivers build config 00:01:46.201 common/mvep: not in enabled drivers build config 00:01:46.201 common/octeontx: not in enabled drivers build config 00:01:46.201 bus/auxiliary: not in enabled drivers build config 00:01:46.201 bus/cdx: not in enabled drivers build config 00:01:46.201 bus/dpaa: not in enabled drivers build config 00:01:46.201 bus/fslmc: not in enabled drivers build config 00:01:46.201 bus/ifpga: not in enabled drivers build config 00:01:46.201 bus/platform: not in enabled drivers build config 00:01:46.201 bus/vmbus: not in enabled drivers build config 00:01:46.201 common/cnxk: not in enabled drivers build config 00:01:46.201 common/mlx5: not in enabled drivers build config 00:01:46.201 common/nfp: not in enabled drivers build config 00:01:46.201 common/qat: not in enabled drivers build config 00:01:46.201 common/sfc_efx: not in enabled drivers build config 00:01:46.201 mempool/bucket: not in enabled drivers build config 00:01:46.201 mempool/cnxk: not in enabled drivers build config 00:01:46.201 mempool/dpaa: not in enabled drivers build config 00:01:46.201 mempool/dpaa2: not in enabled drivers build config 00:01:46.201 mempool/octeontx: not in enabled drivers build config 00:01:46.201 mempool/stack: not in enabled drivers build config 00:01:46.201 dma/cnxk: not in enabled drivers build config 00:01:46.201 dma/dpaa: not in enabled drivers build config 00:01:46.201 dma/dpaa2: not in enabled drivers build 
config 00:01:46.201 dma/hisilicon: not in enabled drivers build config 00:01:46.201 dma/idxd: not in enabled drivers build config 00:01:46.201 dma/ioat: not in enabled drivers build config 00:01:46.201 dma/skeleton: not in enabled drivers build config 00:01:46.201 net/af_packet: not in enabled drivers build config 00:01:46.201 net/af_xdp: not in enabled drivers build config 00:01:46.201 net/ark: not in enabled drivers build config 00:01:46.201 net/atlantic: not in enabled drivers build config 00:01:46.201 net/avp: not in enabled drivers build config 00:01:46.201 net/axgbe: not in enabled drivers build config 00:01:46.201 net/bnx2x: not in enabled drivers build config 00:01:46.201 net/bnxt: not in enabled drivers build config 00:01:46.201 net/bonding: not in enabled drivers build config 00:01:46.201 net/cnxk: not in enabled drivers build config 00:01:46.201 net/cpfl: not in enabled drivers build config 00:01:46.201 net/cxgbe: not in enabled drivers build config 00:01:46.201 net/dpaa: not in enabled drivers build config 00:01:46.201 net/dpaa2: not in enabled drivers build config 00:01:46.201 net/e1000: not in enabled drivers build config 00:01:46.201 net/ena: not in enabled drivers build config 00:01:46.201 net/enetc: not in enabled drivers build config 00:01:46.201 net/enetfec: not in enabled drivers build config 00:01:46.201 net/enic: not in enabled drivers build config 00:01:46.201 net/failsafe: not in enabled drivers build config 00:01:46.201 net/fm10k: not in enabled drivers build config 00:01:46.201 net/gve: not in enabled drivers build config 00:01:46.201 net/hinic: not in enabled drivers build config 00:01:46.201 net/hns3: not in enabled drivers build config 00:01:46.201 net/iavf: not in enabled drivers build config 00:01:46.201 net/ice: not in enabled drivers build config 00:01:46.201 net/idpf: not in enabled drivers build config 00:01:46.201 net/igc: not in enabled drivers build config 00:01:46.201 net/ionic: not in enabled drivers build config 00:01:46.201 net/ipn3ke: not in enabled drivers build config 00:01:46.201 net/ixgbe: not in enabled drivers build config 00:01:46.201 net/mana: not in enabled drivers build config 00:01:46.201 net/memif: not in enabled drivers build config 00:01:46.201 net/mlx4: not in enabled drivers build config 00:01:46.201 net/mlx5: not in enabled drivers build config 00:01:46.201 net/mvneta: not in enabled drivers build config 00:01:46.201 net/mvpp2: not in enabled drivers build config 00:01:46.201 net/netvsc: not in enabled drivers build config 00:01:46.201 net/nfb: not in enabled drivers build config 00:01:46.201 net/nfp: not in enabled drivers build config 00:01:46.201 net/ngbe: not in enabled drivers build config 00:01:46.201 net/null: not in enabled drivers build config 00:01:46.201 net/octeontx: not in enabled drivers build config 00:01:46.201 net/octeon_ep: not in enabled drivers build config 00:01:46.201 net/pcap: not in enabled drivers build config 00:01:46.201 net/pfe: not in enabled drivers build config 00:01:46.201 net/qede: not in enabled drivers build config 00:01:46.201 net/ring: not in enabled drivers build config 00:01:46.201 net/sfc: not in enabled drivers build config 00:01:46.201 net/softnic: not in enabled drivers build config 00:01:46.201 net/tap: not in enabled drivers build config 00:01:46.201 net/thunderx: not in enabled drivers build config 00:01:46.201 net/txgbe: not in enabled drivers build config 00:01:46.201 net/vdev_netvsc: not in enabled drivers build config 00:01:46.201 net/vhost: not in enabled drivers build config 
00:01:46.201 net/virtio: not in enabled drivers build config 00:01:46.201 net/vmxnet3: not in enabled drivers build config 00:01:46.201 raw/cnxk_bphy: not in enabled drivers build config 00:01:46.201 raw/cnxk_gpio: not in enabled drivers build config 00:01:46.201 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:46.201 raw/ifpga: not in enabled drivers build config 00:01:46.201 raw/ntb: not in enabled drivers build config 00:01:46.201 raw/skeleton: not in enabled drivers build config 00:01:46.201 crypto/armv8: not in enabled drivers build config 00:01:46.201 crypto/bcmfs: not in enabled drivers build config 00:01:46.201 crypto/caam_jr: not in enabled drivers build config 00:01:46.201 crypto/ccp: not in enabled drivers build config 00:01:46.201 crypto/cnxk: not in enabled drivers build config 00:01:46.201 crypto/dpaa_sec: not in enabled drivers build config 00:01:46.201 crypto/dpaa2_sec: not in enabled drivers build config 00:01:46.201 crypto/ipsec_mb: not in enabled drivers build config 00:01:46.201 crypto/mlx5: not in enabled drivers build config 00:01:46.201 crypto/mvsam: not in enabled drivers build config 00:01:46.201 crypto/nitrox: not in enabled drivers build config 00:01:46.201 crypto/null: not in enabled drivers build config 00:01:46.201 crypto/octeontx: not in enabled drivers build config 00:01:46.201 crypto/openssl: not in enabled drivers build config 00:01:46.201 crypto/scheduler: not in enabled drivers build config 00:01:46.201 crypto/uadk: not in enabled drivers build config 00:01:46.201 crypto/virtio: not in enabled drivers build config 00:01:46.201 compress/isal: not in enabled drivers build config 00:01:46.201 compress/mlx5: not in enabled drivers build config 00:01:46.201 compress/octeontx: not in enabled drivers build config 00:01:46.201 compress/zlib: not in enabled drivers build config 00:01:46.201 regex/mlx5: not in enabled drivers build config 00:01:46.201 regex/cn9k: not in enabled drivers build config 00:01:46.201 ml/cnxk: not in enabled drivers build config 00:01:46.201 vdpa/ifc: not in enabled drivers build config 00:01:46.201 vdpa/mlx5: not in enabled drivers build config 00:01:46.201 vdpa/nfp: not in enabled drivers build config 00:01:46.201 vdpa/sfc: not in enabled drivers build config 00:01:46.201 event/cnxk: not in enabled drivers build config 00:01:46.201 event/dlb2: not in enabled drivers build config 00:01:46.202 event/dpaa: not in enabled drivers build config 00:01:46.202 event/dpaa2: not in enabled drivers build config 00:01:46.202 event/dsw: not in enabled drivers build config 00:01:46.202 event/opdl: not in enabled drivers build config 00:01:46.202 event/skeleton: not in enabled drivers build config 00:01:46.202 event/sw: not in enabled drivers build config 00:01:46.202 event/octeontx: not in enabled drivers build config 00:01:46.202 baseband/acc: not in enabled drivers build config 00:01:46.202 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:46.202 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:46.202 baseband/la12xx: not in enabled drivers build config 00:01:46.202 baseband/null: not in enabled drivers build config 00:01:46.202 baseband/turbo_sw: not in enabled drivers build config 00:01:46.202 gpu/cuda: not in enabled drivers build config 00:01:46.202 00:01:46.202 00:01:46.202 Build targets in project: 217 00:01:46.202 00:01:46.202 DPDK 23.11.0 00:01:46.202 00:01:46.202 User defined options 00:01:46.202 libdir : lib 00:01:46.202 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:46.202 
c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:46.202 c_link_args : 00:01:46.202 enable_docs : false 00:01:46.202 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:46.202 enable_kmods : false 00:01:46.202 machine : native 00:01:46.202 tests : false 00:01:46.202 00:01:46.202 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:46.202 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:01:46.202 10:10:31 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j96 00:01:46.468 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:46.468 [1/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:46.468 [2/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:46.468 [3/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:46.468 [4/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:46.468 [5/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:46.468 [6/707] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:46.468 [7/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:46.468 [8/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:46.468 [9/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:46.468 [10/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:46.468 [11/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:46.729 [12/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:46.729 [13/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:46.729 [14/707] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:46.729 [15/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:46.729 [16/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:46.729 [17/707] Linking static target lib/librte_kvargs.a 00:01:46.729 [18/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:46.729 [19/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:46.729 [20/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:46.729 [21/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:46.729 [22/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:46.729 [23/707] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:46.729 [24/707] Linking static target lib/librte_log.a 00:01:46.729 [25/707] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:46.729 [26/707] Linking static target lib/librte_pci.a 00:01:46.729 [27/707] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:46.729 [28/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:46.729 [29/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:46.729 [30/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:46.729 [31/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:46.730 [32/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:46.730 [33/707] Compiling C 
object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:46.989 [34/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:46.989 [35/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:46.989 [36/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:46.989 [37/707] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.989 [38/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:46.989 [39/707] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.989 [40/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:46.989 [41/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:46.989 [42/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:46.989 [43/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:46.989 [44/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:46.989 [45/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:46.989 [46/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:46.989 [47/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:46.989 [48/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:47.250 [49/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:47.250 [50/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:47.250 [51/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:47.250 [52/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:47.250 [53/707] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:47.250 [54/707] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:47.250 [55/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:47.250 [56/707] Linking static target lib/librte_meter.a 00:01:47.250 [57/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:47.250 [58/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:47.250 [59/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:47.250 [60/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:47.250 [61/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:47.250 [62/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:47.250 [63/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:47.250 [64/707] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:47.250 [65/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:47.250 [66/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:47.250 [67/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:47.250 [68/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:47.250 [69/707] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:47.250 [70/707] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:47.250 [71/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:47.250 [72/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:47.250 
[73/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:47.250 [74/707] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:47.251 [75/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:47.251 [76/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:47.251 [77/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:47.251 [78/707] Linking static target lib/librte_ring.a 00:01:47.251 [79/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:47.251 [80/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:47.251 [81/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:47.251 [82/707] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:47.251 [83/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:47.251 [84/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:47.251 [85/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:47.251 [86/707] Linking static target lib/librte_cmdline.a 00:01:47.251 [87/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:47.251 [88/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:47.251 [89/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:47.251 [90/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:47.251 [91/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:47.251 [92/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:47.251 [93/707] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:47.512 [94/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:47.512 [95/707] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.512 [96/707] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:47.512 [97/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:47.512 [98/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:47.512 [99/707] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:47.512 [100/707] Linking target lib/librte_log.so.24.0 00:01:47.512 [101/707] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:47.512 [102/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:47.512 [103/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:47.512 [104/707] Linking static target lib/librte_net.a 00:01:47.512 [105/707] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:47.512 [106/707] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.512 [107/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:47.512 [108/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:47.512 [109/707] Linking static target lib/librte_metrics.a 00:01:47.512 [110/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:47.512 [111/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:47.512 [112/707] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:47.512 [113/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 
00:01:47.512 [114/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:47.512 [115/707] Linking static target lib/librte_cfgfile.a 00:01:47.512 [116/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:47.512 [117/707] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:47.512 [118/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:47.512 [119/707] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:47.512 [120/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:47.772 [121/707] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.772 [122/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:47.772 [123/707] Linking target lib/librte_kvargs.so.24.0 00:01:47.772 [124/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:47.772 [125/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:47.772 [126/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:47.772 [127/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:47.772 [128/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:47.772 [129/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:47.772 [130/707] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:47.772 [131/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:47.772 [132/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:47.772 [133/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:47.772 [134/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:47.772 [135/707] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.772 [136/707] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:47.772 [137/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:47.772 [138/707] Linking static target lib/librte_bitratestats.a 00:01:47.772 [139/707] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:47.772 [140/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:47.772 [141/707] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:47.772 [142/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:47.772 [143/707] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:47.772 [144/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:47.772 [145/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:47.772 [146/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:47.772 [147/707] Linking static target lib/librte_mempool.a 00:01:47.772 [148/707] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:47.772 [149/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:47.772 [150/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:48.031 [151/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:48.031 [152/707] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:48.031 [153/707] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:48.031 [154/707] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:48.031 [155/707] Linking static target lib/librte_timer.a 00:01:48.031 [156/707] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:48.031 [157/707] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.031 [158/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:48.031 [159/707] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:48.031 [160/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:48.031 [161/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:48.031 [162/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:48.031 [163/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:48.031 [164/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:48.031 [165/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:48.031 [166/707] Linking static target lib/librte_compressdev.a 00:01:48.031 [167/707] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.031 [168/707] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:48.031 [169/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:48.031 [170/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:48.031 [171/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:48.031 [172/707] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:48.031 [173/707] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.031 [174/707] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:48.031 [175/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:48.031 [176/707] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:48.031 [177/707] Linking static target lib/librte_jobstats.a 00:01:48.291 [178/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:48.291 [179/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:01:48.291 [180/707] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:48.291 [181/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:48.291 [182/707] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:48.291 [183/707] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:01:48.291 [184/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:48.291 [185/707] Linking static target lib/librte_rcu.a 00:01:48.291 [186/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:48.291 [187/707] Linking static target lib/librte_bbdev.a 00:01:48.291 [188/707] Linking static target lib/librte_dispatcher.a 00:01:48.291 [189/707] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:48.291 [190/707] Linking static target lib/librte_telemetry.a 00:01:48.291 [191/707] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:48.291 [192/707] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:48.291 [193/707] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:48.291 [194/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:48.291 [195/707] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:48.291 
[196/707] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:48.291 [197/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:48.291 [198/707] Linking static target lib/librte_eal.a 00:01:48.291 [199/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:48.291 [200/707] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:48.291 [201/707] Linking static target lib/librte_gro.a 00:01:48.291 [202/707] Linking static target lib/librte_gpudev.a 00:01:48.291 [203/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:48.291 [204/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:48.291 [205/707] Linking static target lib/librte_dmadev.a 00:01:48.291 [206/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:48.291 [207/707] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:48.291 [208/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:48.291 [209/707] Linking static target lib/librte_distributor.a 00:01:48.291 [210/707] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:01:48.557 [211/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:01:48.557 [212/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:01:48.557 [213/707] Linking static target lib/librte_latencystats.a 00:01:48.557 [214/707] Linking static target lib/member/libsketch_avx512_tmp.a 00:01:48.557 [215/707] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.557 [216/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:48.557 [217/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:48.557 [218/707] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:48.557 [219/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:48.557 [220/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:48.557 [221/707] Linking static target lib/librte_gso.a 00:01:48.557 [222/707] Linking static target lib/librte_mbuf.a 00:01:48.557 [223/707] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:48.557 [224/707] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:48.557 [225/707] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.557 [226/707] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:48.557 [227/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:01:48.557 [228/707] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:48.557 [229/707] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:48.557 [230/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:48.557 [231/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:48.557 [232/707] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.557 [233/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:48.557 [234/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:48.557 [235/707] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:48.557 [236/707] Linking static target lib/librte_ip_frag.a 00:01:48.557 [237/707] Compiling C 
object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:48.557 [238/707] Linking static target lib/librte_stack.a 00:01:48.557 [239/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:48.557 [240/707] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.557 [241/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:01:48.821 [242/707] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:48.821 [243/707] Linking static target lib/librte_regexdev.a 00:01:48.821 [244/707] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:48.821 [245/707] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.821 [246/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:01:48.821 [247/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:48.821 [248/707] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.821 [249/707] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.821 [250/707] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.821 [251/707] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.821 [252/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:01:48.821 [253/707] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:48.821 [254/707] Linking static target lib/librte_mldev.a 00:01:48.821 [255/707] Linking static target lib/librte_rawdev.a 00:01:48.821 [256/707] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:48.821 [257/707] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.821 [258/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:48.821 [259/707] Linking static target lib/librte_power.a 00:01:48.821 [260/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:48.821 [261/707] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:48.821 [262/707] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:48.821 [263/707] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.821 [264/707] Linking static target lib/librte_pcapng.a 00:01:48.821 [265/707] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:48.821 [266/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:01:48.821 [267/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:48.821 [268/707] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:48.821 [269/707] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.821 [270/707] Linking static target lib/librte_bpf.a 00:01:48.821 [271/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:01:48.821 [272/707] Linking static target lib/librte_reorder.a 00:01:48.821 [273/707] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.821 [274/707] Linking target lib/librte_telemetry.so.24.0 00:01:48.821 [275/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:01:48.821 [276/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:49.080 [277/707] Generating lib/dmadev.sym_chk 
with a custom command (wrapped by meson to capture output) 00:01:49.080 [278/707] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:49.080 [279/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:49.080 [280/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:49.080 [281/707] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.080 [282/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:49.080 [283/707] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:49.080 [284/707] Linking static target lib/librte_security.a 00:01:49.080 [285/707] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:49.080 [286/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:01:49.080 [287/707] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:49.080 [288/707] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:49.080 [289/707] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.080 [290/707] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:49.080 [291/707] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:49.080 [292/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:49.080 [293/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:49.080 [294/707] Linking static target lib/librte_lpm.a 00:01:49.346 [295/707] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:49.346 [296/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:01:49.346 [297/707] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.346 [298/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:49.346 [299/707] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:01:49.346 [300/707] Linking static target lib/librte_rib.a 00:01:49.346 [301/707] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:01:49.346 [302/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:49.346 [303/707] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:49.346 [304/707] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.346 [305/707] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:49.346 [306/707] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.346 [307/707] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:49.346 [308/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:49.346 [309/707] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.346 [310/707] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:49.346 [311/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:49.346 [312/707] Linking static target lib/librte_efd.a 00:01:49.605 [313/707] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:49.605 [314/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:49.605 [315/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:49.605 [316/707] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.605 [317/707] Compiling C 
object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:49.605 [318/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:49.605 [319/707] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:49.605 [320/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:49.605 [321/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:49.605 [322/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:49.605 [323/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:49.605 [324/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:49.605 [325/707] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:49.605 [326/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:49.605 [327/707] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:49.605 [328/707] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:49.605 [329/707] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.605 [330/707] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:49.605 [331/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:49.605 [332/707] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:49.605 [333/707] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.605 [334/707] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:49.605 [335/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:01:49.605 [336/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:49.605 [337/707] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:49.605 [338/707] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.605 [339/707] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.867 [340/707] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:49.867 [341/707] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.867 [342/707] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:49.867 [343/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:49.867 [344/707] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.867 [345/707] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:49.867 [346/707] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:01:49.867 [347/707] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:49.867 [348/707] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:01:49.867 [349/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:49.867 [350/707] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:49.867 [351/707] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:49.867 [352/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:49.867 [353/707] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:49.867 [354/707] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:49.867 [355/707] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:49.867 [356/707] Compiling C object 
lib/librte_node.a.p/node_ethdev_tx.c.o 00:01:49.867 [357/707] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:01:49.867 [358/707] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.129 [359/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:01:50.129 [360/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:01:50.129 [361/707] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:01:50.129 [362/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:50.129 [363/707] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:01:50.129 [364/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:50.129 [365/707] Linking static target lib/librte_fib.a 00:01:50.129 [366/707] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:01:50.129 [367/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:01:50.129 [368/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:50.129 [369/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:50.129 [370/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:50.129 [371/707] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:01:50.129 [372/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:50.129 [373/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:50.129 [374/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:50.129 [375/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:50.387 [376/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:50.387 [377/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:50.388 [378/707] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:50.388 [379/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:50.388 [380/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:01:50.388 [381/707] Linking static target lib/librte_pdump.a 00:01:50.388 [382/707] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:50.388 [383/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:50.388 [384/707] Linking static target lib/librte_graph.a 00:01:50.388 [385/707] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:01:50.388 [386/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:50.388 [387/707] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:01:50.388 [388/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:50.388 [389/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:01:50.388 [390/707] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:01:50.388 [391/707] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:01:50.388 [392/707] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:01:50.388 [393/707] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:01:50.388 [394/707] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:01:50.388 [395/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:50.651 [396/707] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:01:50.651 [397/707] Compiling C object 
lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:50.651 [398/707] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:50.651 [399/707] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:01:50.651 [400/707] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:01:50.651 [401/707] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.651 [402/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:01:50.651 [403/707] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:50.651 [404/707] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:01:50.651 [405/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:01:50.651 [406/707] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:50.651 [407/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:50.651 [408/707] Linking static target drivers/librte_bus_vdev.a 00:01:50.651 [409/707] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:50.651 [410/707] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:01:50.651 [411/707] Linking static target lib/librte_cryptodev.a 00:01:50.651 [412/707] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.651 [413/707] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:01:50.651 [414/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:01:50.651 [415/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:01:50.651 [416/707] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.651 [417/707] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:01:50.651 [418/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:01:50.651 [419/707] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:01:50.651 [420/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:01:50.651 [421/707] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:50.651 [422/707] Linking static target lib/librte_table.a 00:01:50.651 [423/707] Linking static target lib/librte_sched.a 00:01:50.651 [424/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:01:50.651 [425/707] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:01:50.917 [426/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:01:50.917 [427/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:50.917 [428/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:01:50.917 [429/707] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:01:50.917 [430/707] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:50.917 [431/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:01:50.917 [432/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:01:50.917 [433/707] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:50.917 [434/707] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:50.917 [435/707] Linking static target drivers/librte_bus_pci.a 00:01:50.917 [436/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 
00:01:50.917 [437/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:01:50.917 [438/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:01:50.917 [439/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:01:50.917 [440/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:01:50.917 [441/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:50.917 [442/707] Linking static target lib/librte_ipsec.a 00:01:50.917 [443/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:01:50.917 [444/707] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:01:50.917 [445/707] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:51.176 [446/707] Linking static target lib/librte_member.a 00:01:51.176 [447/707] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:01:51.176 [448/707] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.176 [449/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:01:51.176 [450/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:01:51.176 [451/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:01:51.176 [452/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:01:51.176 [453/707] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.176 [454/707] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:51.177 [455/707] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:51.177 [456/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:01:51.177 [457/707] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:01:51.177 [458/707] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:51.177 [459/707] Linking static target lib/librte_hash.a 00:01:51.177 [460/707] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:01:51.177 [461/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:01:51.177 [462/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:01:51.177 [463/707] Linking static target lib/librte_node.a 00:01:51.436 [464/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:01:51.436 [465/707] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:51.436 [466/707] Linking static target lib/acl/libavx2_tmp.a 00:01:51.436 [467/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:01:51.436 [468/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:01:51.436 [469/707] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.436 [470/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:01:51.436 [471/707] Linking static target lib/librte_pdcp.a 00:01:51.436 [472/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:01:51.436 [473/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:01:51.436 [474/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:01:51.436 [475/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:01:51.436 
[476/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:01:51.436 [477/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:01:51.436 [478/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:01:51.436 [479/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:01:51.436 [480/707] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:01:51.437 [481/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:01:51.437 [482/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:01:51.437 [483/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:01:51.437 [484/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:01:51.437 [485/707] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.437 [486/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:01:51.437 [487/707] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.437 [488/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:01:51.437 [489/707] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:51.437 [490/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:51.695 [491/707] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:51.695 [492/707] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:51.695 [493/707] Linking static target drivers/librte_mempool_ring.a 00:01:51.695 [494/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:01:51.695 [495/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:01:51.695 [496/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:01:51.695 [497/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:01:51.695 [498/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:01:51.695 [499/707] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.695 [500/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:01:51.695 [501/707] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:51.695 [502/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:01:51.695 [503/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:01:51.695 [504/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:01:51.695 [505/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:01:51.695 [506/707] Linking static target lib/librte_port.a 00:01:51.695 [507/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:01:51.695 [508/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:01:51.695 [509/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:51.695 [510/707] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.695 [511/707] Compiling C object 
app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:01:51.695 [512/707] Linking static target lib/librte_eventdev.a 00:01:51.695 [513/707] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:01:51.695 [514/707] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:01:51.695 [515/707] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.695 [516/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:01:51.695 [517/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:01:51.695 [518/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:01:51.695 [519/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:01:51.695 [520/707] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.695 [521/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:01:51.695 [522/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:01:51.695 [523/707] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:01:51.695 [524/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:01:51.953 [525/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:01:51.953 [526/707] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.953 [527/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:01:51.953 [528/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:01:51.953 [529/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:01:51.953 [530/707] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:01:51.953 [531/707] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:01:51.953 [532/707] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:01:51.953 [533/707] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:01:51.953 [534/707] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:01:51.953 [535/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:01:51.953 [536/707] Linking static target lib/librte_acl.a 00:01:51.953 [537/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:01:51.953 [538/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:01:52.211 [539/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:01:52.211 [540/707] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:01:52.211 [541/707] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:01:52.211 [542/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:01:52.211 [543/707] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:01:52.211 [544/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:01:52.211 [545/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:01:52.211 [546/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:01:52.211 [547/707] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.211 [548/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:01:52.211 [549/707] Compiling C object 
drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:01:52.211 [550/707] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:01:52.211 [551/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:01:52.211 [552/707] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:01:52.211 [553/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:01:52.211 [554/707] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:01:52.211 [555/707] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:01:52.211 [556/707] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:01:52.211 [557/707] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:01:52.211 [558/707] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.211 [559/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:01:52.211 [560/707] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:01:52.211 [561/707] Linking static target drivers/net/i40e/base/libi40e_base.a 00:01:52.468 [562/707] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.468 [563/707] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:01:52.468 [564/707] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:01:52.468 [565/707] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:01:52.468 [566/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:01:52.468 [567/707] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:01:52.468 [568/707] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:01:52.468 [569/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:01:52.726 [570/707] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:01:52.984 [571/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:52.984 [572/707] Linking static target lib/librte_ethdev.a 00:01:52.984 [573/707] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:01:52.984 [574/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:01:53.243 [575/707] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:01:53.502 [576/707] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:01:53.502 [577/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:01:53.761 [578/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:01:53.761 [579/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:01:54.019 [580/707] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:01:54.618 [581/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:01:54.618 [582/707] Linking static target drivers/libtmp_rte_net_i40e.a 00:01:54.618 [583/707] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.618 [584/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:01:54.877 [585/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:54.877 [586/707] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:01:54.877 [587/707] Compiling C object 
drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:54.877 [588/707] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:54.877 [589/707] Linking static target drivers/librte_net_i40e.a 00:01:55.814 [590/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:01:55.814 [591/707] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.383 [592/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:01:58.286 [593/707] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.286 [594/707] Linking target lib/librte_eal.so.24.0 00:01:58.286 [595/707] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:58.286 [596/707] Linking target lib/librte_ring.so.24.0 00:01:58.286 [597/707] Linking target lib/librte_meter.so.24.0 00:01:58.286 [598/707] Linking target lib/librte_pci.so.24.0 00:01:58.286 [599/707] Linking target lib/librte_timer.so.24.0 00:01:58.286 [600/707] Linking target lib/librte_cfgfile.so.24.0 00:01:58.286 [601/707] Linking target lib/librte_jobstats.so.24.0 00:01:58.286 [602/707] Linking target lib/librte_rawdev.so.24.0 00:01:58.286 [603/707] Linking target lib/librte_stack.so.24.0 00:01:58.286 [604/707] Linking target lib/librte_dmadev.so.24.0 00:01:58.286 [605/707] Linking target drivers/librte_bus_vdev.so.24.0 00:01:58.286 [606/707] Linking target lib/librte_acl.so.24.0 00:01:58.286 [607/707] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:58.286 [608/707] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:58.286 [609/707] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:01:58.286 [610/707] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:01:58.286 [611/707] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:58.286 [612/707] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:58.286 [613/707] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:58.286 [614/707] Linking target lib/librte_rcu.so.24.0 00:01:58.286 [615/707] Linking target lib/librte_mempool.so.24.0 00:01:58.544 [616/707] Linking target drivers/librte_bus_pci.so.24.0 00:01:58.544 [617/707] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:58.544 [618/707] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:01:58.544 [619/707] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:58.544 [620/707] Linking target lib/librte_rib.so.24.0 00:01:58.544 [621/707] Linking target drivers/librte_mempool_ring.so.24.0 00:01:58.544 [622/707] Linking target lib/librte_mbuf.so.24.0 00:01:58.802 [623/707] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:58.802 [624/707] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:01:58.802 [625/707] Linking target lib/librte_distributor.so.24.0 00:01:58.802 [626/707] Linking target lib/librte_compressdev.so.24.0 00:01:58.802 [627/707] Linking target lib/librte_bbdev.so.24.0 00:01:58.802 [628/707] Linking target lib/librte_reorder.so.24.0 00:01:58.802 [629/707] Linking target lib/librte_cryptodev.so.24.0 00:01:58.802 [630/707] Linking target lib/librte_net.so.24.0 
00:01:58.802 [631/707] Linking target lib/librte_regexdev.so.24.0 00:01:58.802 [632/707] Linking target lib/librte_gpudev.so.24.0 00:01:58.802 [633/707] Linking target lib/librte_mldev.so.24.0 00:01:58.802 [634/707] Linking target lib/librte_fib.so.24.0 00:01:58.802 [635/707] Linking target lib/librte_sched.so.24.0 00:01:58.802 [636/707] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:01:58.802 [637/707] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:58.802 [638/707] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:01:58.802 [639/707] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:59.060 [640/707] Linking target lib/librte_cmdline.so.24.0 00:01:59.060 [641/707] Linking target lib/librte_hash.so.24.0 00:01:59.060 [642/707] Linking target lib/librte_security.so.24.0 00:01:59.060 [643/707] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:59.060 [644/707] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:01:59.060 [645/707] Linking target lib/librte_efd.so.24.0 00:01:59.060 [646/707] Linking target lib/librte_lpm.so.24.0 00:01:59.060 [647/707] Linking target lib/librte_member.so.24.0 00:01:59.060 [648/707] Linking target lib/librte_pdcp.so.24.0 00:01:59.060 [649/707] Linking target lib/librte_ipsec.so.24.0 00:01:59.319 [650/707] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:01:59.319 [651/707] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:02:00.255 [652/707] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.255 [653/707] Linking target lib/librte_ethdev.so.24.0 00:02:00.255 [654/707] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:00.255 [655/707] Linking target lib/librte_gso.so.24.0 00:02:00.255 [656/707] Linking target lib/librte_gro.so.24.0 00:02:00.255 [657/707] Linking target lib/librte_power.so.24.0 00:02:00.255 [658/707] Linking target lib/librte_metrics.so.24.0 00:02:00.255 [659/707] Linking target lib/librte_bpf.so.24.0 00:02:00.255 [660/707] Linking target lib/librte_pcapng.so.24.0 00:02:00.255 [661/707] Linking target lib/librte_ip_frag.so.24.0 00:02:00.255 [662/707] Linking target lib/librte_eventdev.so.24.0 00:02:00.514 [663/707] Linking target drivers/librte_net_i40e.so.24.0 00:02:00.514 [664/707] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:02:00.514 [665/707] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:02:00.514 [666/707] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:02:00.514 [667/707] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:02:00.514 [668/707] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:02:00.514 [669/707] Linking target lib/librte_graph.so.24.0 00:02:00.514 [670/707] Linking target lib/librte_latencystats.so.24.0 00:02:00.514 [671/707] Linking target lib/librte_bitratestats.so.24.0 00:02:00.514 [672/707] Linking target lib/librte_pdump.so.24.0 00:02:00.514 [673/707] Linking target lib/librte_dispatcher.so.24.0 00:02:00.514 [674/707] Linking target lib/librte_port.so.24.0 00:02:00.773 [675/707] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:02:00.773 [676/707] Generating 
symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:02:00.773 [677/707] Linking target lib/librte_node.so.24.0 00:02:00.773 [678/707] Linking target lib/librte_table.so.24.0 00:02:00.773 [679/707] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:02:03.310 [680/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:03.310 [681/707] Linking static target lib/librte_pipeline.a 00:02:03.310 [682/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:03.310 [683/707] Linking static target lib/librte_vhost.a 00:02:03.570 [684/707] Linking target app/dpdk-dumpcap 00:02:03.570 [685/707] Linking target app/dpdk-test-acl 00:02:03.570 [686/707] Linking target app/dpdk-test-fib 00:02:03.570 [687/707] Linking target app/dpdk-test-pipeline 00:02:03.570 [688/707] Linking target app/dpdk-graph 00:02:03.570 [689/707] Linking target app/dpdk-test-flow-perf 00:02:03.570 [690/707] Linking target app/dpdk-proc-info 00:02:03.570 [691/707] Linking target app/dpdk-test-gpudev 00:02:03.570 [692/707] Linking target app/dpdk-test-regex 00:02:03.570 [693/707] Linking target app/dpdk-test-mldev 00:02:03.570 [694/707] Linking target app/dpdk-test-crypto-perf 00:02:03.570 [695/707] Linking target app/dpdk-test-compress-perf 00:02:03.570 [696/707] Linking target app/dpdk-test-eventdev 00:02:03.570 [697/707] Linking target app/dpdk-test-cmdline 00:02:03.570 [698/707] Linking target app/dpdk-test-sad 00:02:03.570 [699/707] Linking target app/dpdk-test-dma-perf 00:02:03.570 [700/707] Linking target app/dpdk-pdump 00:02:03.570 [701/707] Linking target app/dpdk-test-security-perf 00:02:03.570 [702/707] Linking target app/dpdk-test-bbdev 00:02:03.570 [703/707] Linking target app/dpdk-testpmd 00:02:04.950 [704/707] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.950 [705/707] Linking target lib/librte_vhost.so.24.0 00:02:08.244 [706/707] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.244 [707/707] Linking target lib/librte_pipeline.so.24.0 00:02:08.244 10:10:52 build_native_dpdk -- common/autobuild_common.sh@188 -- $ uname -s 00:02:08.244 10:10:52 build_native_dpdk -- common/autobuild_common.sh@188 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:08.244 10:10:52 build_native_dpdk -- common/autobuild_common.sh@201 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j96 install 00:02:08.244 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:08.244 [0/1] Installing files. 
00:02:08.244 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:02:08.244 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:08.244 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:08.244 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:08.244 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:08.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:08.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:08.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:08.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:08.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:08.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:08.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:08.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:08.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:08.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:08.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:08.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:08.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:08.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 
00:02:08.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:08.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:08.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:08.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:08.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:08.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:08.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:08.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:08.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:08.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:08.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:08.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:08.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:08.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:08.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:08.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:08.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:08.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:08.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:08.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:08.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:08.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:08.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:08.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:08.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:08.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:08.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:08.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:08.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:08.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:08.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:08.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:08.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:08.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:08.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:08.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:08.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:02:08.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:02:08.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:02:08.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:02:08.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:08.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:08.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:08.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:08.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:08.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:08.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:08.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:08.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:08.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:08.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:08.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:08.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:08.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:08.245 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:08.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:08.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:08.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:08.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:08.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:08.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:08.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:08.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:08.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:08.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:08.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:08.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:08.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:08.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:08.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:08.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:08.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:08.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:08.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:08.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:08.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:08.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:08.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:08.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:08.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:08.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:08.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:08.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:08.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:08.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:08.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:08.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:08.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:08.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 
00:02:08.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:08.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:08.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:08.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:08.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:08.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:08.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:08.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:08.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:08.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:08.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:08.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:08.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:08.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:08.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:08.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:08.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:08.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:08.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:08.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:08.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:08.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:08.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:08.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:08.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:08.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:08.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:08.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:08.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:08.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:08.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:08.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:08.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:08.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:08.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:08.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:08.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:08.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:08.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:02:08.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:08.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:08.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:08.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:08.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:08.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:08.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:08.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:08.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:08.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:08.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:08.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:08.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:08.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:08.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:08.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:08.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:08.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:08.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:08.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:08.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:08.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:08.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:08.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:08.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:08.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:08.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:08.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:08.247 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:08.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:08.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:08.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:08.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:08.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:08.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:08.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:08.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:08.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:08.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:08.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:08.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:08.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:08.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:08.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:08.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:08.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:08.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:08.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:08.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:08.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:08.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:08.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:08.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:08.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:08.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:08.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:08.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:08.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:08.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:08.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:08.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:08.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:08.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:08.247 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:08.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:08.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:08.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:08.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:08.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:08.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:08.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:08.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:08.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:08.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:08.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:08.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:08.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:08.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:08.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:08.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:08.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:08.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:08.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:08.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:08.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:08.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:08.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:08.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:08.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:08.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:08.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:08.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:08.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:08.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:08.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:08.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:08.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:08.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:08.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:08.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:08.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:08.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:08.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:08.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:08.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:08.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:08.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:02:08.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:08.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:08.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:08.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:08.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:08.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:08.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:08.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:08.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:08.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:08.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:08.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:08.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:08.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:08.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:08.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:08.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:08.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:08.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:08.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:08.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:08.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:08.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:08.248 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:08.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:08.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:08.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:08.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:08.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:08.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:08.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:08.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:08.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:08.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:08.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:08.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:08.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:08.248 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:08.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:08.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:08.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:08.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:08.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:08.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:08.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:08.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:08.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:08.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:08.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:08.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:08.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:08.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:08.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:08.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:08.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:08.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:08.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:08.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:08.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:08.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:08.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:08.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:08.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:08.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:08.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:08.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:08.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:08.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:08.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:08.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:08.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:08.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:08.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:08.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:08.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:08.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:08.249 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:08.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:08.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:08.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:08.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:08.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:08.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:08.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:08.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:08.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:08.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:08.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:08.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:08.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:08.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:08.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:08.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:08.249 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:08.250 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:08.250 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:08.250 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:08.250 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:08.250 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:08.250 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:08.250 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:08.250 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:08.250 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:08.250 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:08.250 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:08.250 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:08.250 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:08.250 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:08.250 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:08.250 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:08.250 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:08.250 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:08.250 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:08.250 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:08.250 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:02:08.250 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:08.250 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:08.250 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:08.250 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:08.250 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:08.250 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:08.250 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:08.250 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:08.250 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:08.250 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:08.250 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:08.250 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:08.250 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:08.250 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:08.250 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.250 Installing lib/librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.250 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.250 Installing lib/librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.250 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.250 Installing lib/librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.250 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.250 Installing lib/librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.250 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.250 Installing lib/librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.250 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.250 Installing lib/librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.250 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.250 Installing lib/librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.250 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.250 Installing lib/librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.250 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.250 Installing lib/librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.250 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.250 Installing lib/librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.250 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.250 Installing lib/librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.250 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.250 Installing lib/librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.250 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.250 Installing lib/librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.250 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.250 Installing lib/librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.250 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.250 Installing lib/librte_hash.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.250 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.250 Installing lib/librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.250 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.250 Installing lib/librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.250 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.250 Installing lib/librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.250 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.250 Installing lib/librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.250 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.250 Installing lib/librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.250 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.250 Installing lib/librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.250 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.250 Installing lib/librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.250 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.250 Installing lib/librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.250 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.250 Installing lib/librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.250 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.250 Installing lib/librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.250 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.250 Installing lib/librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.250 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.250 Installing lib/librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.250 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.250 Installing lib/librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.250 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.250 Installing lib/librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.250 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.250 Installing lib/librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.250 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.250 Installing lib/librte_gso.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.250 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.250 Installing lib/librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.251 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.251 Installing lib/librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.251 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.251 Installing lib/librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.251 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.251 Installing lib/librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.251 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.251 Installing lib/librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.251 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.251 Installing lib/librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.251 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.251 Installing lib/librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.251 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.251 Installing lib/librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.251 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.251 Installing lib/librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.251 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.251 Installing lib/librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.251 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.251 Installing lib/librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.251 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.251 Installing lib/librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.251 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.251 Installing lib/librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.251 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.251 Installing lib/librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.251 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.251 Installing lib/librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.251 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.251 Installing lib/librte_vhost.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.251 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.251 Installing lib/librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.251 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.251 Installing lib/librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.251 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.251 Installing lib/librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.251 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.251 Installing lib/librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.251 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.251 Installing lib/librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.251 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.251 Installing lib/librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.251 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.251 Installing lib/librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.251 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.251 Installing lib/librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.251 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.251 Installing lib/librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.251 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.251 Installing drivers/librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:08.251 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.251 Installing drivers/librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:08.251 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.251 Installing drivers/librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:08.251 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.251 Installing drivers/librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:08.251 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:08.251 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:08.251 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:08.251 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:08.251 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:08.251 Installing 
app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:08.251 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:08.251 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:08.251 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:08.251 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:08.251 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:08.251 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:08.251 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:08.251 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:08.251 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:08.251 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:08.251 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:08.251 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:08.251 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:08.251 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:08.251 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.251 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.251 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.251 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.251 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:08.251 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:08.251 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:08.251 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:08.251 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:08.251 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:08.251 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:08.251 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:08.251 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:08.251 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:08.251 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:08.251 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:08.251 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.251 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.251 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.251 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.251 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.251 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.251 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.251 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.251 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.251 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.251 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.251 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.251 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.251 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.251 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.251 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.252 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.253 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.516 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:02:08.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.518 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:08.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:08.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:08.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:08.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:08.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:08.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:08.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:08.518 Installing symlink pointing to librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.24 00:02:08.518 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so 00:02:08.518 Installing symlink pointing to librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:02:08.518 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:02:08.518 Installing symlink pointing to librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:02:08.518 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:02:08.518 Installing symlink pointing to librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:02:08.518 Installing symlink pointing to librte_eal.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:08.518 Installing symlink pointing to librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.24 00:02:08.518 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:08.518 Installing symlink pointing to librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:02:08.518 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:08.518 Installing symlink pointing to librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:02:08.518 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:02:08.518 Installing symlink pointing to librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:02:08.518 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:08.518 Installing symlink pointing to librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.24 00:02:08.518 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:02:08.518 Installing symlink pointing to librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.24 00:02:08.518 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:08.518 Installing symlink pointing to librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:02:08.518 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:08.518 Installing symlink pointing to librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.24 00:02:08.518 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:08.518 Installing symlink pointing to librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:02:08.518 Installing symlink pointing to librte_cmdline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:08.518 Installing symlink pointing to librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:02:08.518 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:08.518 Installing symlink pointing to librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:02:08.518 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:08.518 Installing symlink pointing to librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:02:08.518 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:08.518 
Installing symlink pointing to librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:02:08.518 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:02:08.518 Installing symlink pointing to librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:02:08.518 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:08.518 Installing symlink pointing to librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:02:08.518 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:08.518 Installing symlink pointing to librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:02:08.518 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:02:08.518 Installing symlink pointing to librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:02:08.518 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:02:08.518 Installing symlink pointing to librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 00:02:08.518 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:08.518 Installing symlink pointing to librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:02:08.518 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:08.518 Installing symlink pointing to librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:02:08.518 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:08.518 Installing symlink pointing to librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:02:08.518 Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:08.518 Installing symlink pointing to librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:02:08.518 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:08.518 Installing symlink pointing to librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:02:08.518 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:08.518 Installing symlink pointing to librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:02:08.518 Installing symlink pointing to librte_dispatcher.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:02:08.519 Installing symlink pointing to librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:02:08.519 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:08.519 Installing symlink pointing to librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:02:08.519 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:08.519 Installing symlink pointing to librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:02:08.519 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:08.519 Installing symlink pointing to librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:02:08.519 Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:08.519 Installing symlink pointing to librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:02:08.519 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:08.519 Installing symlink pointing to librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:02:08.519 Installing symlink pointing to librte_latencystats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:08.519 Installing symlink pointing to librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:02:08.519 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:08.519 Installing symlink pointing to librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.24 00:02:08.519 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:02:08.519 Installing symlink pointing to librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:02:08.519 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:08.519 Installing symlink pointing to librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.24 00:02:08.519 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:02:08.519 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:02:08.519 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:02:08.519 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:02:08.519 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:02:08.519 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:02:08.519 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:02:08.519 './librte_mempool_ring.so' -> 
'dpdk/pmds-24.0/librte_mempool_ring.so' 00:02:08.519 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:02:08.519 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:02:08.519 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:02:08.519 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:02:08.519 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:02:08.519 Installing symlink pointing to librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:02:08.519 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:08.519 Installing symlink pointing to librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:02:08.519 Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:08.519 Installing symlink pointing to librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:02:08.519 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so 00:02:08.519 Installing symlink pointing to librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:02:08.519 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:08.519 Installing symlink pointing to librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:02:08.519 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:08.519 Installing symlink pointing to librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:02:08.519 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:08.519 Installing symlink pointing to librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.24 00:02:08.519 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:02:08.519 Installing symlink pointing to librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:02:08.519 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:08.519 Installing symlink pointing to librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:02:08.519 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:08.519 Installing symlink pointing to librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:02:08.519 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:08.519 Installing symlink pointing to librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:02:08.519 Installing 
symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:02:08.519 Installing symlink pointing to librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:02:08.519 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:08.519 Installing symlink pointing to librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.24 00:02:08.519 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:02:08.519 Installing symlink pointing to librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.24 00:02:08.519 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:02:08.519 Installing symlink pointing to librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.24 00:02:08.519 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:02:08.519 Installing symlink pointing to librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 00:02:08.519 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:08.519 Installing symlink pointing to librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:02:08.519 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:08.519 Installing symlink pointing to librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.24 00:02:08.519 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:02:08.519 Installing symlink pointing to librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:02:08.519 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:02:08.519 Installing symlink pointing to librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:02:08.519 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:02:08.519 Installing symlink pointing to librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:02:08.519 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:02:08.519 Installing symlink pointing to librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:02:08.519 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:02:08.519 Running custom install script '/bin/sh 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:02:08.519 10:10:53 build_native_dpdk -- common/autobuild_common.sh@207 -- $ cat 00:02:08.519 10:10:53 build_native_dpdk -- common/autobuild_common.sh@212 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:08.519 00:02:08.519 real 0m27.861s 00:02:08.519 user 8m27.658s 00:02:08.519 sys 1m56.317s 00:02:08.519 10:10:53 build_native_dpdk -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:08.519 10:10:53 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:08.519 ************************************ 00:02:08.519 END TEST build_native_dpdk 00:02:08.519 ************************************ 00:02:08.519 10:10:53 -- common/autotest_common.sh@1142 -- $ return 0 00:02:08.519 10:10:53 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:08.519 10:10:53 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:08.519 10:10:53 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:08.519 10:10:53 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:08.519 10:10:53 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:08.519 10:10:53 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:08.519 10:10:53 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:08.519 10:10:53 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:02:08.519 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:02:08.778 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.778 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.778 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:09.346 Using 'verbs' RDMA provider 00:02:22.169 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:34.382 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:34.382 Creating mk/config.mk...done. 00:02:34.382 Creating mk/cc.flags.mk...done. 00:02:34.382 Type 'make' to build. 00:02:34.382 10:11:18 -- spdk/autobuild.sh@69 -- $ run_test make make -j96 00:02:34.382 10:11:18 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:34.382 10:11:18 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:34.382 10:11:18 -- common/autotest_common.sh@10 -- $ set +x 00:02:34.382 ************************************ 00:02:34.382 START TEST make 00:02:34.382 ************************************ 00:02:34.382 10:11:18 make -- common/autotest_common.sh@1123 -- $ make -j96 00:02:34.382 make[1]: Nothing to be done for 'all'. 
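The configure step above reports "Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs", which is the directory where libdpdk.pc and libdpdk-libs.pc were installed a few entries earlier; that is how this job resolves compiler and linker flags for the freshly staged DPDK rather than a system copy. A minimal sketch of the same lookup, assuming only that pkg-config is on the build host's PATH (these commands are illustrative and are not part of the recorded CI run):

  # Point pkg-config at the staged DPDK install tree recorded in the log above.
  export PKG_CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig

  pkg-config --modversion libdpdk   # version of the staged DPDK build
  pkg-config --cflags libdpdk       # include flags (e.g. pointing at .../dpdk/build/include)
  pkg-config --libs libdpdk         # linker flags for the librte_* shared objects installed above
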
00:02:35.326 The Meson build system 00:02:35.326 Version: 1.3.1 00:02:35.326 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:02:35.326 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:35.326 Build type: native build 00:02:35.326 Project name: libvfio-user 00:02:35.326 Project version: 0.0.1 00:02:35.326 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:35.326 C linker for the host machine: gcc ld.bfd 2.39-16 00:02:35.326 Host machine cpu family: x86_64 00:02:35.326 Host machine cpu: x86_64 00:02:35.326 Run-time dependency threads found: YES 00:02:35.326 Library dl found: YES 00:02:35.326 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:35.326 Run-time dependency json-c found: YES 0.17 00:02:35.326 Run-time dependency cmocka found: YES 1.1.7 00:02:35.326 Program pytest-3 found: NO 00:02:35.326 Program flake8 found: NO 00:02:35.326 Program misspell-fixer found: NO 00:02:35.326 Program restructuredtext-lint found: NO 00:02:35.326 Program valgrind found: YES (/usr/bin/valgrind) 00:02:35.326 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:35.326 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:35.326 Compiler for C supports arguments -Wwrite-strings: YES 00:02:35.327 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:35.327 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:35.327 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:35.327 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:35.327 Build targets in project: 8 00:02:35.327 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:35.327 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:35.327 00:02:35.327 libvfio-user 0.0.1 00:02:35.327 00:02:35.327 User defined options 00:02:35.327 buildtype : debug 00:02:35.327 default_library: shared 00:02:35.327 libdir : /usr/local/lib 00:02:35.327 00:02:35.327 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:35.891 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:35.891 [1/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:35.891 [2/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:35.891 [3/37] Compiling C object samples/null.p/null.c.o 00:02:35.891 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:35.891 [5/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:35.891 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:35.891 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:35.891 [8/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:35.891 [9/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:35.891 [10/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:35.891 [11/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:35.891 [12/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:35.891 [13/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:35.891 [14/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:35.891 [15/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:36.148 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:36.148 [17/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:36.148 [18/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:36.148 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:36.148 [20/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:36.148 [21/37] Compiling C object samples/server.p/server.c.o 00:02:36.148 [22/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:36.148 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:36.148 [24/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:36.148 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:36.148 [26/37] Compiling C object samples/client.p/client.c.o 00:02:36.148 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:36.148 [28/37] Linking target samples/client 00:02:36.148 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:02:36.148 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:36.148 [31/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:36.148 [32/37] Linking target test/unit_tests 00:02:36.148 [33/37] Linking target samples/null 00:02:36.148 [34/37] Linking target samples/gpio-pci-idio-16 00:02:36.406 [35/37] Linking target samples/lspci 00:02:36.406 [36/37] Linking target samples/server 00:02:36.406 [37/37] Linking target samples/shadow_ioeventfd_server 00:02:36.406 INFO: autodetecting backend as ninja 00:02:36.406 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:02:36.406 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:36.665 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:36.665 ninja: no work to do. 00:02:44.780 CC lib/log/log.o 00:02:44.780 CC lib/log/log_flags.o 00:02:44.780 CC lib/log/log_deprecated.o 00:02:44.780 CC lib/ut_mock/mock.o 00:02:44.780 CC lib/ut/ut.o 00:02:45.039 LIB libspdk_log.a 00:02:45.039 LIB libspdk_ut.a 00:02:45.039 LIB libspdk_ut_mock.a 00:02:45.039 SO libspdk_log.so.7.0 00:02:45.039 SO libspdk_ut.so.2.0 00:02:45.039 SO libspdk_ut_mock.so.6.0 00:02:45.039 SYMLINK libspdk_ut_mock.so 00:02:45.039 SYMLINK libspdk_ut.so 00:02:45.039 SYMLINK libspdk_log.so 00:02:45.298 CC lib/dma/dma.o 00:02:45.298 CC lib/ioat/ioat.o 00:02:45.298 CC lib/util/base64.o 00:02:45.298 CC lib/util/bit_array.o 00:02:45.298 CC lib/util/cpuset.o 00:02:45.298 CC lib/util/crc16.o 00:02:45.298 CXX lib/trace_parser/trace.o 00:02:45.298 CC lib/util/crc32.o 00:02:45.298 CC lib/util/crc32c.o 00:02:45.298 CC lib/util/crc32_ieee.o 00:02:45.298 CC lib/util/crc64.o 00:02:45.298 CC lib/util/dif.o 00:02:45.298 CC lib/util/fd.o 00:02:45.298 CC lib/util/file.o 00:02:45.298 CC lib/util/hexlify.o 00:02:45.298 CC lib/util/iov.o 00:02:45.298 CC lib/util/math.o 00:02:45.298 CC lib/util/pipe.o 00:02:45.298 CC lib/util/strerror_tls.o 00:02:45.298 CC lib/util/string.o 00:02:45.298 CC lib/util/uuid.o 00:02:45.298 CC lib/util/fd_group.o 00:02:45.298 CC lib/util/xor.o 00:02:45.298 CC lib/util/zipf.o 00:02:45.558 CC lib/vfio_user/host/vfio_user_pci.o 00:02:45.558 CC lib/vfio_user/host/vfio_user.o 00:02:45.558 LIB libspdk_dma.a 00:02:45.558 SO libspdk_dma.so.4.0 00:02:45.558 LIB libspdk_ioat.a 00:02:45.558 SYMLINK libspdk_dma.so 00:02:45.558 SO libspdk_ioat.so.7.0 00:02:45.558 SYMLINK libspdk_ioat.so 00:02:45.817 LIB libspdk_vfio_user.a 00:02:45.817 SO libspdk_vfio_user.so.5.0 00:02:45.817 LIB libspdk_util.a 00:02:45.817 SYMLINK libspdk_vfio_user.so 00:02:45.817 SO libspdk_util.so.9.1 00:02:46.077 SYMLINK libspdk_util.so 00:02:46.077 LIB libspdk_trace_parser.a 00:02:46.077 SO libspdk_trace_parser.so.5.0 00:02:46.077 SYMLINK libspdk_trace_parser.so 00:02:46.336 CC lib/conf/conf.o 00:02:46.336 CC lib/idxd/idxd.o 00:02:46.336 CC lib/idxd/idxd_user.o 00:02:46.336 CC lib/idxd/idxd_kernel.o 00:02:46.336 CC lib/rdma_utils/rdma_utils.o 00:02:46.336 CC lib/json/json_parse.o 00:02:46.336 CC lib/vmd/vmd.o 00:02:46.336 CC lib/json/json_util.o 00:02:46.336 CC lib/vmd/led.o 00:02:46.336 CC lib/json/json_write.o 00:02:46.336 CC lib/env_dpdk/env.o 00:02:46.336 CC lib/rdma_provider/common.o 00:02:46.336 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:46.336 CC lib/env_dpdk/memory.o 00:02:46.336 CC lib/env_dpdk/pci.o 00:02:46.336 CC lib/env_dpdk/init.o 00:02:46.336 CC lib/env_dpdk/threads.o 00:02:46.336 CC lib/env_dpdk/pci_ioat.o 00:02:46.336 CC lib/env_dpdk/pci_virtio.o 00:02:46.336 CC lib/env_dpdk/pci_vmd.o 00:02:46.336 CC lib/env_dpdk/pci_idxd.o 00:02:46.337 CC lib/env_dpdk/pci_event.o 00:02:46.337 CC lib/env_dpdk/sigbus_handler.o 00:02:46.337 CC lib/env_dpdk/pci_dpdk.o 00:02:46.337 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:46.337 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:46.596 LIB libspdk_conf.a 00:02:46.596 LIB libspdk_rdma_provider.a 00:02:46.596 LIB libspdk_rdma_utils.a 00:02:46.596 SO libspdk_conf.so.6.0 00:02:46.596 SO libspdk_rdma_provider.so.6.0 00:02:46.596 SO 
libspdk_rdma_utils.so.1.0 00:02:46.596 LIB libspdk_json.a 00:02:46.596 SYMLINK libspdk_conf.so 00:02:46.596 SYMLINK libspdk_rdma_provider.so 00:02:46.596 SO libspdk_json.so.6.0 00:02:46.596 SYMLINK libspdk_rdma_utils.so 00:02:46.596 SYMLINK libspdk_json.so 00:02:46.856 LIB libspdk_idxd.a 00:02:46.856 SO libspdk_idxd.so.12.0 00:02:46.856 LIB libspdk_vmd.a 00:02:46.856 SO libspdk_vmd.so.6.0 00:02:46.856 SYMLINK libspdk_idxd.so 00:02:46.856 SYMLINK libspdk_vmd.so 00:02:46.856 CC lib/jsonrpc/jsonrpc_server.o 00:02:46.856 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:46.856 CC lib/jsonrpc/jsonrpc_client.o 00:02:46.856 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:47.115 LIB libspdk_jsonrpc.a 00:02:47.115 SO libspdk_jsonrpc.so.6.0 00:02:47.374 SYMLINK libspdk_jsonrpc.so 00:02:47.374 LIB libspdk_env_dpdk.a 00:02:47.374 SO libspdk_env_dpdk.so.14.1 00:02:47.374 SYMLINK libspdk_env_dpdk.so 00:02:47.633 CC lib/rpc/rpc.o 00:02:47.893 LIB libspdk_rpc.a 00:02:47.893 SO libspdk_rpc.so.6.0 00:02:47.893 SYMLINK libspdk_rpc.so 00:02:48.153 CC lib/trace/trace.o 00:02:48.153 CC lib/trace/trace_flags.o 00:02:48.153 CC lib/notify/notify.o 00:02:48.153 CC lib/keyring/keyring.o 00:02:48.153 CC lib/trace/trace_rpc.o 00:02:48.153 CC lib/keyring/keyring_rpc.o 00:02:48.153 CC lib/notify/notify_rpc.o 00:02:48.413 LIB libspdk_notify.a 00:02:48.413 SO libspdk_notify.so.6.0 00:02:48.413 LIB libspdk_keyring.a 00:02:48.413 LIB libspdk_trace.a 00:02:48.413 SO libspdk_keyring.so.1.0 00:02:48.413 SO libspdk_trace.so.10.0 00:02:48.413 SYMLINK libspdk_notify.so 00:02:48.413 SYMLINK libspdk_keyring.so 00:02:48.413 SYMLINK libspdk_trace.so 00:02:48.983 CC lib/thread/thread.o 00:02:48.983 CC lib/thread/iobuf.o 00:02:48.983 CC lib/sock/sock.o 00:02:48.983 CC lib/sock/sock_rpc.o 00:02:49.334 LIB libspdk_sock.a 00:02:49.334 SO libspdk_sock.so.10.0 00:02:49.334 SYMLINK libspdk_sock.so 00:02:49.593 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:49.593 CC lib/nvme/nvme_ctrlr.o 00:02:49.593 CC lib/nvme/nvme_fabric.o 00:02:49.593 CC lib/nvme/nvme_ns_cmd.o 00:02:49.593 CC lib/nvme/nvme_ns.o 00:02:49.593 CC lib/nvme/nvme_pcie_common.o 00:02:49.593 CC lib/nvme/nvme_pcie.o 00:02:49.593 CC lib/nvme/nvme_qpair.o 00:02:49.593 CC lib/nvme/nvme.o 00:02:49.593 CC lib/nvme/nvme_quirks.o 00:02:49.593 CC lib/nvme/nvme_transport.o 00:02:49.593 CC lib/nvme/nvme_discovery.o 00:02:49.593 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:49.593 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:49.593 CC lib/nvme/nvme_tcp.o 00:02:49.593 CC lib/nvme/nvme_opal.o 00:02:49.593 CC lib/nvme/nvme_io_msg.o 00:02:49.593 CC lib/nvme/nvme_poll_group.o 00:02:49.593 CC lib/nvme/nvme_zns.o 00:02:49.593 CC lib/nvme/nvme_stubs.o 00:02:49.593 CC lib/nvme/nvme_auth.o 00:02:49.593 CC lib/nvme/nvme_cuse.o 00:02:49.593 CC lib/nvme/nvme_vfio_user.o 00:02:49.593 CC lib/nvme/nvme_rdma.o 00:02:49.852 LIB libspdk_thread.a 00:02:49.852 SO libspdk_thread.so.10.1 00:02:49.852 SYMLINK libspdk_thread.so 00:02:50.111 CC lib/init/json_config.o 00:02:50.111 CC lib/blob/blobstore.o 00:02:50.111 CC lib/init/subsystem_rpc.o 00:02:50.111 CC lib/init/subsystem.o 00:02:50.111 CC lib/virtio/virtio.o 00:02:50.111 CC lib/blob/request.o 00:02:50.111 CC lib/virtio/virtio_vhost_user.o 00:02:50.111 CC lib/init/rpc.o 00:02:50.111 CC lib/blob/blob_bs_dev.o 00:02:50.111 CC lib/blob/zeroes.o 00:02:50.111 CC lib/virtio/virtio_vfio_user.o 00:02:50.111 CC lib/accel/accel_rpc.o 00:02:50.111 CC lib/accel/accel.o 00:02:50.111 CC lib/virtio/virtio_pci.o 00:02:50.111 CC lib/accel/accel_sw.o 00:02:50.111 CC lib/vfu_tgt/tgt_rpc.o 00:02:50.111 CC 
lib/vfu_tgt/tgt_endpoint.o 00:02:50.369 LIB libspdk_init.a 00:02:50.369 SO libspdk_init.so.5.0 00:02:50.369 LIB libspdk_vfu_tgt.a 00:02:50.627 LIB libspdk_virtio.a 00:02:50.627 SO libspdk_vfu_tgt.so.3.0 00:02:50.627 SYMLINK libspdk_init.so 00:02:50.627 SO libspdk_virtio.so.7.0 00:02:50.627 SYMLINK libspdk_vfu_tgt.so 00:02:50.627 SYMLINK libspdk_virtio.so 00:02:50.887 CC lib/event/app.o 00:02:50.887 CC lib/event/reactor.o 00:02:50.887 CC lib/event/log_rpc.o 00:02:50.887 CC lib/event/app_rpc.o 00:02:50.887 CC lib/event/scheduler_static.o 00:02:50.887 LIB libspdk_accel.a 00:02:50.887 SO libspdk_accel.so.15.1 00:02:51.147 SYMLINK libspdk_accel.so 00:02:51.147 LIB libspdk_event.a 00:02:51.147 LIB libspdk_nvme.a 00:02:51.147 SO libspdk_event.so.14.0 00:02:51.147 SO libspdk_nvme.so.13.1 00:02:51.147 SYMLINK libspdk_event.so 00:02:51.406 CC lib/bdev/bdev.o 00:02:51.406 CC lib/bdev/bdev_rpc.o 00:02:51.406 CC lib/bdev/bdev_zone.o 00:02:51.406 CC lib/bdev/part.o 00:02:51.406 CC lib/bdev/scsi_nvme.o 00:02:51.406 SYMLINK libspdk_nvme.so 00:02:52.343 LIB libspdk_blob.a 00:02:52.343 SO libspdk_blob.so.11.0 00:02:52.343 SYMLINK libspdk_blob.so 00:02:52.602 CC lib/blobfs/blobfs.o 00:02:52.602 CC lib/blobfs/tree.o 00:02:52.602 CC lib/lvol/lvol.o 00:02:53.169 LIB libspdk_bdev.a 00:02:53.169 SO libspdk_bdev.so.15.1 00:02:53.169 SYMLINK libspdk_bdev.so 00:02:53.429 LIB libspdk_blobfs.a 00:02:53.429 SO libspdk_blobfs.so.10.0 00:02:53.429 LIB libspdk_lvol.a 00:02:53.429 SO libspdk_lvol.so.10.0 00:02:53.429 SYMLINK libspdk_blobfs.so 00:02:53.429 SYMLINK libspdk_lvol.so 00:02:53.691 CC lib/ftl/ftl_core.o 00:02:53.691 CC lib/ftl/ftl_init.o 00:02:53.691 CC lib/ftl/ftl_layout.o 00:02:53.691 CC lib/ublk/ublk.o 00:02:53.691 CC lib/ftl/ftl_debug.o 00:02:53.691 CC lib/ublk/ublk_rpc.o 00:02:53.691 CC lib/ftl/ftl_io.o 00:02:53.691 CC lib/ftl/ftl_sb.o 00:02:53.691 CC lib/ftl/ftl_l2p.o 00:02:53.691 CC lib/nbd/nbd.o 00:02:53.691 CC lib/ftl/ftl_l2p_flat.o 00:02:53.691 CC lib/scsi/dev.o 00:02:53.691 CC lib/nvmf/ctrlr.o 00:02:53.691 CC lib/nbd/nbd_rpc.o 00:02:53.691 CC lib/ftl/ftl_nv_cache.o 00:02:53.691 CC lib/scsi/lun.o 00:02:53.691 CC lib/nvmf/ctrlr_discovery.o 00:02:53.691 CC lib/ftl/ftl_band.o 00:02:53.691 CC lib/scsi/port.o 00:02:53.691 CC lib/nvmf/ctrlr_bdev.o 00:02:53.691 CC lib/scsi/scsi.o 00:02:53.691 CC lib/nvmf/subsystem.o 00:02:53.691 CC lib/ftl/ftl_band_ops.o 00:02:53.691 CC lib/ftl/ftl_writer.o 00:02:53.691 CC lib/scsi/scsi_pr.o 00:02:53.691 CC lib/scsi/scsi_bdev.o 00:02:53.691 CC lib/nvmf/nvmf_rpc.o 00:02:53.691 CC lib/nvmf/nvmf.o 00:02:53.691 CC lib/ftl/ftl_rq.o 00:02:53.691 CC lib/scsi/task.o 00:02:53.691 CC lib/nvmf/transport.o 00:02:53.691 CC lib/ftl/ftl_l2p_cache.o 00:02:53.691 CC lib/ftl/ftl_reloc.o 00:02:53.691 CC lib/scsi/scsi_rpc.o 00:02:53.691 CC lib/nvmf/tcp.o 00:02:53.691 CC lib/ftl/ftl_p2l.o 00:02:53.691 CC lib/nvmf/stubs.o 00:02:53.691 CC lib/ftl/mngt/ftl_mngt.o 00:02:53.691 CC lib/nvmf/mdns_server.o 00:02:53.691 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:53.691 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:53.691 CC lib/nvmf/vfio_user.o 00:02:53.691 CC lib/nvmf/rdma.o 00:02:53.691 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:53.691 CC lib/nvmf/auth.o 00:02:53.691 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:53.691 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:53.691 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:53.691 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:53.691 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:53.691 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:53.691 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:53.691 CC lib/ftl/mngt/ftl_mngt_recovery.o 
00:02:53.691 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:53.691 CC lib/ftl/utils/ftl_conf.o 00:02:53.691 CC lib/ftl/utils/ftl_md.o 00:02:53.691 CC lib/ftl/utils/ftl_mempool.o 00:02:53.691 CC lib/ftl/utils/ftl_bitmap.o 00:02:53.691 CC lib/ftl/utils/ftl_property.o 00:02:53.691 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:53.691 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:53.691 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:53.691 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:53.691 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:53.691 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:53.691 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:53.691 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:53.691 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:53.691 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:53.691 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:53.691 CC lib/ftl/base/ftl_base_bdev.o 00:02:53.691 CC lib/ftl/ftl_trace.o 00:02:53.691 CC lib/ftl/base/ftl_base_dev.o 00:02:53.949 LIB libspdk_nbd.a 00:02:54.208 SO libspdk_nbd.so.7.0 00:02:54.208 SYMLINK libspdk_nbd.so 00:02:54.208 LIB libspdk_scsi.a 00:02:54.208 SO libspdk_scsi.so.9.0 00:02:54.208 SYMLINK libspdk_scsi.so 00:02:54.466 LIB libspdk_ublk.a 00:02:54.466 SO libspdk_ublk.so.3.0 00:02:54.466 SYMLINK libspdk_ublk.so 00:02:54.724 LIB libspdk_ftl.a 00:02:54.724 CC lib/iscsi/conn.o 00:02:54.724 CC lib/iscsi/init_grp.o 00:02:54.724 CC lib/iscsi/iscsi.o 00:02:54.724 CC lib/iscsi/md5.o 00:02:54.724 CC lib/iscsi/param.o 00:02:54.724 CC lib/iscsi/portal_grp.o 00:02:54.724 CC lib/iscsi/tgt_node.o 00:02:54.724 CC lib/vhost/vhost.o 00:02:54.724 CC lib/iscsi/iscsi_subsystem.o 00:02:54.724 CC lib/iscsi/iscsi_rpc.o 00:02:54.724 CC lib/vhost/vhost_rpc.o 00:02:54.724 CC lib/iscsi/task.o 00:02:54.724 CC lib/vhost/vhost_scsi.o 00:02:54.724 CC lib/vhost/vhost_blk.o 00:02:54.724 CC lib/vhost/rte_vhost_user.o 00:02:54.724 SO libspdk_ftl.so.9.0 00:02:54.982 SYMLINK libspdk_ftl.so 00:02:55.240 LIB libspdk_nvmf.a 00:02:55.240 SO libspdk_nvmf.so.18.1 00:02:55.499 LIB libspdk_vhost.a 00:02:55.499 SO libspdk_vhost.so.8.0 00:02:55.499 SYMLINK libspdk_nvmf.so 00:02:55.499 SYMLINK libspdk_vhost.so 00:02:55.499 LIB libspdk_iscsi.a 00:02:55.757 SO libspdk_iscsi.so.8.0 00:02:55.757 SYMLINK libspdk_iscsi.so 00:02:56.324 CC module/env_dpdk/env_dpdk_rpc.o 00:02:56.324 CC module/vfu_device/vfu_virtio_blk.o 00:02:56.324 CC module/vfu_device/vfu_virtio.o 00:02:56.324 CC module/vfu_device/vfu_virtio_scsi.o 00:02:56.324 CC module/vfu_device/vfu_virtio_rpc.o 00:02:56.324 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:56.324 CC module/accel/dsa/accel_dsa.o 00:02:56.324 CC module/accel/ioat/accel_ioat.o 00:02:56.324 CC module/accel/ioat/accel_ioat_rpc.o 00:02:56.324 CC module/keyring/linux/keyring.o 00:02:56.324 CC module/accel/dsa/accel_dsa_rpc.o 00:02:56.324 CC module/keyring/linux/keyring_rpc.o 00:02:56.324 CC module/accel/error/accel_error.o 00:02:56.324 LIB libspdk_env_dpdk_rpc.a 00:02:56.324 CC module/keyring/file/keyring_rpc.o 00:02:56.324 CC module/accel/error/accel_error_rpc.o 00:02:56.324 CC module/keyring/file/keyring.o 00:02:56.324 CC module/blob/bdev/blob_bdev.o 00:02:56.324 CC module/sock/posix/posix.o 00:02:56.324 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:56.324 CC module/accel/iaa/accel_iaa.o 00:02:56.324 CC module/accel/iaa/accel_iaa_rpc.o 00:02:56.324 CC module/scheduler/gscheduler/gscheduler.o 00:02:56.582 SO libspdk_env_dpdk_rpc.so.6.0 00:02:56.582 SYMLINK libspdk_env_dpdk_rpc.so 00:02:56.582 LIB libspdk_keyring_file.a 00:02:56.582 LIB libspdk_keyring_linux.a 00:02:56.582 LIB 
libspdk_accel_ioat.a 00:02:56.582 LIB libspdk_scheduler_dpdk_governor.a 00:02:56.582 LIB libspdk_scheduler_dynamic.a 00:02:56.582 LIB libspdk_scheduler_gscheduler.a 00:02:56.582 SO libspdk_keyring_file.so.1.0 00:02:56.582 SO libspdk_accel_ioat.so.6.0 00:02:56.582 LIB libspdk_accel_error.a 00:02:56.582 SO libspdk_keyring_linux.so.1.0 00:02:56.582 SO libspdk_scheduler_dynamic.so.4.0 00:02:56.582 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:56.582 SO libspdk_scheduler_gscheduler.so.4.0 00:02:56.582 LIB libspdk_accel_iaa.a 00:02:56.582 LIB libspdk_blob_bdev.a 00:02:56.582 SO libspdk_accel_error.so.2.0 00:02:56.582 LIB libspdk_accel_dsa.a 00:02:56.582 SYMLINK libspdk_keyring_file.so 00:02:56.582 SYMLINK libspdk_accel_ioat.so 00:02:56.582 SO libspdk_accel_iaa.so.3.0 00:02:56.582 SYMLINK libspdk_scheduler_dynamic.so 00:02:56.582 SYMLINK libspdk_scheduler_gscheduler.so 00:02:56.582 SYMLINK libspdk_keyring_linux.so 00:02:56.582 SO libspdk_blob_bdev.so.11.0 00:02:56.582 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:56.582 SO libspdk_accel_dsa.so.5.0 00:02:56.840 SYMLINK libspdk_accel_error.so 00:02:56.840 SYMLINK libspdk_blob_bdev.so 00:02:56.840 SYMLINK libspdk_accel_iaa.so 00:02:56.840 SYMLINK libspdk_accel_dsa.so 00:02:56.840 LIB libspdk_vfu_device.a 00:02:56.840 SO libspdk_vfu_device.so.3.0 00:02:56.840 SYMLINK libspdk_vfu_device.so 00:02:57.099 LIB libspdk_sock_posix.a 00:02:57.099 SO libspdk_sock_posix.so.6.0 00:02:57.099 SYMLINK libspdk_sock_posix.so 00:02:57.099 CC module/bdev/delay/vbdev_delay.o 00:02:57.099 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:57.099 CC module/bdev/gpt/vbdev_gpt.o 00:02:57.099 CC module/bdev/gpt/gpt.o 00:02:57.099 CC module/bdev/raid/bdev_raid.o 00:02:57.099 CC module/bdev/null/bdev_null.o 00:02:57.099 CC module/bdev/null/bdev_null_rpc.o 00:02:57.099 CC module/bdev/raid/bdev_raid_rpc.o 00:02:57.099 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:57.099 CC module/bdev/ftl/bdev_ftl.o 00:02:57.099 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:57.099 CC module/bdev/error/vbdev_error.o 00:02:57.099 CC module/bdev/raid/raid0.o 00:02:57.099 CC module/blobfs/bdev/blobfs_bdev.o 00:02:57.099 CC module/bdev/raid/bdev_raid_sb.o 00:02:57.099 CC module/bdev/error/vbdev_error_rpc.o 00:02:57.099 CC module/bdev/raid/raid1.o 00:02:57.099 CC module/bdev/raid/concat.o 00:02:57.099 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:57.099 CC module/bdev/passthru/vbdev_passthru.o 00:02:57.099 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:57.099 CC module/bdev/nvme/bdev_nvme.o 00:02:57.099 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:57.099 CC module/bdev/lvol/vbdev_lvol.o 00:02:57.099 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:57.099 CC module/bdev/nvme/nvme_rpc.o 00:02:57.099 CC module/bdev/malloc/bdev_malloc.o 00:02:57.099 CC module/bdev/nvme/bdev_mdns_client.o 00:02:57.099 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:57.099 CC module/bdev/nvme/vbdev_opal.o 00:02:57.099 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:57.099 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:57.099 CC module/bdev/split/vbdev_split.o 00:02:57.099 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:57.099 CC module/bdev/split/vbdev_split_rpc.o 00:02:57.099 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:57.099 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:57.099 CC module/bdev/iscsi/bdev_iscsi.o 00:02:57.099 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:57.099 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:57.099 CC module/bdev/aio/bdev_aio.o 00:02:57.099 CC module/bdev/aio/bdev_aio_rpc.o 00:02:57.357 LIB 
libspdk_blobfs_bdev.a 00:02:57.357 SO libspdk_blobfs_bdev.so.6.0 00:02:57.616 LIB libspdk_bdev_split.a 00:02:57.616 LIB libspdk_bdev_error.a 00:02:57.616 LIB libspdk_bdev_gpt.a 00:02:57.616 LIB libspdk_bdev_ftl.a 00:02:57.616 SO libspdk_bdev_split.so.6.0 00:02:57.616 SO libspdk_bdev_gpt.so.6.0 00:02:57.616 SO libspdk_bdev_error.so.6.0 00:02:57.616 LIB libspdk_bdev_null.a 00:02:57.616 SYMLINK libspdk_blobfs_bdev.so 00:02:57.616 LIB libspdk_bdev_passthru.a 00:02:57.616 SO libspdk_bdev_ftl.so.6.0 00:02:57.616 SO libspdk_bdev_null.so.6.0 00:02:57.616 SO libspdk_bdev_passthru.so.6.0 00:02:57.616 LIB libspdk_bdev_delay.a 00:02:57.616 SYMLINK libspdk_bdev_split.so 00:02:57.616 SYMLINK libspdk_bdev_gpt.so 00:02:57.616 SYMLINK libspdk_bdev_error.so 00:02:57.616 SO libspdk_bdev_delay.so.6.0 00:02:57.616 LIB libspdk_bdev_aio.a 00:02:57.616 LIB libspdk_bdev_zone_block.a 00:02:57.616 SYMLINK libspdk_bdev_ftl.so 00:02:57.616 LIB libspdk_bdev_malloc.a 00:02:57.616 SYMLINK libspdk_bdev_passthru.so 00:02:57.616 SYMLINK libspdk_bdev_null.so 00:02:57.616 LIB libspdk_bdev_iscsi.a 00:02:57.616 SO libspdk_bdev_zone_block.so.6.0 00:02:57.616 SO libspdk_bdev_malloc.so.6.0 00:02:57.616 SO libspdk_bdev_aio.so.6.0 00:02:57.616 SO libspdk_bdev_iscsi.so.6.0 00:02:57.616 SYMLINK libspdk_bdev_delay.so 00:02:57.616 SYMLINK libspdk_bdev_malloc.so 00:02:57.616 SYMLINK libspdk_bdev_zone_block.so 00:02:57.616 SYMLINK libspdk_bdev_aio.so 00:02:57.616 SYMLINK libspdk_bdev_iscsi.so 00:02:57.616 LIB libspdk_bdev_lvol.a 00:02:57.616 LIB libspdk_bdev_virtio.a 00:02:57.875 SO libspdk_bdev_lvol.so.6.0 00:02:57.875 SO libspdk_bdev_virtio.so.6.0 00:02:57.875 SYMLINK libspdk_bdev_lvol.so 00:02:57.875 SYMLINK libspdk_bdev_virtio.so 00:02:58.134 LIB libspdk_bdev_raid.a 00:02:58.134 SO libspdk_bdev_raid.so.6.0 00:02:58.134 SYMLINK libspdk_bdev_raid.so 00:02:58.702 LIB libspdk_bdev_nvme.a 00:02:58.961 SO libspdk_bdev_nvme.so.7.0 00:02:58.961 SYMLINK libspdk_bdev_nvme.so 00:02:59.531 CC module/event/subsystems/vmd/vmd.o 00:02:59.531 CC module/event/subsystems/iobuf/iobuf.o 00:02:59.531 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:59.531 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:59.531 CC module/event/subsystems/scheduler/scheduler.o 00:02:59.531 CC module/event/subsystems/sock/sock.o 00:02:59.531 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:59.531 CC module/event/subsystems/keyring/keyring.o 00:02:59.531 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:59.791 LIB libspdk_event_keyring.a 00:02:59.791 LIB libspdk_event_vfu_tgt.a 00:02:59.791 LIB libspdk_event_vmd.a 00:02:59.791 LIB libspdk_event_scheduler.a 00:02:59.791 LIB libspdk_event_iobuf.a 00:02:59.791 LIB libspdk_event_sock.a 00:02:59.791 LIB libspdk_event_vhost_blk.a 00:02:59.791 SO libspdk_event_keyring.so.1.0 00:02:59.791 SO libspdk_event_vfu_tgt.so.3.0 00:02:59.791 SO libspdk_event_vmd.so.6.0 00:02:59.791 SO libspdk_event_scheduler.so.4.0 00:02:59.791 SO libspdk_event_iobuf.so.3.0 00:02:59.791 SO libspdk_event_sock.so.5.0 00:02:59.791 SO libspdk_event_vhost_blk.so.3.0 00:02:59.791 SYMLINK libspdk_event_keyring.so 00:02:59.791 SYMLINK libspdk_event_vfu_tgt.so 00:02:59.791 SYMLINK libspdk_event_vmd.so 00:02:59.791 SYMLINK libspdk_event_sock.so 00:02:59.791 SYMLINK libspdk_event_scheduler.so 00:02:59.791 SYMLINK libspdk_event_vhost_blk.so 00:02:59.791 SYMLINK libspdk_event_iobuf.so 00:03:00.050 CC module/event/subsystems/accel/accel.o 00:03:00.309 LIB libspdk_event_accel.a 00:03:00.309 SO libspdk_event_accel.so.6.0 00:03:00.309 SYMLINK libspdk_event_accel.so 
00:03:00.878 CC module/event/subsystems/bdev/bdev.o 00:03:00.878 LIB libspdk_event_bdev.a 00:03:00.878 SO libspdk_event_bdev.so.6.0 00:03:00.878 SYMLINK libspdk_event_bdev.so 00:03:01.447 CC module/event/subsystems/scsi/scsi.o 00:03:01.447 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:01.447 CC module/event/subsystems/nbd/nbd.o 00:03:01.447 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:01.447 CC module/event/subsystems/ublk/ublk.o 00:03:01.447 LIB libspdk_event_nbd.a 00:03:01.447 LIB libspdk_event_ublk.a 00:03:01.447 LIB libspdk_event_scsi.a 00:03:01.447 SO libspdk_event_nbd.so.6.0 00:03:01.447 SO libspdk_event_ublk.so.3.0 00:03:01.447 SO libspdk_event_scsi.so.6.0 00:03:01.447 LIB libspdk_event_nvmf.a 00:03:01.447 SYMLINK libspdk_event_nbd.so 00:03:01.447 SYMLINK libspdk_event_ublk.so 00:03:01.447 SO libspdk_event_nvmf.so.6.0 00:03:01.447 SYMLINK libspdk_event_scsi.so 00:03:01.707 SYMLINK libspdk_event_nvmf.so 00:03:01.965 CC module/event/subsystems/iscsi/iscsi.o 00:03:01.965 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:01.965 LIB libspdk_event_vhost_scsi.a 00:03:01.965 LIB libspdk_event_iscsi.a 00:03:01.965 SO libspdk_event_vhost_scsi.so.3.0 00:03:01.965 SO libspdk_event_iscsi.so.6.0 00:03:01.965 SYMLINK libspdk_event_vhost_scsi.so 00:03:02.224 SYMLINK libspdk_event_iscsi.so 00:03:02.224 SO libspdk.so.6.0 00:03:02.224 SYMLINK libspdk.so 00:03:02.484 CXX app/trace/trace.o 00:03:02.752 CC app/spdk_lspci/spdk_lspci.o 00:03:02.752 CC app/trace_record/trace_record.o 00:03:02.752 CC test/rpc_client/rpc_client_test.o 00:03:02.752 CC app/spdk_nvme_discover/discovery_aer.o 00:03:02.752 TEST_HEADER include/spdk/accel.h 00:03:02.752 TEST_HEADER include/spdk/assert.h 00:03:02.752 TEST_HEADER include/spdk/accel_module.h 00:03:02.752 TEST_HEADER include/spdk/bdev.h 00:03:02.752 TEST_HEADER include/spdk/barrier.h 00:03:02.752 TEST_HEADER include/spdk/base64.h 00:03:02.752 CC app/spdk_top/spdk_top.o 00:03:02.752 TEST_HEADER include/spdk/bdev_zone.h 00:03:02.752 TEST_HEADER include/spdk/bdev_module.h 00:03:02.752 TEST_HEADER include/spdk/bit_array.h 00:03:02.752 CC app/spdk_nvme_identify/identify.o 00:03:02.752 TEST_HEADER include/spdk/bit_pool.h 00:03:02.752 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:02.752 CC app/spdk_nvme_perf/perf.o 00:03:02.752 TEST_HEADER include/spdk/blob_bdev.h 00:03:02.752 TEST_HEADER include/spdk/blobfs.h 00:03:02.752 TEST_HEADER include/spdk/conf.h 00:03:02.752 TEST_HEADER include/spdk/blob.h 00:03:02.752 TEST_HEADER include/spdk/cpuset.h 00:03:02.752 TEST_HEADER include/spdk/config.h 00:03:02.752 TEST_HEADER include/spdk/crc16.h 00:03:02.752 TEST_HEADER include/spdk/crc32.h 00:03:02.752 TEST_HEADER include/spdk/crc64.h 00:03:02.752 TEST_HEADER include/spdk/dif.h 00:03:02.752 TEST_HEADER include/spdk/dma.h 00:03:02.752 TEST_HEADER include/spdk/endian.h 00:03:02.752 TEST_HEADER include/spdk/env_dpdk.h 00:03:02.752 TEST_HEADER include/spdk/env.h 00:03:02.752 TEST_HEADER include/spdk/event.h 00:03:02.752 TEST_HEADER include/spdk/fd_group.h 00:03:02.752 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:02.752 TEST_HEADER include/spdk/fd.h 00:03:02.752 TEST_HEADER include/spdk/file.h 00:03:02.752 TEST_HEADER include/spdk/ftl.h 00:03:02.752 TEST_HEADER include/spdk/gpt_spec.h 00:03:02.752 TEST_HEADER include/spdk/histogram_data.h 00:03:02.752 TEST_HEADER include/spdk/hexlify.h 00:03:02.752 TEST_HEADER include/spdk/init.h 00:03:02.752 TEST_HEADER include/spdk/ioat.h 00:03:02.752 TEST_HEADER include/spdk/idxd.h 00:03:02.752 TEST_HEADER include/spdk/idxd_spec.h 
00:03:02.752 TEST_HEADER include/spdk/ioat_spec.h 00:03:02.752 TEST_HEADER include/spdk/json.h 00:03:02.752 TEST_HEADER include/spdk/jsonrpc.h 00:03:02.752 TEST_HEADER include/spdk/iscsi_spec.h 00:03:02.752 TEST_HEADER include/spdk/keyring_module.h 00:03:02.752 TEST_HEADER include/spdk/likely.h 00:03:02.752 TEST_HEADER include/spdk/keyring.h 00:03:02.752 TEST_HEADER include/spdk/log.h 00:03:02.752 TEST_HEADER include/spdk/lvol.h 00:03:02.752 TEST_HEADER include/spdk/memory.h 00:03:02.752 TEST_HEADER include/spdk/mmio.h 00:03:02.752 TEST_HEADER include/spdk/nbd.h 00:03:02.752 TEST_HEADER include/spdk/notify.h 00:03:02.752 TEST_HEADER include/spdk/nvme_intel.h 00:03:02.752 TEST_HEADER include/spdk/nvme.h 00:03:02.752 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:02.752 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:02.752 CC app/spdk_dd/spdk_dd.o 00:03:02.752 CC app/nvmf_tgt/nvmf_main.o 00:03:02.752 TEST_HEADER include/spdk/nvme_spec.h 00:03:02.752 TEST_HEADER include/spdk/nvme_zns.h 00:03:02.752 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:02.752 TEST_HEADER include/spdk/nvmf_spec.h 00:03:02.752 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:02.752 TEST_HEADER include/spdk/nvmf.h 00:03:02.752 TEST_HEADER include/spdk/nvmf_transport.h 00:03:02.752 TEST_HEADER include/spdk/opal.h 00:03:02.752 TEST_HEADER include/spdk/opal_spec.h 00:03:02.752 TEST_HEADER include/spdk/pci_ids.h 00:03:02.752 TEST_HEADER include/spdk/pipe.h 00:03:02.752 TEST_HEADER include/spdk/reduce.h 00:03:02.752 CC app/spdk_tgt/spdk_tgt.o 00:03:02.752 TEST_HEADER include/spdk/queue.h 00:03:02.752 TEST_HEADER include/spdk/scheduler.h 00:03:02.752 CC app/iscsi_tgt/iscsi_tgt.o 00:03:02.752 TEST_HEADER include/spdk/rpc.h 00:03:02.752 TEST_HEADER include/spdk/scsi.h 00:03:02.752 TEST_HEADER include/spdk/scsi_spec.h 00:03:02.752 TEST_HEADER include/spdk/sock.h 00:03:02.752 TEST_HEADER include/spdk/stdinc.h 00:03:02.752 TEST_HEADER include/spdk/string.h 00:03:02.752 TEST_HEADER include/spdk/thread.h 00:03:02.752 TEST_HEADER include/spdk/trace.h 00:03:02.752 TEST_HEADER include/spdk/trace_parser.h 00:03:02.752 TEST_HEADER include/spdk/tree.h 00:03:02.752 TEST_HEADER include/spdk/ublk.h 00:03:02.752 TEST_HEADER include/spdk/util.h 00:03:02.752 TEST_HEADER include/spdk/version.h 00:03:02.752 TEST_HEADER include/spdk/uuid.h 00:03:02.752 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:02.752 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:02.752 TEST_HEADER include/spdk/vhost.h 00:03:02.752 TEST_HEADER include/spdk/vmd.h 00:03:02.752 TEST_HEADER include/spdk/zipf.h 00:03:02.752 TEST_HEADER include/spdk/xor.h 00:03:02.752 CXX test/cpp_headers/accel.o 00:03:02.752 CXX test/cpp_headers/accel_module.o 00:03:02.752 CXX test/cpp_headers/barrier.o 00:03:02.752 CXX test/cpp_headers/assert.o 00:03:02.752 CXX test/cpp_headers/base64.o 00:03:02.752 CXX test/cpp_headers/bdev.o 00:03:02.752 CXX test/cpp_headers/bdev_zone.o 00:03:02.752 CXX test/cpp_headers/bdev_module.o 00:03:02.752 CXX test/cpp_headers/bit_array.o 00:03:02.752 CXX test/cpp_headers/bit_pool.o 00:03:02.752 CXX test/cpp_headers/blob_bdev.o 00:03:02.752 CXX test/cpp_headers/blobfs.o 00:03:02.752 CXX test/cpp_headers/blobfs_bdev.o 00:03:02.752 CXX test/cpp_headers/blob.o 00:03:02.752 CXX test/cpp_headers/config.o 00:03:02.752 CXX test/cpp_headers/crc16.o 00:03:02.752 CXX test/cpp_headers/conf.o 00:03:02.752 CXX test/cpp_headers/cpuset.o 00:03:02.752 CXX test/cpp_headers/crc32.o 00:03:02.752 CXX test/cpp_headers/crc64.o 00:03:02.752 CXX test/cpp_headers/dif.o 00:03:02.752 CXX 
test/cpp_headers/dma.o 00:03:02.752 CXX test/cpp_headers/env_dpdk.o 00:03:02.752 CXX test/cpp_headers/endian.o 00:03:02.752 CXX test/cpp_headers/event.o 00:03:02.752 CXX test/cpp_headers/env.o 00:03:02.752 CXX test/cpp_headers/fd.o 00:03:02.752 CXX test/cpp_headers/fd_group.o 00:03:02.752 CXX test/cpp_headers/file.o 00:03:02.752 CXX test/cpp_headers/gpt_spec.o 00:03:02.752 CXX test/cpp_headers/ftl.o 00:03:02.752 CXX test/cpp_headers/histogram_data.o 00:03:02.752 CXX test/cpp_headers/hexlify.o 00:03:02.752 CXX test/cpp_headers/idxd_spec.o 00:03:02.752 CXX test/cpp_headers/idxd.o 00:03:02.752 CXX test/cpp_headers/ioat.o 00:03:02.752 CXX test/cpp_headers/init.o 00:03:02.752 CXX test/cpp_headers/ioat_spec.o 00:03:02.752 CXX test/cpp_headers/iscsi_spec.o 00:03:02.752 CXX test/cpp_headers/json.o 00:03:02.752 CXX test/cpp_headers/jsonrpc.o 00:03:02.752 CXX test/cpp_headers/keyring_module.o 00:03:02.752 CXX test/cpp_headers/likely.o 00:03:02.752 CXX test/cpp_headers/keyring.o 00:03:02.752 CXX test/cpp_headers/memory.o 00:03:02.752 CXX test/cpp_headers/log.o 00:03:02.752 CXX test/cpp_headers/lvol.o 00:03:02.752 CXX test/cpp_headers/mmio.o 00:03:02.752 CXX test/cpp_headers/nbd.o 00:03:02.752 CXX test/cpp_headers/notify.o 00:03:02.752 CXX test/cpp_headers/nvme_intel.o 00:03:02.752 CXX test/cpp_headers/nvme.o 00:03:02.752 CXX test/cpp_headers/nvme_ocssd.o 00:03:02.752 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:02.752 CXX test/cpp_headers/nvme_spec.o 00:03:02.752 CXX test/cpp_headers/nvmf_cmd.o 00:03:02.752 CXX test/cpp_headers/nvme_zns.o 00:03:02.752 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:02.752 CXX test/cpp_headers/nvmf_spec.o 00:03:02.752 CXX test/cpp_headers/nvmf.o 00:03:02.752 CXX test/cpp_headers/opal.o 00:03:02.752 CXX test/cpp_headers/nvmf_transport.o 00:03:02.752 CXX test/cpp_headers/opal_spec.o 00:03:02.752 CXX test/cpp_headers/pipe.o 00:03:02.752 CXX test/cpp_headers/pci_ids.o 00:03:02.752 CXX test/cpp_headers/queue.o 00:03:02.752 CXX test/cpp_headers/reduce.o 00:03:02.752 CC examples/ioat/verify/verify.o 00:03:02.752 CC examples/ioat/perf/perf.o 00:03:02.752 CC test/thread/poller_perf/poller_perf.o 00:03:02.752 CC test/app/stub/stub.o 00:03:02.752 CC test/env/vtophys/vtophys.o 00:03:02.752 CC test/app/histogram_perf/histogram_perf.o 00:03:02.752 CC test/app/jsoncat/jsoncat.o 00:03:02.752 CC app/fio/nvme/fio_plugin.o 00:03:02.752 CXX test/cpp_headers/rpc.o 00:03:02.752 CC test/env/pci/pci_ut.o 00:03:02.752 CC test/env/memory/memory_ut.o 00:03:02.752 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:02.752 CC examples/util/zipf/zipf.o 00:03:02.752 CXX test/cpp_headers/scheduler.o 00:03:02.752 CC test/app/bdev_svc/bdev_svc.o 00:03:03.017 LINK spdk_lspci 00:03:03.017 CC test/dma/test_dma/test_dma.o 00:03:03.017 CC app/fio/bdev/fio_plugin.o 00:03:03.017 LINK interrupt_tgt 00:03:03.331 LINK rpc_client_test 00:03:03.331 CC test/env/mem_callbacks/mem_callbacks.o 00:03:03.331 LINK spdk_nvme_discover 00:03:03.331 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:03.331 LINK jsoncat 00:03:03.331 LINK nvmf_tgt 00:03:03.331 LINK poller_perf 00:03:03.331 CXX test/cpp_headers/scsi.o 00:03:03.331 CXX test/cpp_headers/scsi_spec.o 00:03:03.331 CXX test/cpp_headers/sock.o 00:03:03.331 CXX test/cpp_headers/stdinc.o 00:03:03.331 CXX test/cpp_headers/string.o 00:03:03.331 CXX test/cpp_headers/trace.o 00:03:03.331 CXX test/cpp_headers/thread.o 00:03:03.331 CXX test/cpp_headers/tree.o 00:03:03.331 CXX test/cpp_headers/trace_parser.o 00:03:03.331 LINK spdk_tgt 00:03:03.331 CXX test/cpp_headers/ublk.o 
00:03:03.331 LINK spdk_trace_record 00:03:03.331 CXX test/cpp_headers/util.o 00:03:03.331 CXX test/cpp_headers/uuid.o 00:03:03.331 CXX test/cpp_headers/version.o 00:03:03.331 CXX test/cpp_headers/vfio_user_pci.o 00:03:03.331 CXX test/cpp_headers/vfio_user_spec.o 00:03:03.331 CXX test/cpp_headers/vhost.o 00:03:03.331 CXX test/cpp_headers/vmd.o 00:03:03.331 CXX test/cpp_headers/xor.o 00:03:03.331 CXX test/cpp_headers/zipf.o 00:03:03.331 LINK verify 00:03:03.331 LINK histogram_perf 00:03:03.331 LINK iscsi_tgt 00:03:03.331 LINK vtophys 00:03:03.331 LINK zipf 00:03:03.601 LINK env_dpdk_post_init 00:03:03.601 LINK stub 00:03:03.601 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:03.601 LINK spdk_trace 00:03:03.601 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:03.601 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:03.601 LINK ioat_perf 00:03:03.601 LINK bdev_svc 00:03:03.601 LINK spdk_dd 00:03:03.601 LINK pci_ut 00:03:03.601 LINK test_dma 00:03:03.860 LINK nvme_fuzz 00:03:03.860 LINK spdk_nvme 00:03:03.860 LINK spdk_bdev 00:03:03.860 LINK spdk_nvme_perf 00:03:03.860 CC test/event/event_perf/event_perf.o 00:03:03.860 CC app/vhost/vhost.o 00:03:03.860 CC test/event/reactor_perf/reactor_perf.o 00:03:03.860 CC test/event/reactor/reactor.o 00:03:03.860 CC test/event/app_repeat/app_repeat.o 00:03:03.860 LINK vhost_fuzz 00:03:03.860 CC test/event/scheduler/scheduler.o 00:03:03.860 LINK spdk_nvme_identify 00:03:03.860 LINK mem_callbacks 00:03:03.860 LINK spdk_top 00:03:03.860 CC examples/vmd/led/led.o 00:03:03.860 CC examples/vmd/lsvmd/lsvmd.o 00:03:03.860 CC examples/sock/hello_world/hello_sock.o 00:03:03.860 CC examples/idxd/perf/perf.o 00:03:03.860 LINK reactor_perf 00:03:03.860 LINK event_perf 00:03:04.119 CC examples/thread/thread/thread_ex.o 00:03:04.119 LINK reactor 00:03:04.119 LINK app_repeat 00:03:04.119 LINK vhost 00:03:04.119 LINK lsvmd 00:03:04.119 LINK scheduler 00:03:04.119 LINK led 00:03:04.119 CC test/nvme/fused_ordering/fused_ordering.o 00:03:04.119 CC test/nvme/sgl/sgl.o 00:03:04.119 CC test/nvme/boot_partition/boot_partition.o 00:03:04.119 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:04.119 LINK memory_ut 00:03:04.119 CC test/nvme/overhead/overhead.o 00:03:04.119 CC test/nvme/cuse/cuse.o 00:03:04.119 CC test/nvme/simple_copy/simple_copy.o 00:03:04.119 CC test/nvme/connect_stress/connect_stress.o 00:03:04.119 CC test/nvme/reserve/reserve.o 00:03:04.119 CC test/nvme/fdp/fdp.o 00:03:04.119 CC test/nvme/err_injection/err_injection.o 00:03:04.119 CC test/nvme/aer/aer.o 00:03:04.119 CC test/nvme/e2edp/nvme_dp.o 00:03:04.119 CC test/nvme/startup/startup.o 00:03:04.119 CC test/nvme/reset/reset.o 00:03:04.119 CC test/nvme/compliance/nvme_compliance.o 00:03:04.119 CC test/accel/dif/dif.o 00:03:04.119 CC test/blobfs/mkfs/mkfs.o 00:03:04.119 LINK hello_sock 00:03:04.378 LINK thread 00:03:04.378 LINK idxd_perf 00:03:04.378 CC test/lvol/esnap/esnap.o 00:03:04.378 LINK boot_partition 00:03:04.378 LINK fused_ordering 00:03:04.378 LINK err_injection 00:03:04.378 LINK connect_stress 00:03:04.378 LINK doorbell_aers 00:03:04.378 LINK startup 00:03:04.378 LINK reserve 00:03:04.378 LINK simple_copy 00:03:04.378 LINK sgl 00:03:04.378 LINK mkfs 00:03:04.378 LINK aer 00:03:04.378 LINK reset 00:03:04.378 LINK overhead 00:03:04.378 LINK nvme_dp 00:03:04.378 LINK nvme_compliance 00:03:04.378 LINK fdp 00:03:04.638 LINK dif 00:03:04.638 CC examples/nvme/hello_world/hello_world.o 00:03:04.638 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:04.638 CC examples/nvme/reconnect/reconnect.o 00:03:04.638 CC 
examples/nvme/cmb_copy/cmb_copy.o 00:03:04.638 CC examples/nvme/hotplug/hotplug.o 00:03:04.638 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:04.638 CC examples/nvme/abort/abort.o 00:03:04.638 CC examples/nvme/arbitration/arbitration.o 00:03:04.897 CC examples/accel/perf/accel_perf.o 00:03:04.897 LINK iscsi_fuzz 00:03:04.897 CC examples/blob/cli/blobcli.o 00:03:04.897 CC examples/blob/hello_world/hello_blob.o 00:03:04.897 LINK cmb_copy 00:03:04.897 LINK pmr_persistence 00:03:04.897 LINK hello_world 00:03:04.897 LINK hotplug 00:03:04.897 LINK reconnect 00:03:04.897 LINK arbitration 00:03:04.897 LINK abort 00:03:04.897 LINK nvme_manage 00:03:05.156 LINK hello_blob 00:03:05.156 CC test/bdev/bdevio/bdevio.o 00:03:05.156 LINK accel_perf 00:03:05.156 LINK blobcli 00:03:05.156 LINK cuse 00:03:05.415 LINK bdevio 00:03:05.674 CC examples/bdev/hello_world/hello_bdev.o 00:03:05.674 CC examples/bdev/bdevperf/bdevperf.o 00:03:05.934 LINK hello_bdev 00:03:06.193 LINK bdevperf 00:03:06.763 CC examples/nvmf/nvmf/nvmf.o 00:03:07.022 LINK nvmf 00:03:07.591 LINK esnap 00:03:07.850 00:03:07.850 real 0m34.183s 00:03:07.850 user 5m10.506s 00:03:07.850 sys 2m27.387s 00:03:07.850 10:11:52 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:07.850 10:11:52 make -- common/autotest_common.sh@10 -- $ set +x 00:03:07.850 ************************************ 00:03:07.850 END TEST make 00:03:07.850 ************************************ 00:03:08.109 10:11:52 -- common/autotest_common.sh@1142 -- $ return 0 00:03:08.109 10:11:52 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:08.109 10:11:52 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:08.109 10:11:52 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:08.109 10:11:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:08.109 10:11:52 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:08.109 10:11:52 -- pm/common@44 -- $ pid=2083629 00:03:08.109 10:11:52 -- pm/common@50 -- $ kill -TERM 2083629 00:03:08.109 10:11:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:08.109 10:11:52 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:08.109 10:11:52 -- pm/common@44 -- $ pid=2083631 00:03:08.109 10:11:52 -- pm/common@50 -- $ kill -TERM 2083631 00:03:08.109 10:11:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:08.109 10:11:52 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:08.109 10:11:52 -- pm/common@44 -- $ pid=2083633 00:03:08.109 10:11:52 -- pm/common@50 -- $ kill -TERM 2083633 00:03:08.109 10:11:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:08.109 10:11:52 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:08.109 10:11:52 -- pm/common@44 -- $ pid=2083656 00:03:08.109 10:11:52 -- pm/common@50 -- $ sudo -E kill -TERM 2083656 00:03:08.109 10:11:52 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:08.109 10:11:52 -- nvmf/common.sh@7 -- # uname -s 00:03:08.109 10:11:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:08.109 10:11:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:08.109 10:11:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:08.109 10:11:52 -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:03:08.109 10:11:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:08.109 10:11:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:08.109 10:11:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:08.109 10:11:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:08.109 10:11:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:08.109 10:11:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:08.109 10:11:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:03:08.109 10:11:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:03:08.109 10:11:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:08.109 10:11:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:08.109 10:11:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:08.109 10:11:52 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:08.109 10:11:52 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:08.109 10:11:52 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:08.110 10:11:52 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:08.110 10:11:52 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:08.110 10:11:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:08.110 10:11:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:08.110 10:11:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:08.110 10:11:52 -- paths/export.sh@5 -- # export PATH 00:03:08.110 10:11:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:08.110 10:11:52 -- nvmf/common.sh@47 -- # : 0 00:03:08.110 10:11:52 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:08.110 10:11:52 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:08.110 10:11:52 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:08.110 10:11:52 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:08.110 10:11:52 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:08.110 10:11:52 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:08.110 10:11:52 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:08.110 10:11:52 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:08.110 10:11:52 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:08.110 10:11:52 -- spdk/autotest.sh@32 -- # uname -s 00:03:08.110 10:11:53 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 
00:03:08.110 10:11:53 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:08.110 10:11:53 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:08.110 10:11:53 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:08.110 10:11:53 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:08.110 10:11:53 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:08.110 10:11:53 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:08.110 10:11:53 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:08.110 10:11:53 -- spdk/autotest.sh@48 -- # udevadm_pid=2156943 00:03:08.110 10:11:53 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:08.110 10:11:53 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:08.110 10:11:53 -- pm/common@17 -- # local monitor 00:03:08.110 10:11:53 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:08.110 10:11:53 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:08.110 10:11:53 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:08.110 10:11:53 -- pm/common@21 -- # date +%s 00:03:08.110 10:11:53 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:08.110 10:11:53 -- pm/common@21 -- # date +%s 00:03:08.110 10:11:53 -- pm/common@25 -- # sleep 1 00:03:08.110 10:11:53 -- pm/common@21 -- # date +%s 00:03:08.110 10:11:53 -- pm/common@21 -- # date +%s 00:03:08.110 10:11:53 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720944713 00:03:08.110 10:11:53 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720944713 00:03:08.110 10:11:53 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720944713 00:03:08.110 10:11:53 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720944713 00:03:08.110 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720944713_collect-vmstat.pm.log 00:03:08.110 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720944713_collect-cpu-load.pm.log 00:03:08.110 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720944713_collect-cpu-temp.pm.log 00:03:08.110 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720944713_collect-bmc-pm.bmc.pm.log 00:03:09.046 10:11:54 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:09.046 10:11:54 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:09.046 10:11:54 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:09.046 10:11:54 -- common/autotest_common.sh@10 -- # set +x 00:03:09.305 10:11:54 -- spdk/autotest.sh@59 -- # create_test_list 00:03:09.305 10:11:54 -- 
common/autotest_common.sh@746 -- # xtrace_disable 00:03:09.305 10:11:54 -- common/autotest_common.sh@10 -- # set +x 00:03:09.305 10:11:54 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:09.305 10:11:54 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:09.305 10:11:54 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:09.305 10:11:54 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:09.305 10:11:54 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:09.305 10:11:54 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:09.305 10:11:54 -- common/autotest_common.sh@1455 -- # uname 00:03:09.305 10:11:54 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:09.305 10:11:54 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:09.305 10:11:54 -- common/autotest_common.sh@1475 -- # uname 00:03:09.305 10:11:54 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:09.305 10:11:54 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:09.305 10:11:54 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:09.305 10:11:54 -- spdk/autotest.sh@72 -- # hash lcov 00:03:09.305 10:11:54 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:09.305 10:11:54 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:09.305 --rc lcov_branch_coverage=1 00:03:09.305 --rc lcov_function_coverage=1 00:03:09.305 --rc genhtml_branch_coverage=1 00:03:09.305 --rc genhtml_function_coverage=1 00:03:09.305 --rc genhtml_legend=1 00:03:09.305 --rc geninfo_all_blocks=1 00:03:09.305 ' 00:03:09.305 10:11:54 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:09.305 --rc lcov_branch_coverage=1 00:03:09.305 --rc lcov_function_coverage=1 00:03:09.305 --rc genhtml_branch_coverage=1 00:03:09.305 --rc genhtml_function_coverage=1 00:03:09.305 --rc genhtml_legend=1 00:03:09.305 --rc geninfo_all_blocks=1 00:03:09.305 ' 00:03:09.305 10:11:54 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:09.305 --rc lcov_branch_coverage=1 00:03:09.305 --rc lcov_function_coverage=1 00:03:09.305 --rc genhtml_branch_coverage=1 00:03:09.305 --rc genhtml_function_coverage=1 00:03:09.305 --rc genhtml_legend=1 00:03:09.305 --rc geninfo_all_blocks=1 00:03:09.305 --no-external' 00:03:09.305 10:11:54 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:09.305 --rc lcov_branch_coverage=1 00:03:09.305 --rc lcov_function_coverage=1 00:03:09.305 --rc genhtml_branch_coverage=1 00:03:09.305 --rc genhtml_function_coverage=1 00:03:09.305 --rc genhtml_legend=1 00:03:09.305 --rc geninfo_all_blocks=1 00:03:09.305 --no-external' 00:03:09.305 10:11:54 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:09.305 lcov: LCOV version 1.14 00:03:09.305 10:11:54 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:13.495 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:13.495 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:13.495 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:13.495 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:03:13.495 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:13.495 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:13.495 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:13.495 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:13.495 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:13.495 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:03:13.495 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:13.495 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:03:13.495 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:13.495 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:03:13.495 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:13.495 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:03:13.495 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:13.495 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:03:13.495 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:03:13.495 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:03:13.495 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:13.495 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:03:13.495 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:13.495 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:03:13.495 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:13.496 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:03:13.496 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:13.496 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:03:13.496 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:13.496 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:13.496 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:13.496 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:03:13.496 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:13.496 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:03:13.496 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:13.496 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:03:13.496 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:13.496 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:03:13.496 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:13.496 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:03:13.496 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:13.496 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:03:13.496 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:13.496 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:03:13.496 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:03:13.496 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:03:13.496 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:13.496 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:03:13.496 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:13.496 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:03:13.496 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:13.496 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:03:13.496 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:13.496 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:03:13.496 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:13.496 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 
00:03:13.496 geninfo: WARNING: GCOV did not produce any data ("no functions found") for the following .gcno files under /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers (timestamps 00:03:13.496-00:03:13.497):
00:03:13.496   file, idxd, env, histogram_data, json, ftl, ioat, init, ioat_spec, idxd_spec, hexlify, keyring, keyring_module, memory, iscsi_spec, likely, jsonrpc, nbd, log, lvol, notify, mmio, nvme_ocssd, nvme_intel, nvmf_cmd, nvme_spec, nvmf_fc_spec, nvme_zns, nvme, nvmf_spec, nvmf_transport, opal, opal_spec, pipe, queue, nvme_ocssd_spec, pci_ids, reduce, nvmf, rpc, scheduler, scsi, scsi_spec, sock, stdinc, string, thread, trace, tree, trace_parser, ublk, util, version, vfio_user_pci (all .gcno)
00:03:13.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found
00:03:13.497 geninfo: WARNING: GCOV did not produce any data for
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:03:13.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:13.497 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:13.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:13.497 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:13.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:13.497 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:13.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:13.497 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:13.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:13.497 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:28.378 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:28.378 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:33.692 10:12:18 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:33.692 10:12:18 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:33.692 10:12:18 -- common/autotest_common.sh@10 -- # set +x 00:03:33.692 10:12:18 -- spdk/autotest.sh@91 -- # rm -f 00:03:33.692 10:12:18 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:36.983 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:03:36.983 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:36.983 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:36.983 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:36.983 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:36.983 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:36.983 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:36.983 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:36.983 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:36.983 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:36.983 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:36.983 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:36.983 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:36.983 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:36.983 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:36.983 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:36.983 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:36.983 10:12:21 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:36.983 10:12:21 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:36.983 10:12:21 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:36.983 10:12:21 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:36.983 10:12:21 -- 
common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:36.983 10:12:21 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:36.983 10:12:21 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:36.983 10:12:21 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:36.983 10:12:21 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:36.983 10:12:21 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:36.983 10:12:21 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:36.983 10:12:21 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:36.983 10:12:21 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:36.983 10:12:21 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:36.983 10:12:21 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:36.983 No valid GPT data, bailing 00:03:36.983 10:12:21 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:36.983 10:12:21 -- scripts/common.sh@391 -- # pt= 00:03:36.983 10:12:21 -- scripts/common.sh@392 -- # return 1 00:03:36.983 10:12:21 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:36.983 1+0 records in 00:03:36.983 1+0 records out 00:03:36.983 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00436101 s, 240 MB/s 00:03:36.983 10:12:21 -- spdk/autotest.sh@118 -- # sync 00:03:36.983 10:12:21 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:36.983 10:12:21 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:36.983 10:12:21 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:42.258 10:12:26 -- spdk/autotest.sh@124 -- # uname -s 00:03:42.258 10:12:26 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:42.258 10:12:26 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:42.258 10:12:26 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:42.258 10:12:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:42.258 10:12:26 -- common/autotest_common.sh@10 -- # set +x 00:03:42.258 ************************************ 00:03:42.258 START TEST setup.sh 00:03:42.258 ************************************ 00:03:42.258 10:12:27 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:42.258 * Looking for test storage... 00:03:42.258 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:42.258 10:12:27 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:42.258 10:12:27 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:42.258 10:12:27 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:42.258 10:12:27 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:42.258 10:12:27 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:42.258 10:12:27 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:42.258 ************************************ 00:03:42.258 START TEST acl 00:03:42.258 ************************************ 00:03:42.258 10:12:27 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:42.258 * Looking for test storage... 
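For readers tracing the pre-cleanup above: autotest.sh enumerates the NVMe namespaces, skips any zoned device (one whose /sys/block/<dev>/queue/zoned reports something other than "none"), and zeroes the first MiB of each remaining namespace that is not already in use, which is why the log shows the spdk-gpt.py/blkid probe ("No valid GPT data, bailing") followed by a 1 MiB dd. The shell below is a minimal sketch of that flow; the helper names mirror the traced functions, but the bodies are simplified (for instance, the real block_in_use also consults scripts/spdk-gpt.py, while this sketch checks blkid only), so treat it as an illustration rather than the exact autotest_common.sh code.

#!/usr/bin/env bash
# Sketch of the pre-cleanup traced above (simplified, and destructive: it zeroes devices).

is_block_zoned() {
    # Zoned namespaces report "host-aware" or "host-managed" here; "none" means a regular device.
    local device=$1
    [[ -e /sys/block/$device/queue/zoned ]] || return 1
    [[ $(< "/sys/block/$device/queue/zoned") != none ]]
}

block_in_use() {
    # Treat the device as in use if blkid finds a partition-table type on it.
    local block=$1
    [[ -n $(blkid -s PTTYPE -o value "$block") ]]
}

for path in /sys/block/nvme*n*; do
    [[ -e $path ]] || continue
    dev=$(basename "$path")
    if is_block_zoned "$dev"; then
        echo "skipping zoned device $dev"
        continue
    fi
    if ! block_in_use "/dev/$dev"; then
        # Same effect as the traced 'dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1':
        # wipe the first MiB so stale metadata does not leak into later tests.
        dd if=/dev/zero of="/dev/$dev" bs=1M count=1
    fi
done

In the run above no zoned devices were found (the "(( 0 > 0 ))" check is false) and nvme0n1 carried no GPT, so its first MiB was wiped at roughly 240 MB/s before the setup tests started.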
00:03:42.516 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:42.516 10:12:27 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:42.517 10:12:27 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:42.517 10:12:27 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:42.517 10:12:27 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:42.517 10:12:27 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:42.517 10:12:27 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:42.517 10:12:27 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:42.517 10:12:27 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:42.517 10:12:27 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:42.517 10:12:27 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:42.517 10:12:27 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:42.517 10:12:27 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:42.517 10:12:27 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:42.517 10:12:27 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:42.517 10:12:27 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:42.517 10:12:27 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:45.803 10:12:30 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:45.803 10:12:30 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:45.803 10:12:30 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:45.803 10:12:30 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:45.803 10:12:30 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:45.803 10:12:30 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:48.334 Hugepages 00:03:48.334 node hugesize free / total 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.334 00:03:48.334 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:5e:00.0 == *:*:*.* ]] 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:48.334 10:12:33 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:48.334 10:12:33 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:48.334 10:12:33 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:48.334 10:12:33 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:48.593 ************************************ 00:03:48.593 START TEST denied 00:03:48.593 ************************************ 00:03:48.593 10:12:33 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:03:48.593 10:12:33 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:5e:00.0' 00:03:48.593 10:12:33 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:48.593 10:12:33 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5e:00.0' 00:03:48.593 10:12:33 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:48.593 10:12:33 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:51.883 0000:5e:00.0 (8086 0a54): Skipping denied controller at 0000:5e:00.0 00:03:51.883 10:12:36 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:5e:00.0 00:03:51.883 10:12:36 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:51.883 10:12:36 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:51.883 10:12:36 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5e:00.0 ]] 00:03:51.883 10:12:36 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5e:00.0/driver 00:03:51.883 10:12:36 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:51.883 10:12:36 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:51.883 10:12:36 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:51.883 10:12:36 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:51.883 10:12:36 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:56.078 00:03:56.078 real 0m7.153s 00:03:56.078 user 0m2.318s 00:03:56.078 sys 0m4.116s 00:03:56.078 10:12:40 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:56.078 10:12:40 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:56.078 ************************************ 00:03:56.078 END TEST denied 00:03:56.078 ************************************ 00:03:56.078 10:12:40 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:56.078 10:12:40 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:56.078 10:12:40 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:56.078 10:12:40 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:56.078 10:12:40 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:56.078 ************************************ 00:03:56.078 START TEST allowed 00:03:56.078 ************************************ 00:03:56.078 10:12:40 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:03:56.078 10:12:40 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5e:00.0 00:03:56.078 10:12:40 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:56.078 10:12:40 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:5e:00.0 .*: nvme -> .*' 00:03:56.078 10:12:40 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:56.078 10:12:40 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:00.315 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:00.315 10:12:44 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:04:00.315 10:12:44 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:00.315 10:12:44 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:00.315 10:12:44 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:00.315 10:12:44 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:02.853 00:04:02.853 real 0m7.037s 00:04:02.853 user 0m2.183s 00:04:02.853 sys 0m4.021s 00:04:02.853 10:12:47 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:02.853 10:12:47 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:02.853 ************************************ 00:04:02.853 END TEST allowed 00:04:02.853 ************************************ 00:04:02.853 10:12:47 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:02.853 00:04:02.853 real 0m20.488s 00:04:02.853 user 0m6.877s 00:04:02.853 sys 0m12.282s 00:04:02.853 10:12:47 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:02.853 10:12:47 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:02.853 ************************************ 00:04:02.853 END TEST acl 00:04:02.853 ************************************ 00:04:02.853 10:12:47 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:02.853 10:12:47 setup.sh -- 
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:02.853 10:12:47 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:02.853 10:12:47 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:02.853 10:12:47 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:02.853 ************************************ 00:04:02.853 START TEST hugepages 00:04:02.853 ************************************ 00:04:02.853 10:12:47 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:02.853 * Looking for test storage... 00:04:02.853 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 171837116 kB' 'MemAvailable: 174712104 kB' 'Buffers: 4928 kB' 'Cached: 11746772 kB' 'SwapCached: 0 kB' 'Active: 8745040 kB' 'Inactive: 3508388 kB' 'Active(anon): 8353032 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3508388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 504936 kB' 'Mapped: 256480 kB' 'Shmem: 7851304 kB' 'KReclaimable: 238084 kB' 'Slab: 781452 kB' 'SReclaimable: 238084 kB' 'SUnreclaim: 543368 kB' 'KernelStack: 20400 kB' 'PageTables: 8948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 101982028 kB' 'Committed_AS: 9850536 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314888 kB' 'VmallocChunk: 0 kB' 'Percpu: 71040 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 2552788 kB' 'DirectMap2M: 10758144 kB' 'DirectMap1G: 188743680 kB' 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.853 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.854 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.854 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.854 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.854 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.854 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.854 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.854 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.854 10:12:47 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.854 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.854 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.854 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.854 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.854 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.854 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.854 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.854 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.854 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.854 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.854 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.854 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.113 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.113 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.113 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.113 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.113 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.113 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.113 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.113 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.113 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.113 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.113 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.113 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.113 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.113 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.113 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.113 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.113 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.113 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.113 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.113 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.113 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.113 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.113 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.113 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.113 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.113 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.113 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.113 10:12:47 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:04:03.113 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.113 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.113 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.113 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.113 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.113 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.113 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.113 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.113 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.113 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.113 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.113 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.113 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.113 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.113 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.113 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.113 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.113 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.113 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.113 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.113 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.113 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.113 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.114 10:12:47 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.114 10:12:47 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:03.114 
10:12:47 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:03.114 10:12:47 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:03.114 10:12:47 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:03.114 10:12:47 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:03.114 10:12:47 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:03.114 ************************************ 00:04:03.114 START TEST default_setup 00:04:03.114 ************************************ 00:04:03.114 10:12:47 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:04:03.114 10:12:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:03.114 10:12:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:03.114 10:12:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:03.114 10:12:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:03.114 10:12:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:03.114 10:12:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:03.114 10:12:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:03.114 10:12:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:03.114 10:12:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:03.114 10:12:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:03.114 10:12:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:03.114 10:12:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:03.114 10:12:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:03.114 10:12:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:03.114 10:12:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:03.114 10:12:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:03.114 10:12:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:03.114 10:12:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:03.114 10:12:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:03.114 10:12:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:03.114 10:12:47 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:03.114 10:12:47 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:06.404 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:06.404 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:06.404 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:06.404 
0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:06.404 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:06.404 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:06.404 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:06.404 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:06.404 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:06.404 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:06.404 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:06.404 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:06.404 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:06.404 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:06.404 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:06.404 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:06.663 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 173967760 kB' 'MemAvailable: 176842652 kB' 'Buffers: 4928 kB' 'Cached: 11755080 kB' 'SwapCached: 0 kB' 'Active: 8772460 kB' 'Inactive: 3508388 kB' 'Active(anon): 8380452 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3508388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523760 kB' 'Mapped: 256572 kB' 'Shmem: 7859612 kB' 'KReclaimable: 237892 kB' 'Slab: 780412 kB' 'SReclaimable: 237892 kB' 'SUnreclaim: 542520 
kB' 'KernelStack: 20608 kB' 'PageTables: 9320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9879080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314904 kB' 'VmallocChunk: 0 kB' 'Percpu: 71040 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2552788 kB' 'DirectMap2M: 10758144 kB' 'DirectMap1G: 188743680 kB' 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.928 10:12:51 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.928 10:12:51 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.928 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- 
setup/common.sh@17 -- # local get=HugePages_Surp 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 173969328 kB' 'MemAvailable: 176844220 kB' 'Buffers: 4928 kB' 'Cached: 11755080 kB' 'SwapCached: 0 kB' 'Active: 8771696 kB' 'Inactive: 3508388 kB' 'Active(anon): 8379688 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3508388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523492 kB' 'Mapped: 256488 kB' 'Shmem: 7859612 kB' 'KReclaimable: 237892 kB' 'Slab: 780412 kB' 'SReclaimable: 237892 kB' 'SUnreclaim: 542520 kB' 'KernelStack: 20592 kB' 'PageTables: 9564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9879096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314888 kB' 'VmallocChunk: 0 kB' 'Percpu: 71040 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2552788 kB' 'DirectMap2M: 10758144 kB' 'DirectMap1G: 188743680 kB' 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # read -r var val _ 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.929 10:12:51 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.929 10:12:51 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.929 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- 
setup/common.sh@28 -- # mapfile -t mem 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 173968960 kB' 'MemAvailable: 176843852 kB' 'Buffers: 4928 kB' 'Cached: 11755100 kB' 'SwapCached: 0 kB' 'Active: 8772076 kB' 'Inactive: 3508388 kB' 'Active(anon): 8380068 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3508388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523812 kB' 'Mapped: 256488 kB' 'Shmem: 7859632 kB' 'KReclaimable: 237892 kB' 'Slab: 780412 kB' 'SReclaimable: 237892 kB' 'SUnreclaim: 542520 kB' 'KernelStack: 20688 kB' 'PageTables: 9276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9879120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314952 kB' 'VmallocChunk: 0 kB' 'Percpu: 71040 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2552788 kB' 'DirectMap2M: 10758144 kB' 'DirectMap1G: 188743680 kB' 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.930 
10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.930 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.931 10:12:51 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.931 10:12:51 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:06.931 nr_hugepages=1024 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:06.931 resv_hugepages=0 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:06.931 surplus_hugepages=0 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:06.931 anon_hugepages=0 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.931 
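The lookup above resolves HugePages_Rsvd to 0, and the script then reports the pool it will verify: nr_hugepages=1024 with no reserved, surplus, or anonymous huge pages. The next get_meminfo call re-reads /proc/meminfo for HugePages_Total to confirm the kernel really allocated 1024 pages. A minimal sketch of that kind of lookup, using the same IFS=': ' / read pattern visible in the trace (get_counter is a hypothetical name, not the project's helper):

    # read one hugepage counter from /proc/meminfo (key names are kernel-defined)
    get_counter() {
        local key=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$key" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }
    get_counter HugePages_Total   # 1024 in this run
    get_counter HugePages_Rsvd    # 0 in this run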
10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 173969756 kB' 'MemAvailable: 176844648 kB' 'Buffers: 4928 kB' 'Cached: 11755128 kB' 'SwapCached: 0 kB' 'Active: 8771740 kB' 'Inactive: 3508388 kB' 'Active(anon): 8379732 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3508388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523516 kB' 'Mapped: 256488 kB' 'Shmem: 7859660 kB' 'KReclaimable: 237892 kB' 'Slab: 780412 kB' 'SReclaimable: 237892 kB' 'SUnreclaim: 542520 kB' 'KernelStack: 20544 kB' 'PageTables: 9188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9879140 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314904 kB' 'VmallocChunk: 0 kB' 'Percpu: 71040 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2552788 kB' 'DirectMap2M: 10758144 kB' 'DirectMap1G: 188743680 kB' 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.931 10:12:51 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.931 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.932 10:12:51 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.932 10:12:51 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 84699184 kB' 'MemUsed: 12963500 kB' 'SwapCached: 0 kB' 'Active: 6388348 kB' 'Inactive: 3336368 kB' 'Active(anon): 6230808 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3336368 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9537956 kB' 'Mapped: 71844 kB' 'AnonPages: 189968 kB' 'Shmem: 6044048 kB' 'KernelStack: 11368 kB' 'PageTables: 3644 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 139680 kB' 'Slab: 388964 kB' 
'SReclaimable: 139680 kB' 'SUnreclaim: 249284 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
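Because get_meminfo was called here with node=0 for HugePages_Surp, the trace switches mem_f from /proc/meminfo to /sys/devices/system/node/node0/meminfo and strips the leading "Node 0 " prefix before parsing; each NUMA node exposes its own copy of these counters in that file. A sketch of the same per-node lookup (node_counter is a hypothetical name; the sysfs path is the standard kernel one):

    # per-node hugepage counter via the kernel's per-node meminfo file
    node_counter() {
        local node=$1 key=$2 var val _
        # lines look like: "Node 0 HugePages_Surp:     0"
        while read -r _ _ var val _; do
            [[ $var == "$key:" ]] && { echo "$val"; return 0; }
        done < "/sys/devices/system/node/node$node/meminfo"
        return 1
    }
    node_counter 0 HugePages_Surp   # expected to be 0 in this run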
00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.932 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.933 10:12:51 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:06.933 node0=1024 expecting 1024 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:06.933 00:04:06.933 real 0m3.987s 00:04:06.933 user 0m1.327s 00:04:06.933 sys 0m1.952s 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:06.933 10:12:51 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:06.933 ************************************ 00:04:06.933 END TEST default_setup 00:04:06.933 ************************************ 00:04:07.192 10:12:51 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:07.192 10:12:51 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:07.192 10:12:51 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:07.192 10:12:51 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:07.192 10:12:51 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:07.192 ************************************ 00:04:07.192 START TEST per_node_1G_alloc 00:04:07.192 ************************************ 00:04:07.192 10:12:51 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:04:07.192 10:12:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:07.192 10:12:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:04:07.192 10:12:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:07.192 10:12:51 setup.sh.hugepages.per_node_1G_alloc -- 
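default_setup finishes here with the expected layout (node0=1024 expecting 1024) and its timing summary, and per_node_1G_alloc starts by turning a 1 GiB request into a per-node page count: get_test_nr_hugepages is called with size 1048576 (kB) for nodes 0 and 1, and with the 2048 kB Hugepagesize reported earlier that comes out to 512 pages, assigned to each node in the lines that follow. The arithmetic, sketched with the values from this run (the exact expression inside get_test_nr_hugepages is not shown in this excerpt):

    # how a 1 GiB request becomes 512 hugepages per node in this run
    size_kb=1048576                              # size passed to get_test_nr_hugepages
    hugepage_kb=2048                             # Hugepagesize from /proc/meminfo above
    nr_hugepages=$(( size_kb / hugepage_kb ))    # 512
    echo "NRHUGE=$nr_hugepages HUGENODE=0,1"     # matches the values exported below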
setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:04:07.192 10:12:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:07.192 10:12:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:04:07.192 10:12:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:07.192 10:12:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:07.192 10:12:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:07.192 10:12:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:04:07.192 10:12:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:04:07.192 10:12:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:07.192 10:12:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:07.192 10:12:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:07.192 10:12:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:07.192 10:12:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:07.192 10:12:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:04:07.192 10:12:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:07.192 10:12:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:07.192 10:12:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:07.192 10:12:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:07.192 10:12:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:07.192 10:12:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:07.192 10:12:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:04:07.192 10:12:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:07.192 10:12:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:07.192 10:12:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:09.726 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:09.726 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:09.726 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:09.726 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:09.726 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:09.726 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:09.726 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:09.726 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:09.726 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:09.726 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:09.726 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:09.726 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:09.726 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:09.726 
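With NRHUGE=512 and HUGENODE=0,1 exported, the test invokes scripts/setup.sh, and its output begins with the PCI devices already bound to vfio-pci. The per-node allocation itself ultimately goes through the kernel's per-node sysfs knobs; a minimal sketch of that interface, assuming 2 MiB pages on nodes 0 and 1 (this is the standard kernel path, not a quote of what setup.sh does internally):

    # request 512 x 2 MiB hugepages on each of two NUMA nodes via sysfs
    for node in 0 1; do
        echo 512 | sudo tee \
            "/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages"
    done
    # confirm what the kernel actually managed to allocate per node
    cat /sys/devices/system/node/node[01]/hugepages/hugepages-2048kB/nr_hugepages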
0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:09.726 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:09.726 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:09.726 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:09.991 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:04:09.991 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:09.991 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:09.991 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:09.991 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:09.991 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:09.991 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:09.991 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:09.991 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:09.991 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:09.991 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:09.991 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:09.991 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:09.991 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:09.991 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.991 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.991 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.991 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.991 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.991 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.991 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.991 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 173953272 kB' 'MemAvailable: 176828164 kB' 'Buffers: 4928 kB' 'Cached: 11755208 kB' 'SwapCached: 0 kB' 'Active: 8772928 kB' 'Inactive: 3508388 kB' 'Active(anon): 8380920 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3508388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523908 kB' 'Mapped: 256584 kB' 'Shmem: 7859740 kB' 'KReclaimable: 237892 kB' 'Slab: 780648 kB' 'SReclaimable: 237892 kB' 'SUnreclaim: 542756 kB' 'KernelStack: 20432 kB' 'PageTables: 8640 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9876612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314936 kB' 'VmallocChunk: 0 kB' 'Percpu: 71040 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2552788 kB' 'DirectMap2M: 10758144 kB' 'DirectMap1G: 188743680 kB' 00:04:09.991 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.991 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.991 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.991 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.991 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.991 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.991 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.991 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.991 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.991 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.991 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.991 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.991 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.991 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.991 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.991 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.991 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.991 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.991 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.991 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.991 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.991 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.991 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.991 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.991 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.991 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.991 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.991 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.991 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.991 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:04:09.991 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.991 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.991 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.991 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.991 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.991 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.991 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.991 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.991 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.991 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.991 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.991 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.991 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.991 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.991 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.991 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.991 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.991 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.991 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.992 10:12:54 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 
0 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.992 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 173954640 kB' 'MemAvailable: 176829532 kB' 'Buffers: 4928 kB' 'Cached: 11755220 kB' 'SwapCached: 0 kB' 'Active: 8771772 kB' 'Inactive: 3508388 kB' 'Active(anon): 8379764 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3508388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523208 kB' 'Mapped: 256480 kB' 'Shmem: 7859752 kB' 'KReclaimable: 237892 kB' 'Slab: 780716 kB' 'SReclaimable: 237892 kB' 'SUnreclaim: 542824 kB' 'KernelStack: 20448 kB' 'PageTables: 8928 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9877004 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314920 kB' 'VmallocChunk: 0 kB' 'Percpu: 71040 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2552788 kB' 'DirectMap2M: 10758144 kB' 'DirectMap1G: 188743680 kB' 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.993 10:12:54 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.993 10:12:54 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.993 
10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.993 10:12:54 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.993 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.994 10:12:54 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.994 10:12:54 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 173954340 kB' 'MemAvailable: 176829232 kB' 'Buffers: 4928 kB' 'Cached: 11755240 kB' 'SwapCached: 0 kB' 'Active: 8772008 kB' 'Inactive: 3508388 kB' 'Active(anon): 8380000 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3508388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523460 kB' 'Mapped: 256480 kB' 'Shmem: 7859772 kB' 'KReclaimable: 237892 kB' 'Slab: 780716 kB' 'SReclaimable: 237892 kB' 'SUnreclaim: 542824 kB' 'KernelStack: 20464 kB' 'PageTables: 8964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9877024 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314936 kB' 'VmallocChunk: 0 kB' 'Percpu: 71040 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2552788 kB' 'DirectMap2M: 10758144 kB' 'DirectMap1G: 188743680 kB' 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.994 10:12:54 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.994 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.995 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.995 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.995 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.995 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.995 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.995 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.995 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.995 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.995 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.995 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.995 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.995 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.995 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.995 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.995 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.995 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.995 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.995 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.995 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.995 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.995 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.995 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.995 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.995 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.995 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.995 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.995 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.995 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.995 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.995 10:12:54 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.995 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.995 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.995 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.995 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.995 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.995 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.995 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.995 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.995 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.995 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.995 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.995 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.995 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.995 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.995 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.995 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.995 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.995 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.995 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.995 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.995 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.995 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.995 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.995 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.995 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.995 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.995 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.995 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.995 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.995 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.995 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.995 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.995 10:12:54 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.996 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.996 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.996 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.996 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.996 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.996 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.996 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.996 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.996 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.996 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.996 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.996 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.996 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.996 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.996 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.996 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.996 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.996 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.996 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.996 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.996 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.996 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.996 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.996 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.996 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.996 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.996 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.996 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.996 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.996 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.996 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.996 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.996 10:12:54 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.996 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.996 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.996 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.996 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.996 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.996 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.996 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.996 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.996 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.996 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.996 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.996 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.996 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.996 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.996 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.996 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.996 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.996 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.997 10:12:54 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.997 10:12:54 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:09.997 10:12:54 
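The long run of [[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue pairs above is the xtrace of setup/common.sh's get_meminfo walking /proc/meminfo one field at a time (IFS=': '; read -r var val _) until the requested key matches, then echoing its value -- 0 here for HugePages_Rsvd. A minimal sketch of the same idea, using a hypothetical helper name meminfo_value rather than the test's own get_meminfo (which additionally caches the file in an array and supports per-node lookups):

#!/usr/bin/env bash
# Return the value of a single /proc/meminfo field, e.g. HugePages_Rsvd.
# Hypothetical helper for illustration; not the autotest's setup/common.sh.
meminfo_value() {
    local key=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$key" ]] || continue   # skip every other field, as in the trace above
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}

meminfo_value HugePages_Rsvd   # prints 0 when no huge pages are reserved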
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:09.997 nr_hugepages=1024 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:09.997 resv_hugepages=0 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:09.997 surplus_hugepages=0 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:09.997 anon_hugepages=0 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 173955272 kB' 'MemAvailable: 176830164 kB' 'Buffers: 4928 kB' 'Cached: 11755244 kB' 'SwapCached: 0 kB' 'Active: 8771664 kB' 'Inactive: 3508388 kB' 'Active(anon): 8379656 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3508388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523112 kB' 'Mapped: 256480 kB' 'Shmem: 7859776 kB' 'KReclaimable: 237892 kB' 'Slab: 780716 kB' 'SReclaimable: 237892 kB' 'SUnreclaim: 542824 kB' 'KernelStack: 20448 kB' 'PageTables: 8908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9877048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314936 kB' 'VmallocChunk: 0 kB' 'Percpu: 71040 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2552788 kB' 'DirectMap2M: 10758144 kB' 'DirectMap1G: 188743680 
kB' 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.997 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.998 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
[[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:09.999 10:12:54 
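The checks at setup/hugepages.sh@107-110 assert that the HugePages_Total the kernel reports (1024) equals the requested nr_hugepages plus surplus and reserved pages, both 0 in this run. A small sketch mirroring that accounting check against the same /proc/meminfo fields (variable names are illustrative, not the script's own):

#!/usr/bin/env bash
# Verify the kernel's hugepage pool matches what the test asked for,
# in the spirit of (( total == nr_hugepages + surp + resv )).
nr_requested=1024

total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
surp=$(awk  '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
resv=$(awk  '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)

if (( total == nr_requested + surp + resv )); then
    echo "pool consistent: total=$total surp=$surp resv=$resv"
else
    echo "unexpected hugepage accounting: total=$total surp=$surp resv=$resv" >&2
    exit 1
fi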
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 85730596 kB' 'MemUsed: 11932088 kB' 'SwapCached: 0 kB' 'Active: 6388428 kB' 'Inactive: 3336368 kB' 'Active(anon): 6230888 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3336368 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9538028 kB' 'Mapped: 71824 kB' 'AnonPages: 189896 kB' 'Shmem: 6044120 kB' 'KernelStack: 11368 kB' 'PageTables: 3644 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 139680 kB' 'Slab: 389100 kB' 'SReclaimable: 139680 kB' 'SUnreclaim: 249420 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.999 10:12:54 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.999 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.999 10:12:54 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.000 10:12:54 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.000 
10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.000 10:12:54 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.000 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:10.001 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:10.001 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:10.001 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:10.001 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:10.001 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:10.001 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:10.001 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:04:10.001 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:10.001 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:10.001 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.001 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:10.001 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:10.001 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.001 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.001 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.001 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.001 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718468 kB' 'MemFree: 88224820 kB' 'MemUsed: 5493648 kB' 'SwapCached: 0 kB' 'Active: 2383668 kB' 'Inactive: 172020 kB' 'Active(anon): 2149200 kB' 'Inactive(anon): 0 kB' 'Active(file): 234468 kB' 'Inactive(file): 172020 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2222208 kB' 'Mapped: 184656 kB' 'AnonPages: 333580 kB' 'Shmem: 1815720 kB' 
'KernelStack: 9096 kB' 'PageTables: 5320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98212 kB' 'Slab: 391616 kB' 'SReclaimable: 98212 kB' 'SUnreclaim: 293404 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:10.001 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.001 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.001 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.001 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.001 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.001 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.001 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.001 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.001 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.001 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.001 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.001 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.001 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.001 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.001 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.001 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.001 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.001 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.001 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.001 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.001 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.001 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.001 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.001 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.261 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.262 
10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.262 10:12:54 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.262 10:12:54 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.262 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.263 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.263 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.263 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.263 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.263 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.263 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.263 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.263 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.263 10:12:54 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.263 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.263 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.263 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.263 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.263 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.263 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.263 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.263 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.263 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.263 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.263 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.263 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.263 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.263 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.263 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.263 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.263 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:10.263 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:10.263 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:10.263 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:10.263 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:10.263 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:10.263 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:10.263 node0=512 expecting 512 00:04:10.263 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:10.263 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:10.263 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:10.263 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:10.263 node1=512 expecting 512 00:04:10.263 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:10.263 00:04:10.263 real 0m3.041s 00:04:10.263 user 0m1.256s 00:04:10.263 sys 0m1.853s 00:04:10.263 10:12:54 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:10.263 10:12:54 
setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:10.263 ************************************ 00:04:10.263 END TEST per_node_1G_alloc 00:04:10.263 ************************************ 00:04:10.263 10:12:55 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:10.263 10:12:55 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:10.263 10:12:55 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:10.263 10:12:55 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:10.263 10:12:55 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:10.263 ************************************ 00:04:10.263 START TEST even_2G_alloc 00:04:10.263 ************************************ 00:04:10.263 10:12:55 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:04:10.263 10:12:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:10.263 10:12:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:10.263 10:12:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:10.263 10:12:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:10.263 10:12:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:10.263 10:12:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:10.263 10:12:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:10.263 10:12:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:10.263 10:12:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:10.263 10:12:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:10.263 10:12:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:10.263 10:12:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:10.263 10:12:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:10.263 10:12:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:10.263 10:12:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:10.263 10:12:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:10.263 10:12:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:04:10.263 10:12:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:10.263 10:12:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:10.263 10:12:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:10.263 10:12:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:10.263 10:12:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:10.263 10:12:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:10.263 10:12:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:10.263 10:12:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:10.263 10:12:55 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:10.263 10:12:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:10.263 10:12:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:12.802 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:12.802 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:12.802 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:12.802 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:12.802 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:12.802 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:12.802 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:12.802 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:12.802 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:12.802 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:12.802 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:12.802 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:13.066 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:13.066 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:13.066 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:13.066 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:13.066 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.066 10:12:57 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 173968348 kB' 'MemAvailable: 176843240 kB' 'Buffers: 4928 kB' 'Cached: 11755368 kB' 'SwapCached: 0 kB' 'Active: 8770668 kB' 'Inactive: 3508388 kB' 'Active(anon): 8378660 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3508388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521496 kB' 'Mapped: 255524 kB' 'Shmem: 7859900 kB' 'KReclaimable: 237892 kB' 'Slab: 780280 kB' 'SReclaimable: 237892 kB' 'SUnreclaim: 542388 kB' 'KernelStack: 20352 kB' 'PageTables: 8572 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9866088 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314904 kB' 'VmallocChunk: 0 kB' 'Percpu: 71040 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2552788 kB' 'DirectMap2M: 10758144 kB' 'DirectMap1G: 188743680 kB' 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.066 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.067 10:12:57 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.067 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 173968864 kB' 'MemAvailable: 176843756 kB' 'Buffers: 4928 kB' 'Cached: 11755372 kB' 'SwapCached: 0 kB' 'Active: 8770656 kB' 'Inactive: 3508388 kB' 'Active(anon): 8378648 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3508388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521996 kB' 'Mapped: 255436 kB' 'Shmem: 7859904 kB' 'KReclaimable: 237892 kB' 'Slab: 780252 kB' 'SReclaimable: 237892 kB' 'SUnreclaim: 542360 kB' 'KernelStack: 20432 kB' 'PageTables: 8808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9866108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314888 kB' 'VmallocChunk: 0 kB' 'Percpu: 71040 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2552788 kB' 'DirectMap2M: 10758144 kB' 'DirectMap1G: 188743680 kB' 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
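[editor note] The trace above is setup/common.sh's get_meminfo helper scanning /proc/meminfo field by field with `IFS=': ' read -r var val _` until the requested key (here AnonHugePages, then HugePages_Surp) matches, at which point it echoes the value and returns 0. The following is a minimal, hedged reconstruction of that pattern in plain bash; it is an assumption for illustration, not the actual SPDK setup/common.sh (the real helper also reads per-node meminfo files and strips the leading "Node <n> " prefix, which this sketch omits).

    # Simplified get_meminfo: mirrors the IFS=': ' scan seen in the trace.
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # assumption: standard sysfs per-node meminfo path when a node is given
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        local var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "${val%% *}"   # drop a trailing "kB" unit if present
                return 0
            fi
        done < "$mem_f"
        return 1
    }

Called as `get_meminfo HugePages_Surp` this walks every meminfo line (the long run of "continue" entries above) and prints 0 once the HugePages_Surp line is reached, which is exactly the `echo 0` / `return 0` pair visible in the log.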
00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.068 10:12:57 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.068 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.069 10:12:57 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.069 10:12:57 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.069 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.070 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.070 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.070 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.070 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.070 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:13.070 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:13.070 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:13.070 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:13.070 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local 
get=HugePages_Rsvd 00:04:13.070 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:13.070 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:13.070 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.070 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.070 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.070 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.070 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.070 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.070 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.070 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.070 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 173969476 kB' 'MemAvailable: 176844368 kB' 'Buffers: 4928 kB' 'Cached: 11755388 kB' 'SwapCached: 0 kB' 'Active: 8770376 kB' 'Inactive: 3508388 kB' 'Active(anon): 8378368 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3508388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521692 kB' 'Mapped: 255436 kB' 'Shmem: 7859920 kB' 'KReclaimable: 237892 kB' 'Slab: 780252 kB' 'SReclaimable: 237892 kB' 'SUnreclaim: 542360 kB' 'KernelStack: 20432 kB' 'PageTables: 8804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9866128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314888 kB' 'VmallocChunk: 0 kB' 'Percpu: 71040 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2552788 kB' 'DirectMap2M: 10758144 kB' 'DirectMap1G: 188743680 kB' 00:04:13.070 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.070 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.070 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.070 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.070 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.070 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.070 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.070 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.070 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.070 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.070 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.070 
10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.070 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.070 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.070 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.070 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.070 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.070 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.070 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.070 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.070 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.070 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.070 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.070 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.070 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.070 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.070 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.070 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.070 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.070 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.070 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.070 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.070 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.070 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.070 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.070 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.070 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.070 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.070 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.070 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.070 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.070 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.070 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.070 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.070 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.070 10:12:57 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.070 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.070 10:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.070 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.070 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.070 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.070 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.070 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.070 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.070 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.070 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.070 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.070 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.070 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.070 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.070 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.070 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.070 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.070 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.070 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.070 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.070 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.070 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.070 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.070 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.070 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.070 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.070 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.071 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.071 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.071 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.071 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.071 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.071 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.071 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:13.071 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.071 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.071 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.071 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.071 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.071 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.071 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.071 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.071 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.071 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.071 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.071 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.071 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.071 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.071 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.071 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.071 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.071 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.071 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.071 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.071 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.071 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.071 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.071 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.071 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.071 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.071 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.071 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.071 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.071 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.071 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.071 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.071 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.071 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.071 10:12:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.071 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.071 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.071 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.071 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.071 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.071 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.071 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.071 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.071 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.071 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.071 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.071 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.071 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.071 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.071 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.071 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.071 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.071 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.071 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.071 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.071 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.071 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.071 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.071 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.071 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.071 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.071 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.071 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.071 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:13.072 nr_hugepages=1024 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:13.072 resv_hugepages=0 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:13.072 surplus_hugepages=0 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:13.072 anon_hugepages=0 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.072 10:12:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 173969224 kB' 'MemAvailable: 176844116 kB' 'Buffers: 4928 kB' 'Cached: 11755428 kB' 'SwapCached: 0 kB' 'Active: 8770364 kB' 'Inactive: 3508388 kB' 'Active(anon): 8378356 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3508388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521616 kB' 'Mapped: 255436 kB' 'Shmem: 7859960 kB' 'KReclaimable: 237892 kB' 'Slab: 780252 kB' 'SReclaimable: 237892 kB' 'SUnreclaim: 542360 kB' 'KernelStack: 20416 kB' 'PageTables: 8744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9866148 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314888 kB' 'VmallocChunk: 0 kB' 'Percpu: 71040 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2552788 kB' 'DirectMap2M: 10758144 kB' 'DirectMap1G: 188743680 kB' 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.072 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.073 
10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.073 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.074 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.074 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.074 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.074 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.074 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.074 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.074 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.074 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.074 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.074 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.074 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.074 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.074 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.074 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.074 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.074 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:13.074 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.074 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.074 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.074 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.074 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.074 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.074 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.074 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.074 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.074 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.074 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.074 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.074 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.074 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.074 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.074 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.074 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.074 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.074 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.074 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.074 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.336 10:12:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.336 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
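The trace up to this point is the even_2G_alloc test calling a get_meminfo helper repeatedly: it picks /proc/meminfo (or a node-local meminfo file when a node number is given), strips the "Node N " prefix, and scans key/value pairs with `IFS=': '` until the requested counter is found. The following is a minimal stand-alone sketch of that behaviour inferred from the xtrace output, not copied from setup/common.sh; the function and variable names mirror the trace, everything else is an assumption.

```bash
#!/usr/bin/env bash
# Hedged sketch of the get_meminfo behaviour visible in the trace above.
# Inferred from the xtrace output, not the SPDK setup/common.sh source.
shopt -s extglob

get_meminfo() {
    local get=$1            # meminfo key to report, e.g. HugePages_Surp
    local node=${2:-}       # optional NUMA node number
    local var val _
    local mem_f=/proc/meminfo
    local -a mem

    # A per-node query switches to the node-local meminfo file when present.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Node files prefix every line with "Node N "; strip it so both file
    # layouts parse identically in the loop below.
    mem=("${mem[@]#Node +([0-9]) }")

    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

# Values this run reports, per the trace:
get_meminfo HugePages_Total     # 1024  (system-wide, 2048 kB pages)
get_meminfo HugePages_Surp 0    # 0     (node 0)
```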
00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 85738308 kB' 'MemUsed: 11924376 kB' 'SwapCached: 0 kB' 'Active: 6388576 kB' 'Inactive: 3336368 kB' 'Active(anon): 6231036 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3336368 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9538048 kB' 'Mapped: 71520 kB' 'AnonPages: 189992 kB' 'Shmem: 6044140 kB' 'KernelStack: 11368 kB' 'PageTables: 3648 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 139680 kB' 'Slab: 388848 kB' 'SReclaimable: 139680 kB' 'SUnreclaim: 249168 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.337 10:12:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.337 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.338 10:12:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718468 kB' 'MemFree: 88231964 kB' 'MemUsed: 5486504 kB' 'SwapCached: 0 kB' 'Active: 2381868 kB' 'Inactive: 172020 kB' 'Active(anon): 2147400 kB' 'Inactive(anon): 0 kB' 'Active(file): 234468 kB' 'Inactive(file): 172020 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2222332 kB' 'Mapped: 183916 kB' 'AnonPages: 331692 kB' 'Shmem: 1815844 kB' 'KernelStack: 9064 kB' 'PageTables: 5156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98212 kB' 'Slab: 391404 kB' 'SReclaimable: 98212 kB' 'SUnreclaim: 293192 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.338 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.339 10:12:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.339 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.340 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.340 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.340 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.340 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.340 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.340 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.340 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.340 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.340 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.340 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.340 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.340 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.340 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.340 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.340 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.340 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.340 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.340 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.340 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.340 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:13.340 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:13.340 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:13.340 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:13.340 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:13.340 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:13.340 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:13.340 node0=512 expecting 512 00:04:13.340 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:13.340 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:13.340 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:13.340 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:13.340 node1=512 expecting 512 00:04:13.340 10:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:13.340 00:04:13.340 real 0m3.047s 00:04:13.340 user 0m1.240s 00:04:13.340 sys 0m1.875s 00:04:13.340 10:12:58 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:13.340 10:12:58 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:13.340 ************************************ 00:04:13.340 END TEST even_2G_alloc 00:04:13.340 ************************************ 00:04:13.340 
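For reference, the trace above shows the harness reading /sys/devices/system/node/node*/meminfo, stripping the "Node N " prefix, and splitting each line on ': ' to pull out HugePages_Surp per node before checking the expected node0=512 / node1=512 split. A minimal standalone sketch of the same idea follows; it is illustrative only (it is not the project's setup/common.sh get_meminfo function) and assumes a Linux host with NUMA sysfs meminfo files present, as the log relies on.

#!/usr/bin/env bash
# Sketch: print per-NUMA-node HugePages counters by parsing sysfs meminfo.
# Hypothetical helper for illustration; field handling mirrors the IFS=': '
# splitting seen in the trace, but this is not the test suite's own code.
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    # Lines look like "Node 0 HugePages_Total:   512"; with IFS=': ' the
    # first two fields are "Node" and the node id, then key and value.
    while IFS=': ' read -r _ _ key val _; do
        case "$key" in
            HugePages_Total|HugePages_Free|HugePages_Surp)
                printf 'node%s %s=%s\n' "$node" "$key" "$val"
                ;;
        esac
    done < "$node_dir/meminfo"
done

On the two-node machine in this log, such a loop would report 512 free 2 MB hugepages on each node for the even_2G_alloc case, which is what the "node0=512 expecting 512" / "node1=512 expecting 512" lines above assert.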
10:12:58 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:13.340 10:12:58 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:13.340 10:12:58 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:13.340 10:12:58 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:13.340 10:12:58 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:13.340 ************************************ 00:04:13.340 START TEST odd_alloc 00:04:13.340 ************************************ 00:04:13.340 10:12:58 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:04:13.340 10:12:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:13.340 10:12:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:13.340 10:12:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:13.340 10:12:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:13.340 10:12:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:13.340 10:12:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:13.340 10:12:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:13.340 10:12:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:13.340 10:12:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:13.340 10:12:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:13.340 10:12:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:13.340 10:12:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:13.340 10:12:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:13.340 10:12:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:13.340 10:12:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:13.340 10:12:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:13.340 10:12:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:04:13.340 10:12:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:13.340 10:12:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:13.340 10:12:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:04:13.340 10:12:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:13.340 10:12:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:13.340 10:12:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:13.340 10:12:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:13.340 10:12:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:13.340 10:12:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:13.340 10:12:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:13.340 10:12:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:15.906 0000:00:04.7 (8086 2021): Already using 
the vfio-pci driver 00:04:15.906 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:15.906 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:15.906 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:15.906 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:15.906 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:15.906 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:15.906 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:15.906 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:15.906 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:16.180 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:16.180 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:16.180 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:16.180 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:16.180 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:16.180 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:16.180 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 173997792 kB' 'MemAvailable: 176872668 kB' 'Buffers: 4928 kB' 'Cached: 11755528 kB' 'SwapCached: 0 kB' 'Active: 8769856 kB' 'Inactive: 3508388 kB' 'Active(anon): 8377848 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3508388 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520972 kB' 'Mapped: 255464 kB' 'Shmem: 7860060 kB' 'KReclaimable: 237860 kB' 'Slab: 780124 kB' 'SReclaimable: 237860 kB' 'SUnreclaim: 542264 kB' 'KernelStack: 20368 kB' 'PageTables: 8572 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 9866804 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314936 kB' 'VmallocChunk: 0 kB' 'Percpu: 71040 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2552788 kB' 'DirectMap2M: 10758144 kB' 'DirectMap1G: 188743680 kB' 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.180 
10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.180 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.181 
10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174001668 kB' 'MemAvailable: 176876544 kB' 'Buffers: 4928 kB' 'Cached: 11755532 kB' 'SwapCached: 0 kB' 'Active: 8770708 kB' 'Inactive: 3508388 kB' 'Active(anon): 8378700 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3508388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521852 kB' 'Mapped: 255956 kB' 'Shmem: 7860064 kB' 'KReclaimable: 237860 kB' 'Slab: 780144 kB' 'SReclaimable: 237860 kB' 'SUnreclaim: 542284 kB' 'KernelStack: 20400 kB' 'PageTables: 8700 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 9868572 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314872 kB' 'VmallocChunk: 0 kB' 'Percpu: 71040 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2552788 kB' 'DirectMap2M: 10758144 kB' 'DirectMap1G: 188743680 kB' 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.181 
10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var 
00:04:16.181 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue  [repetitive per-key scan condensed: Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free and HugePages_Rsvd are each skipped before HugePages_Surp matches]
00:04:16.182 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:16.182 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:16.182 10:13:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:16.182 10:13:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:16.182 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:16.182 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:16.182 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:16.182 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:16.182 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:16.182 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:16.182 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:16.182 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:16.182 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:16.182 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 173998476 kB' 'MemAvailable: 176873352 kB' [...] 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2552788 kB' 'DirectMap2M: 10758144 kB' 'DirectMap1G: 188743680 kB'  [middle fields of this /proc/meminfo snapshot condensed; they differ from the HugePages_Total snapshot further below only in transient activity counters (Cached, Active, AnonPages, Mapped, etc.)]
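For readability, here is a minimal stand-alone sketch of the pattern this trace keeps exercising: get_meminfo in setup/common.sh walks the meminfo snapshot with IFS=': ' and read -r var val _, skipping every key until the requested one matches, then echoes its value. The helper name and exact structure below are an assumed reconstruction for illustration, not the verbatim SPDK source.

  # Assumed reconstruction of the key scan shown in the trace (illustrative only).
  get_meminfo_sketch() {
    local get=$1 node=${2-} mem_f=/proc/meminfo line var val
    # Per-node reads use the node's own meminfo file when it exists.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
      mem_f=/sys/devices/system/node/node$node/meminfo
    while IFS= read -r line; do
      line=${line#"Node $node "}          # per-node files prefix every key with "Node N "
      IFS=': ' read -r var val _ <<< "$line"
      if [[ $var == "$get" ]]; then
        echo "$val"                       # counters are plain numbers, sizes are in kB
        return 0
      fi
    done < "$mem_f"
    return 1
  }
  # Example: get_meminfo_sketch HugePages_Surp      -> 0 on this host
  #          get_meminfo_sketch HugePages_Total 0   -> 512 on node0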
00:04:16.182 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue  [repetitive per-key scan condensed: every field from MemTotal through HugePages_Free is skipped before HugePages_Rsvd matches]
00:04:16.184 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:16.184 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:16.184 10:13:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:16.184 10:13:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:04:16.184 nr_hugepages=1025
00:04:16.184 10:13:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:16.184 resv_hugepages=0
00:04:16.184 10:13:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:16.184 surplus_hugepages=0
00:04:16.184 10:13:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:16.184 anon_hugepages=0
00:04:16.184 10:13:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:16.184 10:13:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
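The arithmetic checks above boil down to simple hugepage accounting: the kernel-reported total must equal the requested count plus any surplus and reserved pages. A hedged stand-alone equivalent, with illustrative variable names rather than the script's own:

  # Pull the hugepage counters straight from /proc/meminfo (values are page counts).
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)   # 1025 in the trace above
  surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)     # 0
  resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)     # 0
  nr_hugepages=1025                                             # what odd_alloc asked for
  (( total == nr_hugepages + surp + resv )) && echo "hugepage accounting is consistent"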
00:04:16.184 10:13:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:16.184 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:16.184 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:16.184 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:16.184 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:16.184 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:16.184 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:16.184 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:16.184 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:16.184 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:16.184 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 173998476 kB' 'MemAvailable: 176873352 kB' 'Buffers: 4928 kB' 'Cached: 11755552 kB' 'SwapCached: 0 kB' 'Active: 8770128 kB' 'Inactive: 3508388 kB' 'Active(anon): 8378120 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3508388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'AnonPages: 521244 kB' 'Mapped: 255796 kB' 'Shmem: 7860084 kB' 'KReclaimable: 237860 kB' 'Slab: 780152 kB' 'SReclaimable: 237860 kB' 'SUnreclaim: 542292 kB' 'KernelStack: 20416 kB' 'PageTables: 8768 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 9866864 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314888 kB' 'Percpu: 71040 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2552788 kB' 'DirectMap2M: 10758144 kB' 'DirectMap1G: 188743680 kB'  [zero-valued bookkeeping fields omitted from this snapshot for readability]
00:04:16.184 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [[ <key> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue  [repetitive per-key scan condensed: every field before HugePages_Total is skipped]
00:04:16.447 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:04:16.447 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:16.447 10:13:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:16.447 10:13:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:16.447 10:13:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:04:16.447 10:13:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:16.447 10:13:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:16.447 10:13:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:16.447 10:13:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:04:16.447 10:13:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:16.447 10:13:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
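get_nodes has just read back how the 1025-page request ended up spread across the two NUMA nodes: 512 pages on one node and 513 on the other. As a worked illustration (not the test's own code), an odd total splits this way simply because integer division leaves one page over:

  # Distribute an odd hugepage count over the available NUMA nodes (illustrative sketch;
  # which node receives the extra page is an assumption that merely matches 512/513 above).
  total=1025
  nodes=(/sys/devices/system/node/node[0-9]*)
  base=$(( total / ${#nodes[@]} ))        # 1025 / 2 = 512
  extra=$(( total % ${#nodes[@]} ))       # 1025 % 2 = 1
  for i in "${!nodes[@]}"; do
    count=$base
    (( i == ${#nodes[@]} - 1 )) && count=$(( base + extra ))   # last node gets 513
    echo "node$i -> $count pages"
    # To pin such a layout explicitly one could write the standard per-node sysfs knob, e.g.:
    # echo "$count" > "${nodes[$i]}/hugepages/hugepages-2048kB/nr_hugepages"
  done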
00:04:16.448 10:13:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:16.448 10:13:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:16.448 10:13:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:16.448 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:16.448 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:04:16.448 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:16.448 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:16.448 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:16.448 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:16.448 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:16.448 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 85773232 kB' 'MemUsed: 11889452 kB' 'Active: 6388548 kB' 'Inactive: 3336368 kB' 'FilePages: 9538088 kB' 'Mapped: 71520 kB' 'AnonPages: 189920 kB' 'Shmem: 6044180 kB' 'KernelStack: 11352 kB' 'PageTables: 3596 kB' 'KReclaimable: 139648 kB' 'Slab: 388756 kB' 'SReclaimable: 139648 kB' 'SUnreclaim: 249108 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'  [node0 meminfo snapshot; remaining fields condensed for readability]
00:04:16.448 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue  [per-key scan of node0's meminfo in progress: each field is skipped until HugePages_Surp matches; the trace continues beyond this point]
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.448 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718468 kB' 'MemFree: 88223984 kB' 'MemUsed: 5494484 kB' 'SwapCached: 0 kB' 'Active: 2381508 kB' 'Inactive: 172020 kB' 'Active(anon): 2147040 kB' 'Inactive(anon): 0 kB' 'Active(file): 234468 kB' 'Inactive(file): 172020 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2222432 kB' 'Mapped: 183932 kB' 'AnonPages: 331232 kB' 'Shmem: 1815944 kB' 'KernelStack: 9064 kB' 'PageTables: 5152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98212 kB' 'Slab: 391396 kB' 'SReclaimable: 98212 kB' 'SUnreclaim: 293184 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.449 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
[[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
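The trace above is the common.sh get_meminfo helper walking every line of /sys/devices/system/node/node1/meminfo until it hits HugePages_Surp. A minimal standalone sketch of that same parsing approach follows; the helper name get_node_meminfo is a hypothetical label for illustration, not a function from the test suite, but the mapfile read, the "Node N " prefix strip, and the IFS=': ' split mirror what the trace shows.

#!/usr/bin/env bash
# Sketch: pull one field out of a per-node meminfo file, the way the
# traced get_meminfo loop does (read all lines, drop the "Node <n> "
# prefix, split each line on ':' plus whitespace, match the key).
get_node_meminfo() {               # hypothetical helper name
    local field=$1 node=$2
    local mem_f=/sys/devices/system/node/node${node}/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node meminfo prefixes every line with "Node <n> "; strip it.
    mem=("${mem[@]#"Node $node "}")
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$field" ]] && { echo "$val"; return 0; }
    done
    return 1
}

get_node_meminfo HugePages_Surp 1   # prints e.g. 0, matching the trace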
00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:16.450 node0=512 expecting 513 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:16.450 node1=513 expecting 512 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:16.450 00:04:16.450 real 0m3.062s 00:04:16.450 user 0m1.277s 00:04:16.450 sys 0m1.856s 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:16.450 10:13:01 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:16.450 ************************************ 00:04:16.450 END TEST odd_alloc 00:04:16.450 ************************************ 00:04:16.450 10:13:01 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:16.450 10:13:01 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:16.450 10:13:01 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:16.450 10:13:01 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:16.450 10:13:01 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:16.450 ************************************ 00:04:16.450 START TEST custom_alloc 00:04:16.450 ************************************ 00:04:16.450 10:13:01 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:04:16.450 10:13:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:16.450 10:13:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:16.450 10:13:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:16.450 10:13:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:16.450 10:13:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:16.450 10:13:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:16.450 10:13:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:16.450 10:13:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:16.450 10:13:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size 
>= default_hugepages )) 00:04:16.450 10:13:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:16.450 10:13:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:16.450 10:13:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:16.450 10:13:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:16.450 10:13:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:16.450 10:13:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:16.450 10:13:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:16.450 10:13:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:16.450 10:13:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:16.451 10:13:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:16.451 10:13:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:16.451 10:13:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:16.451 10:13:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:04:16.451 10:13:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:16.451 10:13:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:16.451 10:13:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:16.451 10:13:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:16.451 10:13:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:16.451 10:13:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:16.451 10:13:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:16.451 10:13:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:16.451 10:13:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:16.451 10:13:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:16.451 10:13:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:16.451 10:13:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:16.451 10:13:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:16.451 10:13:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:16.451 10:13:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:16.451 10:13:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:16.451 10:13:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:16.451 10:13:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:16.451 10:13:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:16.451 10:13:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:16.451 10:13:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:16.451 10:13:01 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:16.451 10:13:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:16.451 10:13:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:16.451 10:13:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:16.451 10:13:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:16.451 10:13:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:16.451 10:13:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:16.451 10:13:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:16.451 10:13:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:16.451 10:13:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:16.451 10:13:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:16.451 10:13:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:16.451 10:13:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:16.451 10:13:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:16.451 10:13:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:16.451 10:13:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:16.451 10:13:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:16.451 10:13:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:16.451 10:13:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:16.451 10:13:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:16.451 10:13:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:16.451 10:13:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:16.451 10:13:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:16.451 10:13:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:16.451 10:13:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:16.451 10:13:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:16.451 10:13:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:16.451 10:13:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:16.451 10:13:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:19.033 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:19.033 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:19.033 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:19.033 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:19.033 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:19.033 0000:00:04.3 (8086 
2021): Already using the vfio-pci driver 00:04:19.033 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:19.033 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:19.294 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:19.294 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:19.294 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:19.294 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:19.294 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:19.294 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:19.294 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:19.294 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:19.294 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:19.294 10:13:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:19.294 10:13:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:19.294 10:13:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:19.294 10:13:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:19.294 10:13:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:19.294 10:13:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:19.294 10:13:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:19.294 10:13:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:19.294 10:13:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:19.294 10:13:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:19.294 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:19.294 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:19.294 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:19.294 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.294 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.294 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:19.294 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:19.294 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 172956456 kB' 'MemAvailable: 175831332 kB' 'Buffers: 4928 kB' 'Cached: 11755684 kB' 'SwapCached: 0 kB' 'Active: 8770424 kB' 'Inactive: 3508388 kB' 'Active(anon): 8378416 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3508388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521464 kB' 'Mapped: 255492 
kB' 'Shmem: 7860216 kB' 'KReclaimable: 237860 kB' 'Slab: 779468 kB' 'SReclaimable: 237860 kB' 'SUnreclaim: 541608 kB' 'KernelStack: 20400 kB' 'PageTables: 8712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 9867484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314888 kB' 'VmallocChunk: 0 kB' 'Percpu: 71040 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2552788 kB' 'DirectMap2M: 10758144 kB' 'DirectMap1G: 188743680 kB' 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.295 
10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# continue 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.295 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
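In this custom_alloc phase the test built HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024', re-ran setup.sh, and is now verifying a 1536-page total by re-reading meminfo. A rough sketch of requesting and checking such an uneven per-node 2 MB hugepage split directly through the standard kernel sysfs knobs is below; the want map, the helper names, and the exact values are illustrative assumptions, not what setup.sh does internally.

#!/usr/bin/env bash
# Sketch: ask for 512 hugepages on node0 and 1024 on node1, then check
# the global HugePages_Total, loosely mirroring the 512/1024 = 1536
# layout exercised here. Requires root; uses the standard sysfs paths.
set -euo pipefail

declare -A want=( [0]=512 [1]=1024 )   # node -> 2 MB pages (assumed example)

request_split() {
    local node
    for node in "${!want[@]}"; do
        echo "${want[$node]}" \
            > "/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages"
    done
}

verify_total() {
    local node expected=0 total
    for node in "${!want[@]}"; do (( expected += want[$node] )); done
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    (( total == expected )) && echo "ok: $total pages" \
                            || echo "mismatch: got $total, want $expected"
}

request_split
verify_total   # expect "ok: 1536 pages" when both nodes satisfied the request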
00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 
-- # local node= 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 172958728 kB' 'MemAvailable: 175833604 kB' 'Buffers: 4928 kB' 'Cached: 11755688 kB' 'SwapCached: 0 kB' 'Active: 8770208 kB' 'Inactive: 3508388 kB' 'Active(anon): 8378200 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3508388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521244 kB' 'Mapped: 255472 kB' 'Shmem: 7860220 kB' 'KReclaimable: 237860 kB' 'Slab: 779540 kB' 'SReclaimable: 237860 kB' 'SUnreclaim: 541680 kB' 'KernelStack: 20416 kB' 'PageTables: 8748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 9867500 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314856 kB' 'VmallocChunk: 0 kB' 'Percpu: 71040 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2552788 kB' 'DirectMap2M: 10758144 kB' 'DirectMap1G: 188743680 kB' 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.296 10:13:04 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.296 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.297 
10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.297 10:13:04 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.297 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.298 10:13:04 
setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 172958728 kB' 'MemAvailable: 175833604 kB' 'Buffers: 4928 kB' 'Cached: 11755704 kB' 'SwapCached: 0 kB' 'Active: 8770232 kB' 'Inactive: 3508388 kB' 'Active(anon): 8378224 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3508388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521240 kB' 'Mapped: 255472 kB' 'Shmem: 7860236 kB' 'KReclaimable: 237860 kB' 'Slab: 779540 kB' 'SReclaimable: 237860 kB' 'SUnreclaim: 541680 kB' 'KernelStack: 20416 kB' 'PageTables: 8748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 9867524 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314872 kB' 'VmallocChunk: 0 kB' 'Percpu: 71040 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2552788 kB' 'DirectMap2M: 10758144 kB' 'DirectMap1G: 188743680 kB' 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.298 10:13:04 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.298 10:13:04 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.298 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.299 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.299 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.299 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.299 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.299 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.299 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.299 
10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.299 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.299 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.299 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.299 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.299 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.299 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.299 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.299 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.299 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.299 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.299 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.299 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.299 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.299 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.299 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.299 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.299 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.299 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.299 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.299 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.299 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.299 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.299 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.299 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.299 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.299 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.299 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.299 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.299 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.299 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.299 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.299 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.299 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.299 10:13:04 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:19.299 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.299 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.299 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.299 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.299 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.299 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.299 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.299 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.299 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.299 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.299 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.299 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.299 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.299 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.299 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.299 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.299 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.299 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.299 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.299 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.299 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.299 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.299 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.299 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.299 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.299 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.299 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.299 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.299 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.299 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.299 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.299 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.299 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.562 10:13:04 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:19.562 nr_hugepages=1536 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:19.562 resv_hugepages=0 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:19.562 surplus_hugepages=0 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:19.562 anon_hugepages=0 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 172959188 kB' 'MemAvailable: 175834064 kB' 'Buffers: 4928 kB' 'Cached: 11755720 kB' 'SwapCached: 0 kB' 'Active: 8770748 kB' 'Inactive: 3508388 kB' 'Active(anon): 8378740 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3508388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521756 kB' 'Mapped: 255488 kB' 'Shmem: 
7860252 kB' 'KReclaimable: 237860 kB' 'Slab: 779540 kB' 'SReclaimable: 237860 kB' 'SUnreclaim: 541680 kB' 'KernelStack: 20400 kB' 'PageTables: 8704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 9870168 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314824 kB' 'VmallocChunk: 0 kB' 'Percpu: 71040 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2552788 kB' 'DirectMap2M: 10758144 kB' 'DirectMap1G: 188743680 kB' 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
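Just above, setup/hugepages.sh has collected anon, surp and resv this way and printed nr_hugepages=1536, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0; the trace then shows two arithmetic tests, (( 1536 == nr_hugepages + surp + resv )) and (( 1536 == nr_hugepages )), before re-reading HugePages_Total in the loop that continues below. A self-contained sketch of that consistency check, using a one-line awk stand-in for get_meminfo and a hypothetical wrapper name (verify_hugepages) so the logic can be tried outside the test harness:
# Hypothetical stand-alone version of the consistency check traced above; names are illustrative.
get_meminfo() { awk -v f="$1" -F': +' '$1 == f { print $2 + 0 }' /proc/meminfo; }   # value only, no "kB"

verify_hugepages() {
    local expected=$1                               # requested pool size, 1536 in this run
    local total surp resv anon
    total=$(get_meminfo HugePages_Total)
    surp=$(get_meminfo HugePages_Surp)
    resv=$(get_meminfo HugePages_Rsvd)
    anon=$(get_meminfo AnonHugePages)
    echo "nr_hugepages=$total resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"
    # Mirrors the two checks visible in the trace: configured pages plus surplus/reserved
    # must add up to the requested count, and the configured pool must match it exactly.
    (( expected == total + surp + resv )) || return 1
    (( expected == total )) || return 1
}

verify_hugepages 1536   # in this log: 1536 == 1536 + 0 + 0, so both tests pass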
00:04:19.562 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.563 10:13:04 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.563 10:13:04 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.563 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 
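[editor's note] The trace above is setup/common.sh resolving HugePages_Total (1536) by walking a meminfo file field by field with IFS=': ' and read. A minimal, hypothetical sketch of that pattern — not the SPDK helper itself, names are illustrative — looks like this:

    #!/usr/bin/env bash
    # Hypothetical sketch of the meminfo lookup pattern exercised above:
    # split each line on ': ', skip non-matching keys, echo the value.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # Per-NUMA-node counters live in sysfs when a node index is supplied.
        [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
        local var val _
        # The per-node file prefixes every line with "Node <n> ", hence the sed.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue    # e.g. HugePages_Total
            echo "$val"                         # value only; a trailing "kB" lands in $_
            return 0
        done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
        return 1
    }

    # Usage mirroring the trace (values are those of the test host above):
    get_meminfo_sketch HugePages_Total      # -> 1536
    get_meminfo_sketch HugePages_Surp 0     # -> 0

Once the total is known, the assertion seen above, (( 1536 == nr_hugepages + surp + resv )), is just that value checked against the requested allocation plus surplus and reserved pages.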
00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 85766912 kB' 'MemUsed: 11895772 kB' 'SwapCached: 0 kB' 'Active: 6389136 kB' 'Inactive: 3336368 kB' 'Active(anon): 6231596 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3336368 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9538128 kB' 'Mapped: 71532 kB' 'AnonPages: 190516 kB' 'Shmem: 6044220 kB' 'KernelStack: 11464 kB' 'PageTables: 3584 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 139648 kB' 'Slab: 388908 kB' 'SReclaimable: 139648 kB' 'SUnreclaim: 249260 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.564 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.565 10:13:04 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.565 10:13:04 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718468 kB' 'MemFree: 87191460 kB' 'MemUsed: 6527008 kB' 'SwapCached: 0 kB' 'Active: 2382784 kB' 'Inactive: 172020 kB' 'Active(anon): 2148316 kB' 'Inactive(anon): 0 kB' 'Active(file): 234468 kB' 'Inactive(file): 172020 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2222560 kB' 'Mapped: 184460 kB' 'AnonPages: 332360 kB' 'Shmem: 1816072 kB' 'KernelStack: 9048 kB' 'PageTables: 5096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98212 kB' 'Slab: 390632 kB' 'SReclaimable: 98212 kB' 'SUnreclaim: 292420 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.565 10:13:04 
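[editor's note] Here the same reader is pointed at /sys/devices/system/node/node1/meminfo; the per-node dumps above report 512 hugepages on node 0 and 1024 on node 1, with zero surplus on either. A self-contained, illustrative sketch of that per-node accounting loop (not the hugepages.sh internals; variable names are assumptions):

    # Illustrative only: collect HugePages_Total + HugePages_Surp per NUMA node
    # from sysfs, the same counters the trace above reads one field at a time.
    declare -A nodes_test=()
    resv=0   # HugePages_Rsvd from the global meminfo; 0 in this run

    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        total=$(awk '/HugePages_Total:/ {print $NF}' "$node_dir/meminfo")   # 512 / 1024 above
        surp=$(awk '/HugePages_Surp:/  {print $NF}' "$node_dir/meminfo")    # 0 on both nodes
        nodes_test[$node]=$(( total + surp + resv ))
    done

    echo "node0=${nodes_test[0]:-0} node1=${nodes_test[1]:-0}"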
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.565 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.566 10:13:04 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.566 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.567 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.567 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.567 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.567 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.567 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.567 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.567 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.567 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.567 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.567 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.567 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.567 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.567 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.567 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.567 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.567 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.567 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.567 10:13:04 
setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:19.567 10:13:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:19.567 10:13:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:19.567 10:13:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:19.567 10:13:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:19.567 10:13:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:19.567 10:13:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:19.567 node0=512 expecting 512 00:04:19.567 10:13:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:19.567 10:13:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:19.567 10:13:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:19.567 10:13:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:19.567 node1=1024 expecting 1024 00:04:19.567 10:13:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:19.567 00:04:19.567 real 0m3.063s 00:04:19.567 user 0m1.233s 00:04:19.567 sys 0m1.896s 00:04:19.567 10:13:04 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:19.567 10:13:04 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:19.567 ************************************ 00:04:19.567 END TEST custom_alloc 00:04:19.567 ************************************ 00:04:19.567 10:13:04 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:19.567 10:13:04 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:19.567 10:13:04 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:19.567 10:13:04 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:19.567 10:13:04 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:19.567 ************************************ 00:04:19.567 START TEST no_shrink_alloc 00:04:19.567 ************************************ 00:04:19.567 10:13:04 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:04:19.567 10:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:19.567 10:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:19.567 10:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:19.567 10:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:19.567 10:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:19.567 10:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:19.567 10:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:19.567 10:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:19.567 10:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:19.567 10:13:04 
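[editor's note] custom_alloc passes by comparing the observed per-node counts against the expected layout as one joined string (512,1024). The no_shrink_alloc test that starts next asks get_test_nr_hugepages for 2097152 with the allocation pinned to node 0; assuming the size argument is in kB and is divided by the 2048 kB default hugepage size reported in the meminfo dumps, that works out to the nr_hugepages=1024 seen in the trace. A rough sketch under that assumption (variable names are not the script's):

    # Assumption: size is in kB and divided by the default hugepage size.
    size_kb=2097152                             # argument to get_test_nr_hugepages
    hugepage_kb=2048                            # "Hugepagesize: 2048 kB" in the dumps
    nr_hugepages=$(( size_kb / hugepage_kb ))   # 1024, matching the trace

    node_ids=(0)                                # second argument: restrict to node 0
    declare -A nodes_test=()
    for node in "${node_ids[@]}"; do
        nodes_test[$node]=$nr_hugepages         # node0 gets all 1024 pages
    done
    echo "node${node_ids[0]}=${nodes_test[${node_ids[0]}]} expecting $nr_hugepages"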
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:19.567 10:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:19.567 10:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:19.567 10:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:19.567 10:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:19.567 10:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:19.567 10:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:19.567 10:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:19.567 10:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:19.567 10:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:19.567 10:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:19.567 10:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:19.567 10:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:22.868 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:22.868 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:22.868 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:22.868 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:22.868 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:22.868 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:22.868 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:22.868 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:22.868 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:22.868 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:22.868 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:22.868 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:22.868 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:22.868 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:22.868 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:22.868 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:22.868 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:22.868 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:22.868 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:22.868 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:22.868 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:22.868 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:22.868 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:22.868 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:22.868 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:22.868 10:13:07 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:22.868 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:22.868 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:22.868 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:22.868 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:22.868 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.868 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.868 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.868 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.868 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.868 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.868 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.868 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 173981532 kB' 'MemAvailable: 176856408 kB' 'Buffers: 4928 kB' 'Cached: 11755836 kB' 'SwapCached: 0 kB' 'Active: 8771244 kB' 'Inactive: 3508388 kB' 'Active(anon): 8379236 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3508388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522048 kB' 'Mapped: 255528 kB' 'Shmem: 7860368 kB' 'KReclaimable: 237860 kB' 'Slab: 779504 kB' 'SReclaimable: 237860 kB' 'SUnreclaim: 541644 kB' 'KernelStack: 20448 kB' 'PageTables: 8320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9869356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315032 kB' 'VmallocChunk: 0 kB' 'Percpu: 71040 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2552788 kB' 'DirectMap2M: 10758144 kB' 'DirectMap1G: 188743680 kB' 00:04:22.868 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.868 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.868 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.868 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.868 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.868 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.868 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.868 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.868 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.868 
10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.868 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.868 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.868 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.868 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.869 10:13:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.869 10:13:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.869 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 173982472 kB' 'MemAvailable: 176857348 kB' 'Buffers: 4928 kB' 'Cached: 11755840 kB' 'SwapCached: 0 kB' 'Active: 8771496 kB' 'Inactive: 3508388 kB' 'Active(anon): 8379488 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3508388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 
kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522300 kB' 'Mapped: 255500 kB' 'Shmem: 7860372 kB' 'KReclaimable: 237860 kB' 'Slab: 779560 kB' 'SReclaimable: 237860 kB' 'SUnreclaim: 541700 kB' 'KernelStack: 20640 kB' 'PageTables: 9316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9870868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315144 kB' 'VmallocChunk: 0 kB' 'Percpu: 71040 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2552788 kB' 'DirectMap2M: 10758144 kB' 'DirectMap1G: 188743680 kB' 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
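
The long runs of "[[ key == pattern ]] / continue" entries above and below are the per-field scan inside the setup/common.sh get_meminfo helper: judging only from this trace, it loads /proc/meminfo (or a per-node meminfo file when a node number is supplied), strips any "Node N " prefixes, then walks the fields until the requested one matches and echoes its value. A minimal re-creation of that visible behaviour is sketched here; names follow the trace, but this is an approximation, not the actual setup/common.sh code.

#!/usr/bin/env bash
shopt -s extglob   # needed for the +([0-9]) pattern used to strip "Node N " prefixes

# Sketch of the get_meminfo behaviour seen in the trace; an approximation only.
get_meminfo() {
	local get=$1          # field to look up, e.g. AnonHugePages or HugePages_Surp
	local node=${2:-}     # optional NUMA node number; empty means system-wide
	local var val _
	local mem_f=/proc/meminfo

	# With a node number, read that node's meminfo instead of the global one.
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi

	local -a mem
	mapfile -t mem < "$mem_f"
	mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix each line with "Node N "

	# Scan field by field; skip everything until the requested key matches.
	local line
	for line in "${mem[@]}"; do
		IFS=': ' read -r var val _ <<< "$line"
		[[ $var == "$get" ]] || continue
		echo "$val"
		return 0
	done
	return 1
}

get_meminfo HugePages_Total   # on the node traced above this would print 1024
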
00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.870 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.871 10:13:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.871 10:13:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.871 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 173980080 kB' 'MemAvailable: 176854956 kB' 'Buffers: 4928 kB' 'Cached: 11755860 kB' 'SwapCached: 0 kB' 'Active: 8771492 kB' 'Inactive: 3508388 kB' 'Active(anon): 8379484 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3508388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522260 kB' 'Mapped: 255500 kB' 'Shmem: 7860392 kB' 
'KReclaimable: 237860 kB' 'Slab: 779560 kB' 'SReclaimable: 237860 kB' 'SUnreclaim: 541700 kB' 'KernelStack: 20464 kB' 'PageTables: 9120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9870892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315000 kB' 'VmallocChunk: 0 kB' 'Percpu: 71040 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2552788 kB' 'DirectMap2M: 10758144 kB' 'DirectMap1G: 188743680 kB' 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.872 10:13:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
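
These lookups (AnonHugePages, HugePages_Surp, and the HugePages_Rsvd scan continuing below) feed the accounting check that appears a little further down the trace, where setup/hugepages.sh echoes nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0 and then verifies the pool size arithmetically. The sketch below is a hedged reconstruction from this trace only; the function name and exact structure are assumptions, and the real setup/hugepages.sh differs in detail.

# Approximate reconstruction of the no_shrink_alloc accounting seen in this
# trace, using the get_meminfo sketch above. Not the actual SPDK code.
verify_hugepage_accounting() {
	local want=1024                      # pages requested by this test run
	local anon surp resv

	anon=$(get_meminfo AnonHugePages)    # 0 in this run
	surp=$(get_meminfo HugePages_Surp)   # 0 in this run
	resv=$(get_meminfo HugePages_Rsvd)   # 0 in this run

	echo "nr_hugepages=$want"
	echo "resv_hugepages=$resv"
	echo "surplus_hugepages=$surp"
	echo "anon_hugepages=$anon"

	# Every requested page must be accounted for with no surplus or reserves,
	# and HugePages_Total must equal the requested count exactly.
	(( $(get_meminfo HugePages_Total) == want + surp + resv )) || return 1
	(( $(get_meminfo HugePages_Total) == want )) || return 1
}
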
00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.872 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.873 10:13:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.873 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:22.874 nr_hugepages=1024 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:22.874 resv_hugepages=0 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:22.874 surplus_hugepages=0 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:22.874 anon_hugepages=0 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 173980844 kB' 'MemAvailable: 176855720 kB' 'Buffers: 4928 kB' 'Cached: 11755880 kB' 'SwapCached: 0 kB' 'Active: 8771652 kB' 'Inactive: 3508388 kB' 'Active(anon): 8379644 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3508388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 
kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522432 kB' 'Mapped: 255500 kB' 'Shmem: 7860412 kB' 'KReclaimable: 237860 kB' 'Slab: 779560 kB' 'SReclaimable: 237860 kB' 'SUnreclaim: 541700 kB' 'KernelStack: 20512 kB' 'PageTables: 8704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9870912 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315080 kB' 'VmallocChunk: 0 kB' 'Percpu: 71040 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2552788 kB' 'DirectMap2M: 10758144 kB' 'DirectMap1G: 188743680 kB' 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.874 10:13:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.874 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.875 10:13:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.875 10:13:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:22.875 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 84720632 kB' 'MemUsed: 12942052 kB' 'SwapCached: 0 kB' 'Active: 6391112 kB' 'Inactive: 3336368 kB' 'Active(anon): 6233572 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3336368 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9538164 kB' 'Mapped: 71532 kB' 'AnonPages: 192488 kB' 'Shmem: 6044256 kB' 'KernelStack: 11672 kB' 'PageTables: 4552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 139648 kB' 'Slab: 388884 kB' 'SReclaimable: 139648 kB' 'SUnreclaim: 249236 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:22.876 
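The trace up to this point is setup/common.sh's get_meminfo helper walking every /proc/meminfo key (or a node-local meminfo under /sys/devices/system/node) until it reaches the one it was asked for, then echoing its value; here it resolves HugePages_Total to 1024 before moving on to node 0. A minimal sketch of that lookup, with a hypothetical function name and simplified handling, not the SPDK helper itself:

#!/usr/bin/env bash
# Minimal sketch of the meminfo lookup pattern traced above (illustrative only).
shopt -s extglob

get_meminfo_value() {
    local key=$1 node=${2:-} mem_f=/proc/meminfo line var val _
    # Per-node counters live under /sys/devices/system/node/nodeN/meminfo.
    if [[ -n $node && -e /sys/devices/system/node/node${node}/meminfo ]]; then
        mem_f=/sys/devices/system/node/node${node}/meminfo
    fi
    while IFS= read -r line; do
        line=${line#Node +([0-9]) }            # node-local files prefix each line with "Node N "
        IFS=': ' read -r var val _ <<<"$line"  # e.g. var=HugePages_Total val=1024
        if [[ $var == "$key" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done <"$mem_f"
    echo 0
}

get_meminfo_value HugePages_Total      # system-wide; 1024 in the run traced above
get_meminfo_value HugePages_Surp 0     # node 0; 0 in the run traced above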
10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.876 
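The scan in progress here resolves HugePages_Surp for node 0 from its node-local meminfo; once it returns, setup/hugepages.sh compares the node's hugepage count against what the test expects and prints "node0=1024 expecting 1024" further down. A rough sketch of that comparison, with hypothetical helper names and reusing the get_meminfo_value sketch above rather than the script's own functions:

#!/usr/bin/env bash
check_node_hugepages() {
    # Compare the hugepages currently present on one NUMA node against the
    # count the test expects there (illustrative; assumes get_meminfo_value above).
    local node=$1 expected=$2 actual
    actual=$(get_meminfo_value HugePages_Total "$node")
    echo "node${node}=${actual} expecting ${expected}"
    [[ $actual -eq $expected ]]
}

check_node_hugepages 0 1024 || echo "hugepage count on node 0 differs from expectation" >&2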
10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.876 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.877 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.877 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.877 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.877 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.877 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.877 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.877 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.877 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.877 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.877 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.877 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.877 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.877 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.877 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.877 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.877 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.877 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:22.877 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.877 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.877 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.877 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.877 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.877 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.877 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.877 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.877 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.877 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.877 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.877 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.877 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:22.877 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:22.877 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:22.877 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:22.877 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:22.877 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:22.877 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:22.877 node0=1024 expecting 1024 00:04:22.877 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:22.877 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:22.877 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:22.877 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:22.877 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:22.877 10:13:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:25.415 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:25.415 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:25.415 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:25.415 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:25.415 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:25.415 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:25.415 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:25.415 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:25.415 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:25.415 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:25.415 0000:80:04.6 (8086 2021): Already using the 
vfio-pci driver 00:04:25.415 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:25.415 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:25.415 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:25.415 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:25.415 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:25.415 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:25.415 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:25.415 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:25.415 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:25.415 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:25.415 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:25.415 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:25.415 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:25.415 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:25.415 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:25.415 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:25.415 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:25.415 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:25.415 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:25.415 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.415 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.415 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.415 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.415 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.415 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.415 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.415 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.415 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 173988272 kB' 'MemAvailable: 176863148 kB' 'Buffers: 4928 kB' 'Cached: 11755964 kB' 'SwapCached: 0 kB' 'Active: 8772288 kB' 'Inactive: 3508388 kB' 'Active(anon): 8380280 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3508388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523016 kB' 'Mapped: 255516 kB' 'Shmem: 7860496 kB' 'KReclaimable: 237860 kB' 'Slab: 779284 kB' 'SReclaimable: 237860 kB' 'SUnreclaim: 541424 kB' 'KernelStack: 20448 kB' 'PageTables: 8492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9868608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314968 
kB' 'VmallocChunk: 0 kB' 'Percpu: 71040 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2552788 kB' 'DirectMap2M: 10758144 kB' 'DirectMap1G: 188743680 kB' 00:04:25.415 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.415 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.415 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.415 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.415 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.415 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.415 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.415 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.415 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.415 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.416 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:25.417 10:13:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 173989060 kB' 'MemAvailable: 176863928 kB' 'Buffers: 4928 kB' 'Cached: 11755968 kB' 'SwapCached: 0 kB' 'Active: 8772508 kB' 'Inactive: 3508388 kB' 'Active(anon): 8380500 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3508388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523268 kB' 'Mapped: 255500 kB' 'Shmem: 7860500 kB' 'KReclaimable: 237844 kB' 'Slab: 779148 kB' 'SReclaimable: 237844 kB' 'SUnreclaim: 541304 kB' 'KernelStack: 20496 kB' 'PageTables: 8948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9868628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314952 kB' 'VmallocChunk: 0 kB' 'Percpu: 71040 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2552788 kB' 'DirectMap2M: 10758144 kB' 'DirectMap1G: 188743680 kB' 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.417 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.417 10:13:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.418 10:13:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.418 10:13:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:25.418 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:25.419 
10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 173989060 kB' 'MemAvailable: 176863928 kB' 'Buffers: 4928 kB' 'Cached: 11755968 kB' 'SwapCached: 0 kB' 'Active: 8773048 kB' 'Inactive: 3508388 kB' 'Active(anon): 8381040 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3508388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523320 kB' 'Mapped: 255500 kB' 'Shmem: 7860500 kB' 'KReclaimable: 237844 kB' 'Slab: 779148 kB' 'SReclaimable: 237844 kB' 'SUnreclaim: 541304 kB' 'KernelStack: 20512 kB' 'PageTables: 9008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9868648 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314952 kB' 'VmallocChunk: 0 kB' 'Percpu: 71040 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2552788 kB' 'DirectMap2M: 10758144 kB' 'DirectMap1G: 188743680 kB' 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.419 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.420 10:13:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.420 10:13:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:25.420 nr_hugepages=1024 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:25.420 resv_hugepages=0 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:25.420 surplus_hugepages=0 00:04:25.420 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:25.420 anon_hugepages=0 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:25.421 10:13:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 173989400 kB' 'MemAvailable: 176864268 kB' 'Buffers: 4928 kB' 'Cached: 11756008 kB' 'SwapCached: 0 kB' 'Active: 8772260 kB' 'Inactive: 3508388 kB' 'Active(anon): 8380252 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3508388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522980 kB' 'Mapped: 255500 kB' 'Shmem: 7860540 kB' 'KReclaimable: 237844 kB' 'Slab: 779148 kB' 'SReclaimable: 237844 kB' 'SUnreclaim: 541304 kB' 'KernelStack: 20496 kB' 'PageTables: 8948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9868672 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314952 kB' 'VmallocChunk: 0 kB' 'Percpu: 71040 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2552788 kB' 'DirectMap2M: 10758144 kB' 'DirectMap1G: 188743680 kB' 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.421 10:13:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.421 10:13:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.421 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.422 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.422 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.422 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.422 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.422 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.422 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.422 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.422 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.422 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.422 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.422 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.422 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.422 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.422 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.681 10:13:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.681 10:13:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 84718564 kB' 'MemUsed: 12944120 kB' 'SwapCached: 0 kB' 'Active: 6390872 kB' 'Inactive: 3336368 kB' 'Active(anon): 6233332 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3336368 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9538188 kB' 'Mapped: 71524 kB' 'AnonPages: 192228 kB' 'Shmem: 6044280 kB' 'KernelStack: 11448 kB' 'PageTables: 3848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 139648 kB' 'Slab: 388800 kB' 'SReclaimable: 139648 kB' 'SUnreclaim: 249152 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.681 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
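
The same scan is repeated here with node=0: the "[[ -e /sys/devices/system/node/node0/meminfo ]]" check at the top of this block switches mem_f from /proc/meminfo to the per-node sysfs file, and the "${mem[@]#Node +([0-9]) }" expansion strips the "Node <n> " prefix those lines carry, so the same "key: value" loop can be reused unchanged. A small illustration of that prefix strip; the extglob pattern is the one in the trace, the sample line is hypothetical:

    shopt -s extglob                      # needed for the +([0-9]) pattern
    line='Node 0 HugePages_Surp:     0'   # hypothetical line from node0/meminfo
    stripped=${line#Node +([0-9]) }       # -> "HugePages_Surp:     0"
    IFS=': ' read -r var val _ <<< "$stripped"
    echo "$var=$val"                      # -> "HugePages_Surp=0"
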
00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.682 10:13:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
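
Once the per-node scan finishes, hugepages.sh folds the numbers together: the check that concludes just below verifies that the 1024 configured pages are all accounted for on node 0 ("node0=1024 expecting 1024") by adding any reserved and surplus pages to the expected per-node counts and comparing them with what the system reports. A hedged sketch of that bookkeeping follows; nodes_test (the expected distribution computed earlier in the test), nodes_sys, resv and the get_meminfo helper traced above are assumed to exist with the meanings the trace implies, and the sysfs read inside get_nodes is an assumption since the trace does not show where the counts come from:

    shopt -s extglob
    declare -a nodes_sys nodes_test       # nodes_test/resv are filled earlier in the test

    # Collect the per-node hugepage counts, as in hugepages.sh@27-33 of the trace.
    get_nodes() {
        local node
        for node in /sys/devices/system/node/node+([0-9]); do
            nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
        done
        (( ${#nodes_sys[@]} > 0 ))        # trace: no_nodes=2, (( no_nodes > 0 ))
    }

    # Compare expectation against reality, as in hugepages.sh@115-130 of the trace.
    check_distribution() {
        local node surp
        for node in "${!nodes_test[@]}"; do
            (( nodes_test[node] += resv ))                 # reserved pages
            surp=$(get_meminfo HugePages_Surp "$node")     # 0 for node 0 in this run
            (( nodes_test[node] += surp ))
        done
        for node in "${!nodes_test[@]}"; do
            echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
            [[ ${nodes_sys[node]} == "${nodes_test[node]}" ]] || return 1
        done
    }
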
00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:25.682 node0=1024 expecting 1024 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:25.682 00:04:25.682 real 0m6.002s 00:04:25.682 user 0m2.412s 00:04:25.682 sys 0m3.723s 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:25.682 10:13:10 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:25.682 ************************************ 00:04:25.682 END TEST no_shrink_alloc 00:04:25.682 ************************************ 00:04:25.682 10:13:10 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:25.682 10:13:10 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:25.682 10:13:10 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:25.682 10:13:10 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:25.682 10:13:10 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:25.682 10:13:10 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:25.682 10:13:10 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:25.682 10:13:10 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:25.682 10:13:10 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:25.682 10:13:10 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:25.682 10:13:10 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:25.682 10:13:10 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:25.682 10:13:10 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:25.682 10:13:10 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:25.682 10:13:10 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:25.682 00:04:25.682 real 0m22.765s 00:04:25.682 user 0m8.997s 00:04:25.682 sys 0m13.505s 00:04:25.682 10:13:10 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:25.682 10:13:10 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:25.682 ************************************ 00:04:25.682 END TEST hugepages 00:04:25.682 ************************************ 00:04:25.682 10:13:10 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:25.682 10:13:10 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:25.682 10:13:10 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:25.682 10:13:10 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:25.682 10:13:10 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:25.682 ************************************ 00:04:25.682 START TEST driver 00:04:25.682 ************************************ 00:04:25.682 10:13:10 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:25.683 * Looking for test storage... 
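
The driver test that starts here first has to decide which userspace PCI driver to bind. The guess_driver trace on the following lines checks whether vfio's unsafe no-IOMMU mode is enabled, counts the populated /sys/kernel/iommu_groups entries (174 on this host), and confirms via "modprobe --show-depends vfio_pci" that the module resolves to real .ko files before settling on vfio-pci. A hedged sketch of that decision; the structure follows the trace, and the fallback branch is only implied since this excerpt never takes it:

    # Sketch of the vfio-pci selection seen in setup/driver.sh's trace (not the source).
    pick_vfio_pci() {
        local unsafe_vfio=N
        if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
            unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        fi
        local iommu_groups=(/sys/kernel/iommu_groups/*)
        # vfio-pci is usable when the IOMMU is active (174 groups in this log) or
        # unsafe no-IOMMU mode is switched on, and the module actually exists.
        if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == Y ]]; then
            if modprobe --show-depends vfio_pci | grep -q '\.ko'; then
                echo vfio-pci
                return 0
            fi
        fi
        return 1                          # caller would fall back to another driver
    }

    driver=$(pick_vfio_pci) || driver='No valid driver found'
    echo "Looking for driver=$driver"     # the marker line read back later in the trace
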
00:04:25.683 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:25.683 10:13:10 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:25.683 10:13:10 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:25.683 10:13:10 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:29.873 10:13:14 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:29.873 10:13:14 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:29.873 10:13:14 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:29.873 10:13:14 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:29.873 ************************************ 00:04:29.873 START TEST guess_driver 00:04:29.873 ************************************ 00:04:29.873 10:13:14 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:04:29.873 10:13:14 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:29.873 10:13:14 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:29.873 10:13:14 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:29.873 10:13:14 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:29.873 10:13:14 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:29.873 10:13:14 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:29.873 10:13:14 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:29.873 10:13:14 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:29.873 10:13:14 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:29.873 10:13:14 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 174 > 0 )) 00:04:29.873 10:13:14 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:29.873 10:13:14 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:29.873 10:13:14 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:29.873 10:13:14 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:29.873 10:13:14 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:29.873 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:29.873 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:29.873 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:29.873 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:29.873 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:29.873 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:29.873 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:29.873 10:13:14 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:29.873 10:13:14 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:29.873 10:13:14 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:29.873 10:13:14 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:29.873 10:13:14 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:29.873 Looking for driver=vfio-pci 00:04:29.873 10:13:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:29.873 10:13:14 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:29.873 10:13:14 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:29.873 10:13:14 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:33.165 10:13:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.165 10:13:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.165 10:13:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.165 10:13:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.165 10:13:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.165 10:13:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.165 10:13:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.165 10:13:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.165 10:13:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.165 10:13:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.165 10:13:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.165 10:13:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.165 10:13:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.165 10:13:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.165 10:13:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.165 10:13:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.165 10:13:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.165 10:13:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.165 10:13:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.165 10:13:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.165 10:13:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.165 10:13:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.165 10:13:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.165 10:13:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.165 10:13:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.165 10:13:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.165 10:13:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.165 10:13:17 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.165 10:13:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.165 10:13:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.165 10:13:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.165 10:13:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.165 10:13:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.165 10:13:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.165 10:13:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.165 10:13:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.165 10:13:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.165 10:13:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.165 10:13:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.165 10:13:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.165 10:13:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.165 10:13:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.165 10:13:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.165 10:13:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.165 10:13:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.165 10:13:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.165 10:13:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.165 10:13:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.736 10:13:18 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.736 10:13:18 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.736 10:13:18 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.736 10:13:18 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:33.736 10:13:18 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:33.736 10:13:18 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:33.736 10:13:18 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:37.933 00:04:37.933 real 0m8.017s 00:04:37.933 user 0m2.368s 00:04:37.933 sys 0m4.060s 00:04:37.933 10:13:22 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:37.933 10:13:22 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:37.933 ************************************ 00:04:37.933 END TEST guess_driver 00:04:37.933 ************************************ 00:04:37.933 10:13:22 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:04:37.933 00:04:37.933 real 0m12.274s 00:04:37.933 user 0m3.603s 00:04:37.933 sys 0m6.266s 00:04:37.933 10:13:22 
setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:37.933 10:13:22 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:37.933 ************************************ 00:04:37.933 END TEST driver 00:04:37.933 ************************************ 00:04:37.933 10:13:22 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:37.933 10:13:22 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:37.933 10:13:22 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:37.933 10:13:22 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.933 10:13:22 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:37.933 ************************************ 00:04:37.933 START TEST devices 00:04:37.933 ************************************ 00:04:37.933 10:13:22 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:38.193 * Looking for test storage... 00:04:38.193 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:38.193 10:13:22 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:38.193 10:13:22 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:38.193 10:13:22 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:38.193 10:13:23 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:41.482 10:13:26 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:41.482 10:13:26 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:41.482 10:13:26 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:41.482 10:13:26 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:41.482 10:13:26 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:41.482 10:13:26 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:41.482 10:13:26 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:41.482 10:13:26 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:41.482 10:13:26 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:41.482 10:13:26 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:41.482 10:13:26 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:41.482 10:13:26 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:41.482 10:13:26 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:41.482 10:13:26 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:41.482 10:13:26 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:41.482 10:13:26 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:41.482 10:13:26 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:41.482 10:13:26 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:5e:00.0 00:04:41.482 10:13:26 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:04:41.482 10:13:26 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:41.482 10:13:26 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:41.482 
10:13:26 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:41.482 No valid GPT data, bailing 00:04:41.482 10:13:26 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:41.482 10:13:26 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:41.482 10:13:26 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:41.482 10:13:26 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:41.482 10:13:26 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:41.482 10:13:26 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:41.482 10:13:26 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:04:41.482 10:13:26 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:04:41.482 10:13:26 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:41.482 10:13:26 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5e:00.0 00:04:41.482 10:13:26 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:41.482 10:13:26 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:41.482 10:13:26 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:41.482 10:13:26 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:41.482 10:13:26 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:41.482 10:13:26 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:41.482 ************************************ 00:04:41.482 START TEST nvme_mount 00:04:41.482 ************************************ 00:04:41.482 10:13:26 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:41.482 10:13:26 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:41.482 10:13:26 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:41.482 10:13:26 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:41.482 10:13:26 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:41.482 10:13:26 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:41.482 10:13:26 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:41.482 10:13:26 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:41.482 10:13:26 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:41.482 10:13:26 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:41.482 10:13:26 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:41.482 10:13:26 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:41.482 10:13:26 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:41.482 10:13:26 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:41.482 10:13:26 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:41.482 10:13:26 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:41.482 10:13:26 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- 
# (( part <= part_no )) 00:04:41.482 10:13:26 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:41.482 10:13:26 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:41.482 10:13:26 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:42.457 Creating new GPT entries in memory. 00:04:42.457 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:42.457 other utilities. 00:04:42.457 10:13:27 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:42.457 10:13:27 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:42.457 10:13:27 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:42.457 10:13:27 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:42.457 10:13:27 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:43.398 Creating new GPT entries in memory. 00:04:43.398 The operation has completed successfully. 00:04:43.398 10:13:28 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:43.398 10:13:28 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:43.398 10:13:28 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 2189448 00:04:43.398 10:13:28 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:43.398 10:13:28 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:43.398 10:13:28 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:43.398 10:13:28 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:43.398 10:13:28 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:43.398 10:13:28 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:43.658 10:13:28 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:43.658 10:13:28 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:43.658 10:13:28 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:43.658 10:13:28 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:43.658 10:13:28 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:43.658 10:13:28 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:43.658 10:13:28 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:43.658 10:13:28 
setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:43.658 10:13:28 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:43.658 10:13:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.658 10:13:28 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:43.658 10:13:28 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:43.658 10:13:28 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:43.658 10:13:28 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:46.196 10:13:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:46.196 10:13:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:46.196 10:13:31 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:46.196 10:13:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.196 10:13:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:46.196 10:13:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.196 10:13:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:46.196 10:13:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.196 10:13:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:46.196 10:13:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.196 10:13:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:46.196 10:13:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.196 10:13:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:46.196 10:13:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.196 10:13:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:46.196 10:13:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.196 10:13:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:46.196 10:13:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.196 10:13:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:46.196 10:13:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.196 10:13:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:46.196 10:13:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.196 10:13:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:46.196 10:13:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.196 10:13:31 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:46.196 10:13:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.196 10:13:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:46.196 10:13:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.196 10:13:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:46.196 10:13:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.196 10:13:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:46.196 10:13:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.196 10:13:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:46.196 10:13:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.196 10:13:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:46.196 10:13:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.456 10:13:31 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:46.456 10:13:31 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:46.456 10:13:31 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:46.456 10:13:31 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:46.456 10:13:31 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:46.456 10:13:31 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:46.456 10:13:31 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:46.456 10:13:31 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:46.456 10:13:31 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:46.456 10:13:31 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:46.456 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:46.456 10:13:31 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:46.456 10:13:31 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:46.715 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:46.715 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:46.715 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:46.715 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:46.715 10:13:31 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:46.715 10:13:31 setup.sh.devices.nvme_mount -- 
setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:46.715 10:13:31 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:46.715 10:13:31 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:46.715 10:13:31 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:46.715 10:13:31 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:46.715 10:13:31 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:46.715 10:13:31 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:46.715 10:13:31 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:46.715 10:13:31 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:46.715 10:13:31 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:46.715 10:13:31 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:46.715 10:13:31 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:46.715 10:13:31 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:46.715 10:13:31 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:46.715 10:13:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.715 10:13:31 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:46.715 10:13:31 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:46.715 10:13:31 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:46.715 10:13:31 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:50.007 10:13:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:50.007 10:13:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:50.007 10:13:34 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:50.007 10:13:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.007 10:13:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:50.007 10:13:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.007 10:13:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:50.007 10:13:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.007 10:13:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 
== \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:50.007 10:13:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.007 10:13:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:50.007 10:13:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.007 10:13:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:50.007 10:13:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.007 10:13:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:50.007 10:13:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.007 10:13:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:50.007 10:13:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.007 10:13:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:50.007 10:13:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.007 10:13:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:50.007 10:13:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.007 10:13:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:50.007 10:13:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.007 10:13:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:50.007 10:13:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.007 10:13:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:50.007 10:13:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.007 10:13:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:50.007 10:13:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.007 10:13:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:50.007 10:13:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.007 10:13:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:50.007 10:13:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.007 10:13:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:50.007 10:13:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.007 10:13:34 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:50.007 10:13:34 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:50.007 10:13:34 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:50.007 10:13:34 setup.sh.devices.nvme_mount -- 
setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:50.007 10:13:34 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:50.007 10:13:34 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:50.007 10:13:34 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:5e:00.0 data@nvme0n1 '' '' 00:04:50.007 10:13:34 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:50.007 10:13:34 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:50.007 10:13:34 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:50.007 10:13:34 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:50.007 10:13:34 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:50.007 10:13:34 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:50.007 10:13:34 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:50.007 10:13:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.007 10:13:34 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:50.007 10:13:34 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:50.007 10:13:34 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:50.007 10:13:34 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:52.550 10:13:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:52.550 10:13:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:52.550 10:13:37 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:52.550 10:13:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.550 10:13:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:52.550 10:13:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.551 10:13:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:52.551 10:13:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.551 10:13:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:52.551 10:13:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.551 10:13:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:52.551 10:13:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.551 10:13:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:52.551 10:13:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.551 10:13:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 
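The repeated bracket tests above and below come from the verify helper: setup.sh config runs with PCI_ALLOWED pinned to 0000:5e:00.0, each line of its output is read as "pci _ _ status", and found flips to 1 only when the allowed controller reports the expected active mount. A reduced stand-alone sketch of that loop follows; the invocation and the expected mount string are assumptions taken from this log, not the repo's setup/devices.sh itself.

#!/usr/bin/env bash
# Reduced sketch of the verify loop seen in this test: scan setup.sh config
# output and flag the allowed controller when it shows the expected mount.
# The invocation below and the "wanted" string are assumptions based on the log.
set -euo pipefail

allowed=0000:5e:00.0
wanted="mount@nvme0n1:nvme0n1"    # signature of the whole-disk mount being verified
found=0

while read -r pci _ _ status; do
    [[ $pci == "$allowed" ]] || continue
    # status reads like: "Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev"
    if [[ $status == *"Active devices:"*"$wanted"* ]]; then
        found=1
    fi
done < <(PCI_ALLOWED=$allowed ./scripts/setup.sh config)   # assumed path to SPDK's setup.sh

if (( found == 1 )); then
    echo "allowed device reports the expected mount"
fi
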
00:04:52.551 10:13:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.551 10:13:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:52.551 10:13:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.551 10:13:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:52.551 10:13:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.551 10:13:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:52.551 10:13:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.551 10:13:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:52.551 10:13:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.551 10:13:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:52.551 10:13:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.551 10:13:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:52.551 10:13:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.551 10:13:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:52.551 10:13:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.551 10:13:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:52.551 10:13:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.551 10:13:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:52.551 10:13:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.551 10:13:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:52.551 10:13:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.551 10:13:37 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:52.551 10:13:37 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:52.551 10:13:37 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:52.551 10:13:37 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:52.551 10:13:37 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:52.551 10:13:37 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:52.551 10:13:37 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:52.551 10:13:37 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:52.551 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:52.551 00:04:52.551 real 0m11.069s 00:04:52.551 user 0m3.372s 00:04:52.551 sys 0m5.536s 00:04:52.551 10:13:37 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:52.551 10:13:37 setup.sh.devices.nvme_mount -- 
common/autotest_common.sh@10 -- # set +x 00:04:52.551 ************************************ 00:04:52.551 END TEST nvme_mount 00:04:52.551 ************************************ 00:04:52.551 10:13:37 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:52.551 10:13:37 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:52.551 10:13:37 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:52.551 10:13:37 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.551 10:13:37 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:52.551 ************************************ 00:04:52.551 START TEST dm_mount 00:04:52.551 ************************************ 00:04:52.551 10:13:37 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:52.551 10:13:37 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:52.551 10:13:37 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:52.551 10:13:37 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:52.551 10:13:37 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:52.551 10:13:37 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:52.551 10:13:37 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:52.551 10:13:37 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:52.551 10:13:37 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:52.551 10:13:37 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:52.551 10:13:37 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:52.551 10:13:37 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:52.551 10:13:37 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:52.551 10:13:37 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:52.551 10:13:37 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:52.551 10:13:37 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:52.551 10:13:37 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:52.551 10:13:37 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:52.551 10:13:37 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:52.551 10:13:37 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:52.551 10:13:37 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:52.551 10:13:37 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:53.488 Creating new GPT entries in memory. 00:04:53.488 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:53.488 other utilities. 00:04:53.488 10:13:38 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:53.488 10:13:38 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:53.488 10:13:38 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:53.488 10:13:38 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:53.488 10:13:38 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:54.867 Creating new GPT entries in memory. 00:04:54.867 The operation has completed successfully. 00:04:54.867 10:13:39 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:54.867 10:13:39 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:54.867 10:13:39 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:54.867 10:13:39 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:54.867 10:13:39 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:55.805 The operation has completed successfully. 00:04:55.805 10:13:40 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:55.805 10:13:40 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:55.805 10:13:40 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 2193639 00:04:55.805 10:13:40 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:55.805 10:13:40 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:55.805 10:13:40 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:55.805 10:13:40 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:55.805 10:13:40 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:55.805 10:13:40 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:55.805 10:13:40 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:55.805 10:13:40 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:55.805 10:13:40 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:55.805 10:13:40 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-2 00:04:55.805 10:13:40 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-2 00:04:55.805 10:13:40 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]] 00:04:55.805 10:13:40 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]] 00:04:55.805 10:13:40 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:55.805 10:13:40 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:55.805 10:13:40 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:55.805 10:13:40 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:55.805 10:13:40 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:55.805 10:13:40 setup.sh.devices.dm_mount -- 
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:55.805 10:13:40 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:5e:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:55.805 10:13:40 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:55.805 10:13:40 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:55.805 10:13:40 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:55.805 10:13:40 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:55.805 10:13:40 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:55.805 10:13:40 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:55.805 10:13:40 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:55.805 10:13:40 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:55.805 10:13:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.805 10:13:40 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:55.805 10:13:40 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:55.805 10:13:40 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:55.805 10:13:40 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:58.341 10:13:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:58.341 10:13:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:58.342 10:13:43 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:58.342 10:13:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.342 10:13:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:58.342 10:13:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.342 10:13:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:58.342 10:13:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.342 10:13:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:58.342 10:13:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.342 10:13:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:58.342 10:13:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.342 10:13:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:58.342 10:13:43 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.342 10:13:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:58.342 10:13:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.342 10:13:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:58.342 10:13:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.342 10:13:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:58.342 10:13:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.342 10:13:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:58.342 10:13:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.342 10:13:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:58.342 10:13:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.342 10:13:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:58.342 10:13:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.342 10:13:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:58.342 10:13:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.342 10:13:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:58.342 10:13:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.342 10:13:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:58.342 10:13:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.342 10:13:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:58.342 10:13:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.342 10:13:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:58.342 10:13:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.600 10:13:43 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:58.600 10:13:43 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:58.600 10:13:43 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:58.600 10:13:43 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:58.600 10:13:43 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:58.600 10:13:43 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:58.600 10:13:43 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:5e:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:04:58.600 10:13:43 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:58.600 10:13:43 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:04:58.600 10:13:43 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:58.600 10:13:43 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:58.600 10:13:43 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:58.600 10:13:43 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:58.600 10:13:43 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:58.600 10:13:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.600 10:13:43 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:58.600 10:13:43 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:58.600 10:13:43 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:58.600 10:13:43 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:01.133 10:13:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:01.133 10:13:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:05:01.133 10:13:46 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:01.133 10:13:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.133 10:13:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:01.133 10:13:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.133 10:13:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:01.133 10:13:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.133 10:13:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:01.133 10:13:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.133 10:13:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:01.133 10:13:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.133 10:13:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:01.133 10:13:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.133 10:13:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:01.133 10:13:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.133 10:13:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:01.133 10:13:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.133 10:13:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:01.133 10:13:46 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.133 10:13:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:01.133 10:13:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.133 10:13:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:01.133 10:13:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.133 10:13:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:01.133 10:13:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.133 10:13:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:01.133 10:13:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.133 10:13:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:01.133 10:13:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.133 10:13:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:01.133 10:13:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.133 10:13:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:01.133 10:13:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.133 10:13:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:01.133 10:13:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.393 10:13:46 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:01.393 10:13:46 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:01.393 10:13:46 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:01.393 10:13:46 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:01.393 10:13:46 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:01.393 10:13:46 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:01.393 10:13:46 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:01.393 10:13:46 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:01.393 10:13:46 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:01.393 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:01.393 10:13:46 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:01.393 10:13:46 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:01.393 00:05:01.393 real 0m8.917s 00:05:01.393 user 0m2.195s 00:05:01.393 sys 0m3.748s 00:05:01.393 10:13:46 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:01.393 10:13:46 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:01.393 ************************************ 00:05:01.393 END TEST dm_mount 00:05:01.393 ************************************ 00:05:01.393 10:13:46 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 
0 00:05:01.393 10:13:46 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:01.393 10:13:46 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:01.393 10:13:46 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:01.393 10:13:46 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:01.393 10:13:46 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:01.393 10:13:46 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:01.393 10:13:46 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:01.652 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:01.652 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:01.652 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:01.652 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:01.652 10:13:46 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:01.652 10:13:46 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:01.652 10:13:46 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:01.652 10:13:46 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:01.652 10:13:46 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:01.652 10:13:46 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:01.652 10:13:46 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:01.652 00:05:01.652 real 0m23.716s 00:05:01.652 user 0m6.894s 00:05:01.652 sys 0m11.565s 00:05:01.652 10:13:46 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:01.652 10:13:46 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:01.652 ************************************ 00:05:01.652 END TEST devices 00:05:01.652 ************************************ 00:05:01.911 10:13:46 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:01.911 00:05:01.911 real 1m19.628s 00:05:01.911 user 0m26.515s 00:05:01.911 sys 0m43.885s 00:05:01.911 10:13:46 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:01.911 10:13:46 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:01.911 ************************************ 00:05:01.911 END TEST setup.sh 00:05:01.911 ************************************ 00:05:01.911 10:13:46 -- common/autotest_common.sh@1142 -- # return 0 00:05:01.911 10:13:46 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:04.479 Hugepages 00:05:04.479 node hugesize free / total 00:05:04.479 node0 1048576kB 0 / 0 00:05:04.479 node0 2048kB 2048 / 2048 00:05:04.479 node1 1048576kB 0 / 0 00:05:04.479 node1 2048kB 0 / 0 00:05:04.479 00:05:04.479 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:04.479 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:05:04.479 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:05:04.479 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:05:04.479 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:05:04.479 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:05:04.737 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:05:04.737 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:05:04.737 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:05:04.737 NVMe 
0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:05:04.737 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:05:04.737 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:05:04.737 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:05:04.737 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:05:04.737 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:05:04.737 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:05:04.737 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:05:04.737 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:05:04.737 10:13:49 -- spdk/autotest.sh@130 -- # uname -s 00:05:04.737 10:13:49 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:04.737 10:13:49 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:04.737 10:13:49 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:08.029 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:08.029 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:08.029 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:08.029 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:08.029 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:08.029 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:08.029 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:08.029 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:08.029 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:08.029 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:08.029 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:08.029 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:08.029 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:08.029 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:08.029 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:08.029 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:08.597 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:05:08.597 10:13:53 -- common/autotest_common.sh@1532 -- # sleep 1 00:05:09.536 10:13:54 -- common/autotest_common.sh@1533 -- # bdfs=() 00:05:09.536 10:13:54 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:09.536 10:13:54 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:09.536 10:13:54 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:09.536 10:13:54 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:09.536 10:13:54 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:09.536 10:13:54 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:09.536 10:13:54 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:09.536 10:13:54 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:09.795 10:13:54 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:09.795 10:13:54 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:05:09.795 10:13:54 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:12.330 Waiting for block devices as requested 00:05:12.330 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:05:12.589 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:12.589 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:12.589 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:12.849 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:12.849 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:12.849 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:13.108 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:13.108 0000:00:04.0 (8086 2021): 
vfio-pci -> ioatdma 00:05:13.108 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:13.108 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:13.367 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:13.367 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:13.367 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:13.625 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:13.625 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:13.626 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:13.884 10:13:58 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:13.884 10:13:58 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:05:13.885 10:13:58 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:05:13.885 10:13:58 -- common/autotest_common.sh@1502 -- # grep 0000:5e:00.0/nvme/nvme 00:05:13.885 10:13:58 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:05:13.885 10:13:58 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:05:13.885 10:13:58 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:05:13.885 10:13:58 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:13.885 10:13:58 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:13.885 10:13:58 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:13.885 10:13:58 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:13.885 10:13:58 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:13.885 10:13:58 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:13.885 10:13:58 -- common/autotest_common.sh@1545 -- # oacs=' 0xe' 00:05:13.885 10:13:58 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:13.885 10:13:58 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:13.885 10:13:58 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:13.885 10:13:58 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:13.885 10:13:58 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:13.885 10:13:58 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:13.885 10:13:58 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:13.885 10:13:58 -- common/autotest_common.sh@1557 -- # continue 00:05:13.885 10:13:58 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:13.885 10:13:58 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:13.885 10:13:58 -- common/autotest_common.sh@10 -- # set +x 00:05:13.885 10:13:58 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:13.885 10:13:58 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:13.885 10:13:58 -- common/autotest_common.sh@10 -- # set +x 00:05:13.885 10:13:58 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:17.177 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:17.177 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:17.177 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:17.177 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:17.177 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:17.177 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:17.177 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:17.177 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:17.177 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:17.177 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 
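The pre-cleanup pass above decides whether the controller needs any namespace work by parsing nvme id-ctrl output: the OACS word is extracted, its namespace-management bit (0x8) is tested, and unvmcap must read back 0 before the script simply continues. A stand-alone sketch of that check, assuming nvme-cli is installed and the controller node is /dev/nvme0 as in the log:

#!/usr/bin/env bash
# Sketch of the id-ctrl parsing shown above: read OACS, test the
# namespace-management bit (0x8), and confirm unallocated capacity is zero.
# Assumes nvme-cli is installed and the controller node is /dev/nvme0.
set -euo pipefail

ctrlr=/dev/nvme0

# OACS is printed as e.g. "oacs : 0xe"; keep the value after the colon.
oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)
oacs_ns_manage=$(( oacs & 0x8 ))          # bit 3 = namespace management supported

if (( oacs_ns_manage != 0 )); then
    echo "$ctrlr supports namespace management"
fi

# Unallocated NVM capacity; the run above requires this to be 0 to continue.
unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
if (( unvmcap == 0 )); then
    echo "no unallocated capacity on $ctrlr"
fi
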
00:05:17.177 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:17.177 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:17.177 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:17.177 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:17.177 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:17.177 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:17.437 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:05:17.696 10:14:02 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:17.696 10:14:02 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:17.696 10:14:02 -- common/autotest_common.sh@10 -- # set +x 00:05:17.696 10:14:02 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:17.696 10:14:02 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:17.696 10:14:02 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:17.696 10:14:02 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:17.696 10:14:02 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:17.696 10:14:02 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:17.696 10:14:02 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:17.696 10:14:02 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:17.696 10:14:02 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:17.696 10:14:02 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:17.696 10:14:02 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:17.696 10:14:02 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:17.696 10:14:02 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:05:17.696 10:14:02 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:17.696 10:14:02 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:05:17.696 10:14:02 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:05:17.696 10:14:02 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:17.696 10:14:02 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:05:17.696 10:14:02 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:5e:00.0 00:05:17.696 10:14:02 -- common/autotest_common.sh@1592 -- # [[ -z 0000:5e:00.0 ]] 00:05:17.696 10:14:02 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=2202432 00:05:17.696 10:14:02 -- common/autotest_common.sh@1598 -- # waitforlisten 2202432 00:05:17.696 10:14:02 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:17.696 10:14:02 -- common/autotest_common.sh@829 -- # '[' -z 2202432 ']' 00:05:17.696 10:14:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.696 10:14:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:17.696 10:14:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.696 10:14:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:17.696 10:14:02 -- common/autotest_common.sh@10 -- # set +x 00:05:17.696 [2024-07-14 10:14:02.669104] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
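At this point spdk_tgt has been launched and autotest is waiting for its RPC socket; the calls that follow attach the controller at 0000:5e:00.0 as bdev controller nvme0 and then attempt bdev_nvme_opal_revert, which this drive rejects ("nvme0 not support opal"). A condensed sketch of that RPC sequence, with the workspace prefix replaced by an assumed local SPDK checkout and a simple polling loop standing in for waitforlisten:

#!/usr/bin/env bash
# Condensed sketch of the opal-revert RPC flow around this point in the log.
# The SPDK path, the readiness poll, and the error handling are illustrative;
# the RPC names and arguments are the ones used in the run above/below.
set -euo pipefail

SPDK=./spdk                                   # assumed checkout location

"$SPDK/build/bin/spdk_tgt" &                  # start the SPDK target
tgt_pid=$!

# Poll the default RPC socket until it answers (waitforlisten does this in autotest).
until "$SPDK/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done

# Attach the NVMe controller at 0000:5e:00.0 as controller "nvme0".
"$SPDK/scripts/rpc.py" bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0

# Try to revert Opal with password "test"; a drive without Opal support
# returns the "Invalid parameters" JSON-RPC error seen below.
"$SPDK/scripts/rpc.py" bdev_nvme_opal_revert -b nvme0 -p test || true

kill "$tgt_pid"
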
00:05:17.696 [2024-07-14 10:14:02.669149] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2202432 ] 00:05:17.956 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.956 [2024-07-14 10:14:02.738967] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.956 [2024-07-14 10:14:02.780610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.216 10:14:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:18.216 10:14:02 -- common/autotest_common.sh@862 -- # return 0 00:05:18.216 10:14:02 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:05:18.216 10:14:02 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:05:18.216 10:14:02 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:05:21.505 nvme0n1 00:05:21.505 10:14:05 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:21.505 [2024-07-14 10:14:06.108360] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:05:21.505 request: 00:05:21.505 { 00:05:21.505 "nvme_ctrlr_name": "nvme0", 00:05:21.505 "password": "test", 00:05:21.505 "method": "bdev_nvme_opal_revert", 00:05:21.505 "req_id": 1 00:05:21.505 } 00:05:21.505 Got JSON-RPC error response 00:05:21.505 response: 00:05:21.505 { 00:05:21.505 "code": -32602, 00:05:21.505 "message": "Invalid parameters" 00:05:21.505 } 00:05:21.505 10:14:06 -- common/autotest_common.sh@1604 -- # true 00:05:21.505 10:14:06 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:05:21.505 10:14:06 -- common/autotest_common.sh@1608 -- # killprocess 2202432 00:05:21.505 10:14:06 -- common/autotest_common.sh@948 -- # '[' -z 2202432 ']' 00:05:21.505 10:14:06 -- common/autotest_common.sh@952 -- # kill -0 2202432 00:05:21.505 10:14:06 -- common/autotest_common.sh@953 -- # uname 00:05:21.505 10:14:06 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:21.505 10:14:06 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2202432 00:05:21.505 10:14:06 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:21.505 10:14:06 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:21.505 10:14:06 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2202432' 00:05:21.505 killing process with pid 2202432 00:05:21.505 10:14:06 -- common/autotest_common.sh@967 -- # kill 2202432 00:05:21.505 10:14:06 -- common/autotest_common.sh@972 -- # wait 2202432 00:05:22.884 10:14:07 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:22.884 10:14:07 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:22.884 10:14:07 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:22.884 10:14:07 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:22.884 10:14:07 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:22.884 10:14:07 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:22.884 10:14:07 -- common/autotest_common.sh@10 -- # set +x 00:05:22.884 10:14:07 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:22.884 10:14:07 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:22.884 10:14:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 
']' 00:05:22.884 10:14:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.884 10:14:07 -- common/autotest_common.sh@10 -- # set +x 00:05:22.884 ************************************ 00:05:22.884 START TEST env 00:05:22.884 ************************************ 00:05:22.884 10:14:07 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:23.144 * Looking for test storage... 00:05:23.144 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:23.144 10:14:07 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:23.144 10:14:07 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:23.144 10:14:07 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.144 10:14:07 env -- common/autotest_common.sh@10 -- # set +x 00:05:23.144 ************************************ 00:05:23.144 START TEST env_memory 00:05:23.144 ************************************ 00:05:23.144 10:14:07 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:23.144 00:05:23.144 00:05:23.144 CUnit - A unit testing framework for C - Version 2.1-3 00:05:23.144 http://cunit.sourceforge.net/ 00:05:23.144 00:05:23.144 00:05:23.144 Suite: memory 00:05:23.144 Test: alloc and free memory map ...[2024-07-14 10:14:07.960235] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:23.144 passed 00:05:23.144 Test: mem map translation ...[2024-07-14 10:14:07.979666] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:23.144 [2024-07-14 10:14:07.979681] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:23.144 [2024-07-14 10:14:07.979718] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:23.144 [2024-07-14 10:14:07.979725] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:23.144 passed 00:05:23.144 Test: mem map registration ...[2024-07-14 10:14:08.019575] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:23.144 [2024-07-14 10:14:08.019589] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:23.144 passed 00:05:23.144 Test: mem map adjacent registrations ...passed 00:05:23.144 00:05:23.144 Run Summary: Type Total Ran Passed Failed Inactive 00:05:23.144 suites 1 1 n/a 0 0 00:05:23.144 tests 4 4 4 0 0 00:05:23.144 asserts 152 152 152 0 n/a 00:05:23.144 00:05:23.144 Elapsed time = 0.143 seconds 00:05:23.144 00:05:23.144 real 0m0.155s 00:05:23.144 user 0m0.146s 00:05:23.144 sys 0m0.008s 00:05:23.144 10:14:08 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:23.144 10:14:08 env.env_memory -- common/autotest_common.sh@10 -- # 
set +x 00:05:23.144 ************************************ 00:05:23.144 END TEST env_memory 00:05:23.144 ************************************ 00:05:23.144 10:14:08 env -- common/autotest_common.sh@1142 -- # return 0 00:05:23.144 10:14:08 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:23.144 10:14:08 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:23.144 10:14:08 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.144 10:14:08 env -- common/autotest_common.sh@10 -- # set +x 00:05:23.404 ************************************ 00:05:23.404 START TEST env_vtophys 00:05:23.404 ************************************ 00:05:23.404 10:14:08 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:23.404 EAL: lib.eal log level changed from notice to debug 00:05:23.404 EAL: Detected lcore 0 as core 0 on socket 0 00:05:23.404 EAL: Detected lcore 1 as core 1 on socket 0 00:05:23.404 EAL: Detected lcore 2 as core 2 on socket 0 00:05:23.404 EAL: Detected lcore 3 as core 3 on socket 0 00:05:23.404 EAL: Detected lcore 4 as core 4 on socket 0 00:05:23.404 EAL: Detected lcore 5 as core 5 on socket 0 00:05:23.404 EAL: Detected lcore 6 as core 6 on socket 0 00:05:23.404 EAL: Detected lcore 7 as core 8 on socket 0 00:05:23.404 EAL: Detected lcore 8 as core 9 on socket 0 00:05:23.404 EAL: Detected lcore 9 as core 10 on socket 0 00:05:23.404 EAL: Detected lcore 10 as core 11 on socket 0 00:05:23.404 EAL: Detected lcore 11 as core 12 on socket 0 00:05:23.404 EAL: Detected lcore 12 as core 13 on socket 0 00:05:23.404 EAL: Detected lcore 13 as core 16 on socket 0 00:05:23.404 EAL: Detected lcore 14 as core 17 on socket 0 00:05:23.404 EAL: Detected lcore 15 as core 18 on socket 0 00:05:23.404 EAL: Detected lcore 16 as core 19 on socket 0 00:05:23.404 EAL: Detected lcore 17 as core 20 on socket 0 00:05:23.404 EAL: Detected lcore 18 as core 21 on socket 0 00:05:23.404 EAL: Detected lcore 19 as core 25 on socket 0 00:05:23.404 EAL: Detected lcore 20 as core 26 on socket 0 00:05:23.404 EAL: Detected lcore 21 as core 27 on socket 0 00:05:23.404 EAL: Detected lcore 22 as core 28 on socket 0 00:05:23.404 EAL: Detected lcore 23 as core 29 on socket 0 00:05:23.404 EAL: Detected lcore 24 as core 0 on socket 1 00:05:23.404 EAL: Detected lcore 25 as core 1 on socket 1 00:05:23.405 EAL: Detected lcore 26 as core 2 on socket 1 00:05:23.405 EAL: Detected lcore 27 as core 3 on socket 1 00:05:23.405 EAL: Detected lcore 28 as core 4 on socket 1 00:05:23.405 EAL: Detected lcore 29 as core 5 on socket 1 00:05:23.405 EAL: Detected lcore 30 as core 6 on socket 1 00:05:23.405 EAL: Detected lcore 31 as core 9 on socket 1 00:05:23.405 EAL: Detected lcore 32 as core 10 on socket 1 00:05:23.405 EAL: Detected lcore 33 as core 11 on socket 1 00:05:23.405 EAL: Detected lcore 34 as core 12 on socket 1 00:05:23.405 EAL: Detected lcore 35 as core 13 on socket 1 00:05:23.405 EAL: Detected lcore 36 as core 16 on socket 1 00:05:23.405 EAL: Detected lcore 37 as core 17 on socket 1 00:05:23.405 EAL: Detected lcore 38 as core 18 on socket 1 00:05:23.405 EAL: Detected lcore 39 as core 19 on socket 1 00:05:23.405 EAL: Detected lcore 40 as core 20 on socket 1 00:05:23.405 EAL: Detected lcore 41 as core 21 on socket 1 00:05:23.405 EAL: Detected lcore 42 as core 24 on socket 1 00:05:23.405 EAL: Detected lcore 43 as core 25 on socket 1 00:05:23.405 EAL: Detected lcore 44 as core 
26 on socket 1 00:05:23.405 EAL: Detected lcore 45 as core 27 on socket 1 00:05:23.405 EAL: Detected lcore 46 as core 28 on socket 1 00:05:23.405 EAL: Detected lcore 47 as core 29 on socket 1 00:05:23.405 EAL: Detected lcore 48 as core 0 on socket 0 00:05:23.405 EAL: Detected lcore 49 as core 1 on socket 0 00:05:23.405 EAL: Detected lcore 50 as core 2 on socket 0 00:05:23.405 EAL: Detected lcore 51 as core 3 on socket 0 00:05:23.405 EAL: Detected lcore 52 as core 4 on socket 0 00:05:23.405 EAL: Detected lcore 53 as core 5 on socket 0 00:05:23.405 EAL: Detected lcore 54 as core 6 on socket 0 00:05:23.405 EAL: Detected lcore 55 as core 8 on socket 0 00:05:23.405 EAL: Detected lcore 56 as core 9 on socket 0 00:05:23.405 EAL: Detected lcore 57 as core 10 on socket 0 00:05:23.405 EAL: Detected lcore 58 as core 11 on socket 0 00:05:23.405 EAL: Detected lcore 59 as core 12 on socket 0 00:05:23.405 EAL: Detected lcore 60 as core 13 on socket 0 00:05:23.405 EAL: Detected lcore 61 as core 16 on socket 0 00:05:23.405 EAL: Detected lcore 62 as core 17 on socket 0 00:05:23.405 EAL: Detected lcore 63 as core 18 on socket 0 00:05:23.405 EAL: Detected lcore 64 as core 19 on socket 0 00:05:23.405 EAL: Detected lcore 65 as core 20 on socket 0 00:05:23.405 EAL: Detected lcore 66 as core 21 on socket 0 00:05:23.405 EAL: Detected lcore 67 as core 25 on socket 0 00:05:23.405 EAL: Detected lcore 68 as core 26 on socket 0 00:05:23.405 EAL: Detected lcore 69 as core 27 on socket 0 00:05:23.405 EAL: Detected lcore 70 as core 28 on socket 0 00:05:23.405 EAL: Detected lcore 71 as core 29 on socket 0 00:05:23.405 EAL: Detected lcore 72 as core 0 on socket 1 00:05:23.405 EAL: Detected lcore 73 as core 1 on socket 1 00:05:23.405 EAL: Detected lcore 74 as core 2 on socket 1 00:05:23.405 EAL: Detected lcore 75 as core 3 on socket 1 00:05:23.405 EAL: Detected lcore 76 as core 4 on socket 1 00:05:23.405 EAL: Detected lcore 77 as core 5 on socket 1 00:05:23.405 EAL: Detected lcore 78 as core 6 on socket 1 00:05:23.405 EAL: Detected lcore 79 as core 9 on socket 1 00:05:23.405 EAL: Detected lcore 80 as core 10 on socket 1 00:05:23.405 EAL: Detected lcore 81 as core 11 on socket 1 00:05:23.405 EAL: Detected lcore 82 as core 12 on socket 1 00:05:23.405 EAL: Detected lcore 83 as core 13 on socket 1 00:05:23.405 EAL: Detected lcore 84 as core 16 on socket 1 00:05:23.405 EAL: Detected lcore 85 as core 17 on socket 1 00:05:23.405 EAL: Detected lcore 86 as core 18 on socket 1 00:05:23.405 EAL: Detected lcore 87 as core 19 on socket 1 00:05:23.405 EAL: Detected lcore 88 as core 20 on socket 1 00:05:23.405 EAL: Detected lcore 89 as core 21 on socket 1 00:05:23.405 EAL: Detected lcore 90 as core 24 on socket 1 00:05:23.405 EAL: Detected lcore 91 as core 25 on socket 1 00:05:23.405 EAL: Detected lcore 92 as core 26 on socket 1 00:05:23.405 EAL: Detected lcore 93 as core 27 on socket 1 00:05:23.405 EAL: Detected lcore 94 as core 28 on socket 1 00:05:23.405 EAL: Detected lcore 95 as core 29 on socket 1 00:05:23.405 EAL: Maximum logical cores by configuration: 128 00:05:23.405 EAL: Detected CPU lcores: 96 00:05:23.405 EAL: Detected NUMA nodes: 2 00:05:23.405 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:23.405 EAL: Detected shared linkage of DPDK 00:05:23.405 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:23.405 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:23.405 EAL: 
Registered [vdev] bus. 00:05:23.405 EAL: bus.vdev log level changed from disabled to notice 00:05:23.405 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:23.405 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:23.405 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:23.405 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:23.405 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:23.405 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:23.405 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:23.405 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:23.405 EAL: No shared files mode enabled, IPC will be disabled 00:05:23.405 EAL: No shared files mode enabled, IPC is disabled 00:05:23.405 EAL: Bus pci wants IOVA as 'DC' 00:05:23.405 EAL: Bus vdev wants IOVA as 'DC' 00:05:23.405 EAL: Buses did not request a specific IOVA mode. 00:05:23.405 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:23.405 EAL: Selected IOVA mode 'VA' 00:05:23.405 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.405 EAL: Probing VFIO support... 00:05:23.405 EAL: IOMMU type 1 (Type 1) is supported 00:05:23.405 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:23.405 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:23.405 EAL: VFIO support initialized 00:05:23.405 EAL: Ask a virtual area of 0x2e000 bytes 00:05:23.405 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:23.405 EAL: Setting up physically contiguous memory... 
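The "No free 2048 kB hugepages reported on node 1" notice above indicates that EAL found no free 2 MB hugepages on NUMA node 1 at scan time; per-node availability can be checked directly with a generic sysfs query (not something the test itself runs):

  cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages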
00:05:23.405 EAL: Setting maximum number of open files to 524288 00:05:23.405 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:23.405 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:23.405 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:23.405 EAL: Ask a virtual area of 0x61000 bytes 00:05:23.405 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:23.405 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:23.405 EAL: Ask a virtual area of 0x400000000 bytes 00:05:23.405 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:23.405 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:23.405 EAL: Ask a virtual area of 0x61000 bytes 00:05:23.405 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:23.405 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:23.405 EAL: Ask a virtual area of 0x400000000 bytes 00:05:23.405 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:23.405 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:23.405 EAL: Ask a virtual area of 0x61000 bytes 00:05:23.405 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:23.405 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:23.405 EAL: Ask a virtual area of 0x400000000 bytes 00:05:23.405 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:23.405 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:23.405 EAL: Ask a virtual area of 0x61000 bytes 00:05:23.405 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:23.405 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:23.405 EAL: Ask a virtual area of 0x400000000 bytes 00:05:23.405 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:23.405 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:23.405 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:23.405 EAL: Ask a virtual area of 0x61000 bytes 00:05:23.405 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:23.405 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:23.405 EAL: Ask a virtual area of 0x400000000 bytes 00:05:23.405 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:23.405 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:23.405 EAL: Ask a virtual area of 0x61000 bytes 00:05:23.405 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:23.405 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:23.405 EAL: Ask a virtual area of 0x400000000 bytes 00:05:23.405 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:23.405 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:23.405 EAL: Ask a virtual area of 0x61000 bytes 00:05:23.405 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:23.405 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:23.405 EAL: Ask a virtual area of 0x400000000 bytes 00:05:23.405 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:23.405 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:23.405 EAL: Ask a virtual area of 0x61000 bytes 00:05:23.405 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:23.405 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:23.405 EAL: Ask a virtual area of 0x400000000 bytes 00:05:23.405 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:23.405 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:23.405 EAL: Hugepages will be freed exactly as allocated. 00:05:23.405 EAL: No shared files mode enabled, IPC is disabled 00:05:23.405 EAL: No shared files mode enabled, IPC is disabled 00:05:23.405 EAL: TSC frequency is ~2300000 KHz 00:05:23.405 EAL: Main lcore 0 is ready (tid=7f750fab2a00;cpuset=[0]) 00:05:23.405 EAL: Trying to obtain current memory policy. 00:05:23.405 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.405 EAL: Restoring previous memory policy: 0 00:05:23.405 EAL: request: mp_malloc_sync 00:05:23.405 EAL: No shared files mode enabled, IPC is disabled 00:05:23.405 EAL: Heap on socket 0 was expanded by 2MB 00:05:23.405 EAL: PCI device 0000:3d:00.0 on NUMA socket 0 00:05:23.405 EAL: probe driver: 8086:37d2 net_i40e 00:05:23.405 EAL: Not managed by a supported kernel driver, skipped 00:05:23.405 EAL: PCI device 0000:3d:00.1 on NUMA socket 0 00:05:23.405 EAL: probe driver: 8086:37d2 net_i40e 00:05:23.405 EAL: Not managed by a supported kernel driver, skipped 00:05:23.405 EAL: No shared files mode enabled, IPC is disabled 00:05:23.405 EAL: No shared files mode enabled, IPC is disabled 00:05:23.405 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:23.405 EAL: Mem event callback 'spdk:(nil)' registered 00:05:23.405 00:05:23.405 00:05:23.405 CUnit - A unit testing framework for C - Version 2.1-3 00:05:23.405 http://cunit.sourceforge.net/ 00:05:23.405 00:05:23.405 00:05:23.405 Suite: components_suite 00:05:23.405 Test: vtophys_malloc_test ...passed 00:05:23.405 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:23.405 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.405 EAL: Restoring previous memory policy: 4 00:05:23.405 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.405 EAL: request: mp_malloc_sync 00:05:23.406 EAL: No shared files mode enabled, IPC is disabled 00:05:23.406 EAL: Heap on socket 0 was expanded by 4MB 00:05:23.406 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.406 EAL: request: mp_malloc_sync 00:05:23.406 EAL: No shared files mode enabled, IPC is disabled 00:05:23.406 EAL: Heap on socket 0 was shrunk by 4MB 00:05:23.406 EAL: Trying to obtain current memory policy. 00:05:23.406 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.406 EAL: Restoring previous memory policy: 4 00:05:23.406 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.406 EAL: request: mp_malloc_sync 00:05:23.406 EAL: No shared files mode enabled, IPC is disabled 00:05:23.406 EAL: Heap on socket 0 was expanded by 6MB 00:05:23.406 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.406 EAL: request: mp_malloc_sync 00:05:23.406 EAL: No shared files mode enabled, IPC is disabled 00:05:23.406 EAL: Heap on socket 0 was shrunk by 6MB 00:05:23.406 EAL: Trying to obtain current memory policy. 00:05:23.406 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.406 EAL: Restoring previous memory policy: 4 00:05:23.406 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.406 EAL: request: mp_malloc_sync 00:05:23.406 EAL: No shared files mode enabled, IPC is disabled 00:05:23.406 EAL: Heap on socket 0 was expanded by 10MB 00:05:23.406 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.406 EAL: request: mp_malloc_sync 00:05:23.406 EAL: No shared files mode enabled, IPC is disabled 00:05:23.406 EAL: Heap on socket 0 was shrunk by 10MB 00:05:23.406 EAL: Trying to obtain current memory policy. 
00:05:23.406 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.406 EAL: Restoring previous memory policy: 4 00:05:23.406 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.406 EAL: request: mp_malloc_sync 00:05:23.406 EAL: No shared files mode enabled, IPC is disabled 00:05:23.406 EAL: Heap on socket 0 was expanded by 18MB 00:05:23.406 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.406 EAL: request: mp_malloc_sync 00:05:23.406 EAL: No shared files mode enabled, IPC is disabled 00:05:23.406 EAL: Heap on socket 0 was shrunk by 18MB 00:05:23.406 EAL: Trying to obtain current memory policy. 00:05:23.406 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.406 EAL: Restoring previous memory policy: 4 00:05:23.406 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.406 EAL: request: mp_malloc_sync 00:05:23.406 EAL: No shared files mode enabled, IPC is disabled 00:05:23.406 EAL: Heap on socket 0 was expanded by 34MB 00:05:23.406 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.406 EAL: request: mp_malloc_sync 00:05:23.406 EAL: No shared files mode enabled, IPC is disabled 00:05:23.406 EAL: Heap on socket 0 was shrunk by 34MB 00:05:23.406 EAL: Trying to obtain current memory policy. 00:05:23.406 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.406 EAL: Restoring previous memory policy: 4 00:05:23.406 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.406 EAL: request: mp_malloc_sync 00:05:23.406 EAL: No shared files mode enabled, IPC is disabled 00:05:23.406 EAL: Heap on socket 0 was expanded by 66MB 00:05:23.406 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.406 EAL: request: mp_malloc_sync 00:05:23.406 EAL: No shared files mode enabled, IPC is disabled 00:05:23.406 EAL: Heap on socket 0 was shrunk by 66MB 00:05:23.406 EAL: Trying to obtain current memory policy. 00:05:23.406 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.406 EAL: Restoring previous memory policy: 4 00:05:23.406 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.406 EAL: request: mp_malloc_sync 00:05:23.406 EAL: No shared files mode enabled, IPC is disabled 00:05:23.406 EAL: Heap on socket 0 was expanded by 130MB 00:05:23.406 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.406 EAL: request: mp_malloc_sync 00:05:23.406 EAL: No shared files mode enabled, IPC is disabled 00:05:23.406 EAL: Heap on socket 0 was shrunk by 130MB 00:05:23.406 EAL: Trying to obtain current memory policy. 00:05:23.406 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.406 EAL: Restoring previous memory policy: 4 00:05:23.406 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.406 EAL: request: mp_malloc_sync 00:05:23.406 EAL: No shared files mode enabled, IPC is disabled 00:05:23.406 EAL: Heap on socket 0 was expanded by 258MB 00:05:23.666 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.666 EAL: request: mp_malloc_sync 00:05:23.666 EAL: No shared files mode enabled, IPC is disabled 00:05:23.666 EAL: Heap on socket 0 was shrunk by 258MB 00:05:23.666 EAL: Trying to obtain current memory policy. 
00:05:23.666 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.666 EAL: Restoring previous memory policy: 4 00:05:23.666 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.666 EAL: request: mp_malloc_sync 00:05:23.666 EAL: No shared files mode enabled, IPC is disabled 00:05:23.666 EAL: Heap on socket 0 was expanded by 514MB 00:05:23.666 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.964 EAL: request: mp_malloc_sync 00:05:23.964 EAL: No shared files mode enabled, IPC is disabled 00:05:23.964 EAL: Heap on socket 0 was shrunk by 514MB 00:05:23.964 EAL: Trying to obtain current memory policy. 00:05:23.964 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.964 EAL: Restoring previous memory policy: 4 00:05:23.964 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.964 EAL: request: mp_malloc_sync 00:05:23.964 EAL: No shared files mode enabled, IPC is disabled 00:05:23.964 EAL: Heap on socket 0 was expanded by 1026MB 00:05:24.223 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.482 EAL: request: mp_malloc_sync 00:05:24.482 EAL: No shared files mode enabled, IPC is disabled 00:05:24.482 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:24.482 passed 00:05:24.482 00:05:24.482 Run Summary: Type Total Ran Passed Failed Inactive 00:05:24.482 suites 1 1 n/a 0 0 00:05:24.482 tests 2 2 2 0 0 00:05:24.482 asserts 497 497 497 0 n/a 00:05:24.482 00:05:24.482 Elapsed time = 0.973 seconds 00:05:24.482 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.482 EAL: request: mp_malloc_sync 00:05:24.482 EAL: No shared files mode enabled, IPC is disabled 00:05:24.482 EAL: Heap on socket 0 was shrunk by 2MB 00:05:24.482 EAL: No shared files mode enabled, IPC is disabled 00:05:24.482 EAL: No shared files mode enabled, IPC is disabled 00:05:24.482 EAL: No shared files mode enabled, IPC is disabled 00:05:24.482 00:05:24.482 real 0m1.092s 00:05:24.482 user 0m0.638s 00:05:24.482 sys 0m0.427s 00:05:24.482 10:14:09 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.482 10:14:09 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:24.482 ************************************ 00:05:24.482 END TEST env_vtophys 00:05:24.482 ************************************ 00:05:24.482 10:14:09 env -- common/autotest_common.sh@1142 -- # return 0 00:05:24.482 10:14:09 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:24.482 10:14:09 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:24.482 10:14:09 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.482 10:14:09 env -- common/autotest_common.sh@10 -- # set +x 00:05:24.482 ************************************ 00:05:24.482 START TEST env_pci 00:05:24.482 ************************************ 00:05:24.482 10:14:09 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:24.482 00:05:24.482 00:05:24.482 CUnit - A unit testing framework for C - Version 2.1-3 00:05:24.482 http://cunit.sourceforge.net/ 00:05:24.482 00:05:24.482 00:05:24.482 Suite: pci 00:05:24.482 Test: pci_hook ...[2024-07-14 10:14:09.313823] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2203535 has claimed it 00:05:24.482 EAL: Cannot find device (10000:00:01.0) 00:05:24.482 EAL: Failed to attach device on primary process 00:05:24.482 passed 00:05:24.482 
00:05:24.482 Run Summary: Type Total Ran Passed Failed Inactive 00:05:24.482 suites 1 1 n/a 0 0 00:05:24.482 tests 1 1 1 0 0 00:05:24.482 asserts 25 25 25 0 n/a 00:05:24.482 00:05:24.482 Elapsed time = 0.026 seconds 00:05:24.482 00:05:24.482 real 0m0.045s 00:05:24.482 user 0m0.012s 00:05:24.482 sys 0m0.032s 00:05:24.482 10:14:09 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.482 10:14:09 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:24.482 ************************************ 00:05:24.482 END TEST env_pci 00:05:24.482 ************************************ 00:05:24.482 10:14:09 env -- common/autotest_common.sh@1142 -- # return 0 00:05:24.482 10:14:09 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:24.482 10:14:09 env -- env/env.sh@15 -- # uname 00:05:24.482 10:14:09 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:24.482 10:14:09 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:24.482 10:14:09 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:24.482 10:14:09 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:05:24.482 10:14:09 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.482 10:14:09 env -- common/autotest_common.sh@10 -- # set +x 00:05:24.482 ************************************ 00:05:24.482 START TEST env_dpdk_post_init 00:05:24.482 ************************************ 00:05:24.482 10:14:09 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:24.482 EAL: Detected CPU lcores: 96 00:05:24.482 EAL: Detected NUMA nodes: 2 00:05:24.482 EAL: Detected shared linkage of DPDK 00:05:24.482 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:24.482 EAL: Selected IOVA mode 'VA' 00:05:24.482 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.741 EAL: VFIO support initialized 00:05:24.741 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:24.741 EAL: Using IOMMU type 1 (Type 1) 00:05:24.741 EAL: Ignore mapping IO port bar(1) 00:05:24.741 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:05:24.741 EAL: Ignore mapping IO port bar(1) 00:05:24.741 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:05:24.741 EAL: Ignore mapping IO port bar(1) 00:05:24.741 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:05:24.741 EAL: Ignore mapping IO port bar(1) 00:05:24.741 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:05:24.741 EAL: Ignore mapping IO port bar(1) 00:05:24.741 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:05:24.741 EAL: Ignore mapping IO port bar(1) 00:05:24.741 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:05:24.741 EAL: Ignore mapping IO port bar(1) 00:05:24.741 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:05:24.741 EAL: Ignore mapping IO port bar(1) 00:05:24.741 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:05:25.678 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:05:25.678 EAL: Ignore mapping IO port bar(1) 00:05:25.678 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 
00:05:25.678 EAL: Ignore mapping IO port bar(1) 00:05:25.678 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:05:25.678 EAL: Ignore mapping IO port bar(1) 00:05:25.678 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:05:25.678 EAL: Ignore mapping IO port bar(1) 00:05:25.678 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:05:25.678 EAL: Ignore mapping IO port bar(1) 00:05:25.678 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:05:25.678 EAL: Ignore mapping IO port bar(1) 00:05:25.678 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:05:25.678 EAL: Ignore mapping IO port bar(1) 00:05:25.678 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:05:25.678 EAL: Ignore mapping IO port bar(1) 00:05:25.678 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:05:28.967 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:05:28.967 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:05:28.967 Starting DPDK initialization... 00:05:28.967 Starting SPDK post initialization... 00:05:28.967 SPDK NVMe probe 00:05:28.967 Attaching to 0000:5e:00.0 00:05:28.967 Attached to 0000:5e:00.0 00:05:28.967 Cleaning up... 00:05:28.967 00:05:28.967 real 0m4.320s 00:05:28.967 user 0m3.265s 00:05:28.967 sys 0m0.130s 00:05:28.967 10:14:13 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:28.967 10:14:13 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:28.967 ************************************ 00:05:28.967 END TEST env_dpdk_post_init 00:05:28.967 ************************************ 00:05:28.967 10:14:13 env -- common/autotest_common.sh@1142 -- # return 0 00:05:28.967 10:14:13 env -- env/env.sh@26 -- # uname 00:05:28.967 10:14:13 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:28.967 10:14:13 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:28.967 10:14:13 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:28.967 10:14:13 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.967 10:14:13 env -- common/autotest_common.sh@10 -- # set +x 00:05:28.967 ************************************ 00:05:28.967 START TEST env_mem_callbacks 00:05:28.967 ************************************ 00:05:28.967 10:14:13 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:28.967 EAL: Detected CPU lcores: 96 00:05:28.967 EAL: Detected NUMA nodes: 2 00:05:28.967 EAL: Detected shared linkage of DPDK 00:05:28.967 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:28.967 EAL: Selected IOVA mode 'VA' 00:05:28.967 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.967 EAL: VFIO support initialized 00:05:28.967 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:28.967 00:05:28.967 00:05:28.967 CUnit - A unit testing framework for C - Version 2.1-3 00:05:28.967 http://cunit.sourceforge.net/ 00:05:28.967 00:05:28.967 00:05:28.967 Suite: memory 00:05:28.967 Test: test ... 
00:05:28.967 register 0x200000200000 2097152 00:05:28.967 malloc 3145728 00:05:28.967 register 0x200000400000 4194304 00:05:28.967 buf 0x200000500000 len 3145728 PASSED 00:05:28.967 malloc 64 00:05:28.967 buf 0x2000004fff40 len 64 PASSED 00:05:28.967 malloc 4194304 00:05:28.967 register 0x200000800000 6291456 00:05:28.967 buf 0x200000a00000 len 4194304 PASSED 00:05:28.967 free 0x200000500000 3145728 00:05:28.967 free 0x2000004fff40 64 00:05:28.967 unregister 0x200000400000 4194304 PASSED 00:05:28.967 free 0x200000a00000 4194304 00:05:28.967 unregister 0x200000800000 6291456 PASSED 00:05:28.967 malloc 8388608 00:05:28.967 register 0x200000400000 10485760 00:05:28.967 buf 0x200000600000 len 8388608 PASSED 00:05:28.967 free 0x200000600000 8388608 00:05:28.967 unregister 0x200000400000 10485760 PASSED 00:05:28.967 passed 00:05:28.967 00:05:28.967 Run Summary: Type Total Ran Passed Failed Inactive 00:05:28.967 suites 1 1 n/a 0 0 00:05:28.967 tests 1 1 1 0 0 00:05:28.967 asserts 15 15 15 0 n/a 00:05:28.967 00:05:28.967 Elapsed time = 0.008 seconds 00:05:28.967 00:05:28.967 real 0m0.056s 00:05:28.967 user 0m0.019s 00:05:28.967 sys 0m0.036s 00:05:28.967 10:14:13 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:28.967 10:14:13 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:28.967 ************************************ 00:05:28.967 END TEST env_mem_callbacks 00:05:28.967 ************************************ 00:05:28.967 10:14:13 env -- common/autotest_common.sh@1142 -- # return 0 00:05:28.967 00:05:28.967 real 0m6.101s 00:05:28.967 user 0m4.246s 00:05:28.967 sys 0m0.932s 00:05:28.967 10:14:13 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:28.967 10:14:13 env -- common/autotest_common.sh@10 -- # set +x 00:05:28.967 ************************************ 00:05:28.967 END TEST env 00:05:28.967 ************************************ 00:05:28.967 10:14:13 -- common/autotest_common.sh@1142 -- # return 0 00:05:28.967 10:14:13 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:28.967 10:14:13 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:28.967 10:14:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.967 10:14:13 -- common/autotest_common.sh@10 -- # set +x 00:05:29.226 ************************************ 00:05:29.226 START TEST rpc 00:05:29.226 ************************************ 00:05:29.226 10:14:13 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:29.226 * Looking for test storage... 00:05:29.226 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:29.226 10:14:14 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2204567 00:05:29.226 10:14:14 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:29.226 10:14:14 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:29.226 10:14:14 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2204567 00:05:29.226 10:14:14 rpc -- common/autotest_common.sh@829 -- # '[' -z 2204567 ']' 00:05:29.226 10:14:14 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.226 10:14:14 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:29.226 10:14:14 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
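The rpc suite starting here launches its own spdk_tgt with '-e bdev' and blocks in waitforlisten until the RPC socket answers. A rough manual equivalent, assuming the same workspace paths (the polling loop is a simplification of what waitforlisten actually does):

  ./build/bin/spdk_tgt -e bdev &
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done
  ./scripts/rpc.py rpc_get_methods > /dev/null    # sanity check that the target answers RPCs; not part of the logged run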
00:05:29.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.226 10:14:14 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:29.226 10:14:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.226 [2024-07-14 10:14:14.109679] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:05:29.226 [2024-07-14 10:14:14.109725] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2204567 ] 00:05:29.226 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.226 [2024-07-14 10:14:14.174239] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.485 [2024-07-14 10:14:14.214832] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:29.485 [2024-07-14 10:14:14.214869] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2204567' to capture a snapshot of events at runtime. 00:05:29.485 [2024-07-14 10:14:14.214876] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:29.485 [2024-07-14 10:14:14.214882] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:29.485 [2024-07-14 10:14:14.214887] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2204567 for offline analysis/debug. 00:05:29.485 [2024-07-14 10:14:14.214921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.485 10:14:14 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:29.485 10:14:14 rpc -- common/autotest_common.sh@862 -- # return 0 00:05:29.485 10:14:14 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:29.485 10:14:14 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:29.485 10:14:14 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:29.485 10:14:14 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:29.485 10:14:14 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:29.485 10:14:14 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.485 10:14:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.485 ************************************ 00:05:29.485 START TEST rpc_integrity 00:05:29.485 ************************************ 00:05:29.485 10:14:14 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:29.485 10:14:14 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:29.485 10:14:14 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:29.485 10:14:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.485 10:14:14 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:29.485 10:14:14 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:05:29.485 10:14:14 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:29.744 10:14:14 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:29.744 10:14:14 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:29.744 10:14:14 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:29.744 10:14:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.744 10:14:14 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:29.744 10:14:14 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:29.744 10:14:14 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:29.744 10:14:14 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:29.744 10:14:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.744 10:14:14 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:29.744 10:14:14 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:29.744 { 00:05:29.744 "name": "Malloc0", 00:05:29.744 "aliases": [ 00:05:29.744 "83de67ec-0040-4219-ac1b-e1e4d320edc3" 00:05:29.744 ], 00:05:29.744 "product_name": "Malloc disk", 00:05:29.744 "block_size": 512, 00:05:29.744 "num_blocks": 16384, 00:05:29.744 "uuid": "83de67ec-0040-4219-ac1b-e1e4d320edc3", 00:05:29.744 "assigned_rate_limits": { 00:05:29.744 "rw_ios_per_sec": 0, 00:05:29.744 "rw_mbytes_per_sec": 0, 00:05:29.744 "r_mbytes_per_sec": 0, 00:05:29.744 "w_mbytes_per_sec": 0 00:05:29.744 }, 00:05:29.744 "claimed": false, 00:05:29.744 "zoned": false, 00:05:29.744 "supported_io_types": { 00:05:29.744 "read": true, 00:05:29.744 "write": true, 00:05:29.744 "unmap": true, 00:05:29.744 "flush": true, 00:05:29.744 "reset": true, 00:05:29.744 "nvme_admin": false, 00:05:29.744 "nvme_io": false, 00:05:29.744 "nvme_io_md": false, 00:05:29.744 "write_zeroes": true, 00:05:29.744 "zcopy": true, 00:05:29.744 "get_zone_info": false, 00:05:29.744 "zone_management": false, 00:05:29.744 "zone_append": false, 00:05:29.744 "compare": false, 00:05:29.744 "compare_and_write": false, 00:05:29.744 "abort": true, 00:05:29.744 "seek_hole": false, 00:05:29.744 "seek_data": false, 00:05:29.744 "copy": true, 00:05:29.744 "nvme_iov_md": false 00:05:29.744 }, 00:05:29.744 "memory_domains": [ 00:05:29.744 { 00:05:29.744 "dma_device_id": "system", 00:05:29.744 "dma_device_type": 1 00:05:29.744 }, 00:05:29.744 { 00:05:29.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:29.744 "dma_device_type": 2 00:05:29.744 } 00:05:29.744 ], 00:05:29.744 "driver_specific": {} 00:05:29.744 } 00:05:29.744 ]' 00:05:29.744 10:14:14 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:29.744 10:14:14 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:29.744 10:14:14 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:29.744 10:14:14 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:29.744 10:14:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.744 [2024-07-14 10:14:14.564013] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:29.744 [2024-07-14 10:14:14.564039] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:29.744 [2024-07-14 10:14:14.564050] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20fcc60 00:05:29.744 [2024-07-14 10:14:14.564057] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:29.744 
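The passthru notices above are emitted while rpc_integrity builds a malloc/passthru pair; the full create/verify/delete sequence the test drives through rpc_cmd can be reproduced with scripts/rpc.py (the jq checks mirror the test's length assertions):

  ./scripts/rpc.py bdev_malloc_create 8 512                     # 8 MB, 512-byte blocks -> Malloc0
  ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
  ./scripts/rpc.py bdev_get_bdevs | jq length                   # expects 2
  ./scripts/rpc.py bdev_passthru_delete Passthru0
  ./scripts/rpc.py bdev_malloc_delete Malloc0
  ./scripts/rpc.py bdev_get_bdevs | jq length                   # expects 0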
[2024-07-14 10:14:14.565121] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:29.744 [2024-07-14 10:14:14.565141] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:29.744 Passthru0 00:05:29.744 10:14:14 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:29.744 10:14:14 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:29.744 10:14:14 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:29.744 10:14:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.744 10:14:14 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:29.744 10:14:14 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:29.744 { 00:05:29.744 "name": "Malloc0", 00:05:29.744 "aliases": [ 00:05:29.744 "83de67ec-0040-4219-ac1b-e1e4d320edc3" 00:05:29.744 ], 00:05:29.744 "product_name": "Malloc disk", 00:05:29.744 "block_size": 512, 00:05:29.744 "num_blocks": 16384, 00:05:29.744 "uuid": "83de67ec-0040-4219-ac1b-e1e4d320edc3", 00:05:29.744 "assigned_rate_limits": { 00:05:29.744 "rw_ios_per_sec": 0, 00:05:29.744 "rw_mbytes_per_sec": 0, 00:05:29.744 "r_mbytes_per_sec": 0, 00:05:29.744 "w_mbytes_per_sec": 0 00:05:29.744 }, 00:05:29.744 "claimed": true, 00:05:29.744 "claim_type": "exclusive_write", 00:05:29.744 "zoned": false, 00:05:29.744 "supported_io_types": { 00:05:29.744 "read": true, 00:05:29.744 "write": true, 00:05:29.744 "unmap": true, 00:05:29.744 "flush": true, 00:05:29.744 "reset": true, 00:05:29.744 "nvme_admin": false, 00:05:29.744 "nvme_io": false, 00:05:29.744 "nvme_io_md": false, 00:05:29.744 "write_zeroes": true, 00:05:29.744 "zcopy": true, 00:05:29.744 "get_zone_info": false, 00:05:29.744 "zone_management": false, 00:05:29.744 "zone_append": false, 00:05:29.744 "compare": false, 00:05:29.744 "compare_and_write": false, 00:05:29.744 "abort": true, 00:05:29.744 "seek_hole": false, 00:05:29.744 "seek_data": false, 00:05:29.744 "copy": true, 00:05:29.744 "nvme_iov_md": false 00:05:29.744 }, 00:05:29.744 "memory_domains": [ 00:05:29.744 { 00:05:29.744 "dma_device_id": "system", 00:05:29.744 "dma_device_type": 1 00:05:29.744 }, 00:05:29.744 { 00:05:29.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:29.744 "dma_device_type": 2 00:05:29.744 } 00:05:29.744 ], 00:05:29.744 "driver_specific": {} 00:05:29.744 }, 00:05:29.744 { 00:05:29.744 "name": "Passthru0", 00:05:29.744 "aliases": [ 00:05:29.744 "d3111f30-0b3b-5bde-bf2d-cc25d907763d" 00:05:29.744 ], 00:05:29.744 "product_name": "passthru", 00:05:29.744 "block_size": 512, 00:05:29.744 "num_blocks": 16384, 00:05:29.744 "uuid": "d3111f30-0b3b-5bde-bf2d-cc25d907763d", 00:05:29.744 "assigned_rate_limits": { 00:05:29.744 "rw_ios_per_sec": 0, 00:05:29.744 "rw_mbytes_per_sec": 0, 00:05:29.744 "r_mbytes_per_sec": 0, 00:05:29.744 "w_mbytes_per_sec": 0 00:05:29.744 }, 00:05:29.745 "claimed": false, 00:05:29.745 "zoned": false, 00:05:29.745 "supported_io_types": { 00:05:29.745 "read": true, 00:05:29.745 "write": true, 00:05:29.745 "unmap": true, 00:05:29.745 "flush": true, 00:05:29.745 "reset": true, 00:05:29.745 "nvme_admin": false, 00:05:29.745 "nvme_io": false, 00:05:29.745 "nvme_io_md": false, 00:05:29.745 "write_zeroes": true, 00:05:29.745 "zcopy": true, 00:05:29.745 "get_zone_info": false, 00:05:29.745 "zone_management": false, 00:05:29.745 "zone_append": false, 00:05:29.745 "compare": false, 00:05:29.745 "compare_and_write": false, 00:05:29.745 "abort": true, 00:05:29.745 "seek_hole": false, 
00:05:29.745 "seek_data": false, 00:05:29.745 "copy": true, 00:05:29.745 "nvme_iov_md": false 00:05:29.745 }, 00:05:29.745 "memory_domains": [ 00:05:29.745 { 00:05:29.745 "dma_device_id": "system", 00:05:29.745 "dma_device_type": 1 00:05:29.745 }, 00:05:29.745 { 00:05:29.745 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:29.745 "dma_device_type": 2 00:05:29.745 } 00:05:29.745 ], 00:05:29.745 "driver_specific": { 00:05:29.745 "passthru": { 00:05:29.745 "name": "Passthru0", 00:05:29.745 "base_bdev_name": "Malloc0" 00:05:29.745 } 00:05:29.745 } 00:05:29.745 } 00:05:29.745 ]' 00:05:29.745 10:14:14 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:29.745 10:14:14 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:29.745 10:14:14 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:29.745 10:14:14 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:29.745 10:14:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.745 10:14:14 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:29.745 10:14:14 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:29.745 10:14:14 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:29.745 10:14:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.745 10:14:14 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:29.745 10:14:14 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:29.745 10:14:14 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:29.745 10:14:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.745 10:14:14 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:29.745 10:14:14 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:29.745 10:14:14 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:29.745 10:14:14 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:29.745 00:05:29.745 real 0m0.268s 00:05:29.745 user 0m0.167s 00:05:29.745 sys 0m0.037s 00:05:29.745 10:14:14 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.745 10:14:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.745 ************************************ 00:05:29.745 END TEST rpc_integrity 00:05:29.745 ************************************ 00:05:30.003 10:14:14 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:30.003 10:14:14 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:30.003 10:14:14 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:30.003 10:14:14 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.003 10:14:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.003 ************************************ 00:05:30.003 START TEST rpc_plugins 00:05:30.003 ************************************ 00:05:30.003 10:14:14 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:05:30.003 10:14:14 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:30.003 10:14:14 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.003 10:14:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:30.004 10:14:14 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.004 10:14:14 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:30.004 10:14:14 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
rpc_cmd bdev_get_bdevs 00:05:30.004 10:14:14 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.004 10:14:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:30.004 10:14:14 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.004 10:14:14 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:30.004 { 00:05:30.004 "name": "Malloc1", 00:05:30.004 "aliases": [ 00:05:30.004 "1c89fc44-0e6d-4bef-89b4-cb82b596e865" 00:05:30.004 ], 00:05:30.004 "product_name": "Malloc disk", 00:05:30.004 "block_size": 4096, 00:05:30.004 "num_blocks": 256, 00:05:30.004 "uuid": "1c89fc44-0e6d-4bef-89b4-cb82b596e865", 00:05:30.004 "assigned_rate_limits": { 00:05:30.004 "rw_ios_per_sec": 0, 00:05:30.004 "rw_mbytes_per_sec": 0, 00:05:30.004 "r_mbytes_per_sec": 0, 00:05:30.004 "w_mbytes_per_sec": 0 00:05:30.004 }, 00:05:30.004 "claimed": false, 00:05:30.004 "zoned": false, 00:05:30.004 "supported_io_types": { 00:05:30.004 "read": true, 00:05:30.004 "write": true, 00:05:30.004 "unmap": true, 00:05:30.004 "flush": true, 00:05:30.004 "reset": true, 00:05:30.004 "nvme_admin": false, 00:05:30.004 "nvme_io": false, 00:05:30.004 "nvme_io_md": false, 00:05:30.004 "write_zeroes": true, 00:05:30.004 "zcopy": true, 00:05:30.004 "get_zone_info": false, 00:05:30.004 "zone_management": false, 00:05:30.004 "zone_append": false, 00:05:30.004 "compare": false, 00:05:30.004 "compare_and_write": false, 00:05:30.004 "abort": true, 00:05:30.004 "seek_hole": false, 00:05:30.004 "seek_data": false, 00:05:30.004 "copy": true, 00:05:30.004 "nvme_iov_md": false 00:05:30.004 }, 00:05:30.004 "memory_domains": [ 00:05:30.004 { 00:05:30.004 "dma_device_id": "system", 00:05:30.004 "dma_device_type": 1 00:05:30.004 }, 00:05:30.004 { 00:05:30.004 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.004 "dma_device_type": 2 00:05:30.004 } 00:05:30.004 ], 00:05:30.004 "driver_specific": {} 00:05:30.004 } 00:05:30.004 ]' 00:05:30.004 10:14:14 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:30.004 10:14:14 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:30.004 10:14:14 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:30.004 10:14:14 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.004 10:14:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:30.004 10:14:14 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.004 10:14:14 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:30.004 10:14:14 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.004 10:14:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:30.004 10:14:14 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.004 10:14:14 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:30.004 10:14:14 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:30.004 10:14:14 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:30.004 00:05:30.004 real 0m0.142s 00:05:30.004 user 0m0.087s 00:05:30.004 sys 0m0.019s 00:05:30.004 10:14:14 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.004 10:14:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:30.004 ************************************ 00:05:30.004 END TEST rpc_plugins 00:05:30.004 ************************************ 00:05:30.004 10:14:14 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:30.004 10:14:14 rpc -- rpc/rpc.sh@75 -- # 
run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:30.004 10:14:14 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:30.004 10:14:14 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.004 10:14:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.004 ************************************ 00:05:30.004 START TEST rpc_trace_cmd_test 00:05:30.004 ************************************ 00:05:30.004 10:14:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:05:30.004 10:14:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:30.004 10:14:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:30.004 10:14:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.004 10:14:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:30.263 10:14:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.263 10:14:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:30.263 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2204567", 00:05:30.263 "tpoint_group_mask": "0x8", 00:05:30.263 "iscsi_conn": { 00:05:30.263 "mask": "0x2", 00:05:30.263 "tpoint_mask": "0x0" 00:05:30.263 }, 00:05:30.263 "scsi": { 00:05:30.263 "mask": "0x4", 00:05:30.263 "tpoint_mask": "0x0" 00:05:30.263 }, 00:05:30.263 "bdev": { 00:05:30.263 "mask": "0x8", 00:05:30.263 "tpoint_mask": "0xffffffffffffffff" 00:05:30.263 }, 00:05:30.263 "nvmf_rdma": { 00:05:30.263 "mask": "0x10", 00:05:30.263 "tpoint_mask": "0x0" 00:05:30.263 }, 00:05:30.263 "nvmf_tcp": { 00:05:30.263 "mask": "0x20", 00:05:30.263 "tpoint_mask": "0x0" 00:05:30.263 }, 00:05:30.263 "ftl": { 00:05:30.263 "mask": "0x40", 00:05:30.263 "tpoint_mask": "0x0" 00:05:30.263 }, 00:05:30.263 "blobfs": { 00:05:30.263 "mask": "0x80", 00:05:30.263 "tpoint_mask": "0x0" 00:05:30.263 }, 00:05:30.263 "dsa": { 00:05:30.263 "mask": "0x200", 00:05:30.263 "tpoint_mask": "0x0" 00:05:30.263 }, 00:05:30.263 "thread": { 00:05:30.263 "mask": "0x400", 00:05:30.263 "tpoint_mask": "0x0" 00:05:30.263 }, 00:05:30.263 "nvme_pcie": { 00:05:30.263 "mask": "0x800", 00:05:30.263 "tpoint_mask": "0x0" 00:05:30.263 }, 00:05:30.263 "iaa": { 00:05:30.263 "mask": "0x1000", 00:05:30.263 "tpoint_mask": "0x0" 00:05:30.263 }, 00:05:30.263 "nvme_tcp": { 00:05:30.263 "mask": "0x2000", 00:05:30.263 "tpoint_mask": "0x0" 00:05:30.263 }, 00:05:30.263 "bdev_nvme": { 00:05:30.263 "mask": "0x4000", 00:05:30.263 "tpoint_mask": "0x0" 00:05:30.263 }, 00:05:30.263 "sock": { 00:05:30.263 "mask": "0x8000", 00:05:30.263 "tpoint_mask": "0x0" 00:05:30.263 } 00:05:30.263 }' 00:05:30.263 10:14:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:30.263 10:14:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:30.263 10:14:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:30.263 10:14:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:30.263 10:14:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:30.263 10:14:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:30.263 10:14:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:30.263 10:14:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:30.263 10:14:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:30.263 10:14:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 
00:05:30.263 00:05:30.263 real 0m0.221s 00:05:30.263 user 0m0.183s 00:05:30.263 sys 0m0.028s 00:05:30.263 10:14:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.263 10:14:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:30.263 ************************************ 00:05:30.263 END TEST rpc_trace_cmd_test 00:05:30.263 ************************************ 00:05:30.263 10:14:15 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:30.263 10:14:15 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:30.263 10:14:15 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:30.263 10:14:15 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:30.263 10:14:15 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:30.263 10:14:15 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.263 10:14:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.522 ************************************ 00:05:30.522 START TEST rpc_daemon_integrity 00:05:30.522 ************************************ 00:05:30.522 10:14:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:30.522 10:14:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:30.522 10:14:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.522 10:14:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.522 10:14:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.522 10:14:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:30.522 10:14:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:30.522 10:14:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:30.522 10:14:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:30.522 10:14:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.522 10:14:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.522 10:14:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.522 10:14:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:30.522 10:14:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:30.522 10:14:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.522 10:14:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.522 10:14:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.522 10:14:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:30.522 { 00:05:30.522 "name": "Malloc2", 00:05:30.522 "aliases": [ 00:05:30.522 "dbd463e2-36ea-4b5c-a1d2-f110a590a2d3" 00:05:30.522 ], 00:05:30.522 "product_name": "Malloc disk", 00:05:30.522 "block_size": 512, 00:05:30.522 "num_blocks": 16384, 00:05:30.522 "uuid": "dbd463e2-36ea-4b5c-a1d2-f110a590a2d3", 00:05:30.522 "assigned_rate_limits": { 00:05:30.522 "rw_ios_per_sec": 0, 00:05:30.522 "rw_mbytes_per_sec": 0, 00:05:30.522 "r_mbytes_per_sec": 0, 00:05:30.522 "w_mbytes_per_sec": 0 00:05:30.522 }, 00:05:30.522 "claimed": false, 00:05:30.522 "zoned": false, 00:05:30.522 "supported_io_types": { 00:05:30.522 "read": true, 00:05:30.522 "write": true, 00:05:30.522 "unmap": true, 00:05:30.522 "flush": true, 00:05:30.522 "reset": true, 00:05:30.522 "nvme_admin": false, 00:05:30.522 "nvme_io": false, 
00:05:30.522 "nvme_io_md": false, 00:05:30.522 "write_zeroes": true, 00:05:30.522 "zcopy": true, 00:05:30.522 "get_zone_info": false, 00:05:30.522 "zone_management": false, 00:05:30.522 "zone_append": false, 00:05:30.522 "compare": false, 00:05:30.522 "compare_and_write": false, 00:05:30.522 "abort": true, 00:05:30.522 "seek_hole": false, 00:05:30.522 "seek_data": false, 00:05:30.522 "copy": true, 00:05:30.522 "nvme_iov_md": false 00:05:30.522 }, 00:05:30.522 "memory_domains": [ 00:05:30.522 { 00:05:30.522 "dma_device_id": "system", 00:05:30.522 "dma_device_type": 1 00:05:30.522 }, 00:05:30.522 { 00:05:30.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.522 "dma_device_type": 2 00:05:30.522 } 00:05:30.522 ], 00:05:30.522 "driver_specific": {} 00:05:30.522 } 00:05:30.522 ]' 00:05:30.522 10:14:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:30.522 10:14:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:30.522 10:14:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:30.522 10:14:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.522 10:14:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.522 [2024-07-14 10:14:15.386255] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:30.522 [2024-07-14 10:14:15.386281] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:30.522 [2024-07-14 10:14:15.386293] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x22ae470 00:05:30.522 [2024-07-14 10:14:15.386299] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:30.522 [2024-07-14 10:14:15.387248] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:30.522 [2024-07-14 10:14:15.387267] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:30.522 Passthru0 00:05:30.522 10:14:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.522 10:14:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:30.522 10:14:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.522 10:14:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.522 10:14:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.522 10:14:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:30.522 { 00:05:30.522 "name": "Malloc2", 00:05:30.522 "aliases": [ 00:05:30.522 "dbd463e2-36ea-4b5c-a1d2-f110a590a2d3" 00:05:30.522 ], 00:05:30.522 "product_name": "Malloc disk", 00:05:30.522 "block_size": 512, 00:05:30.522 "num_blocks": 16384, 00:05:30.522 "uuid": "dbd463e2-36ea-4b5c-a1d2-f110a590a2d3", 00:05:30.522 "assigned_rate_limits": { 00:05:30.522 "rw_ios_per_sec": 0, 00:05:30.522 "rw_mbytes_per_sec": 0, 00:05:30.522 "r_mbytes_per_sec": 0, 00:05:30.522 "w_mbytes_per_sec": 0 00:05:30.522 }, 00:05:30.522 "claimed": true, 00:05:30.522 "claim_type": "exclusive_write", 00:05:30.522 "zoned": false, 00:05:30.522 "supported_io_types": { 00:05:30.522 "read": true, 00:05:30.522 "write": true, 00:05:30.522 "unmap": true, 00:05:30.522 "flush": true, 00:05:30.522 "reset": true, 00:05:30.522 "nvme_admin": false, 00:05:30.522 "nvme_io": false, 00:05:30.522 "nvme_io_md": false, 00:05:30.522 "write_zeroes": true, 00:05:30.522 "zcopy": true, 00:05:30.522 "get_zone_info": 
false, 00:05:30.522 "zone_management": false, 00:05:30.522 "zone_append": false, 00:05:30.522 "compare": false, 00:05:30.522 "compare_and_write": false, 00:05:30.522 "abort": true, 00:05:30.522 "seek_hole": false, 00:05:30.522 "seek_data": false, 00:05:30.522 "copy": true, 00:05:30.522 "nvme_iov_md": false 00:05:30.522 }, 00:05:30.522 "memory_domains": [ 00:05:30.522 { 00:05:30.522 "dma_device_id": "system", 00:05:30.522 "dma_device_type": 1 00:05:30.522 }, 00:05:30.522 { 00:05:30.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.522 "dma_device_type": 2 00:05:30.522 } 00:05:30.522 ], 00:05:30.522 "driver_specific": {} 00:05:30.522 }, 00:05:30.522 { 00:05:30.522 "name": "Passthru0", 00:05:30.523 "aliases": [ 00:05:30.523 "664edd8c-71a9-57e6-aed3-07fd0c5a8ae9" 00:05:30.523 ], 00:05:30.523 "product_name": "passthru", 00:05:30.523 "block_size": 512, 00:05:30.523 "num_blocks": 16384, 00:05:30.523 "uuid": "664edd8c-71a9-57e6-aed3-07fd0c5a8ae9", 00:05:30.523 "assigned_rate_limits": { 00:05:30.523 "rw_ios_per_sec": 0, 00:05:30.523 "rw_mbytes_per_sec": 0, 00:05:30.523 "r_mbytes_per_sec": 0, 00:05:30.523 "w_mbytes_per_sec": 0 00:05:30.523 }, 00:05:30.523 "claimed": false, 00:05:30.523 "zoned": false, 00:05:30.523 "supported_io_types": { 00:05:30.523 "read": true, 00:05:30.523 "write": true, 00:05:30.523 "unmap": true, 00:05:30.523 "flush": true, 00:05:30.523 "reset": true, 00:05:30.523 "nvme_admin": false, 00:05:30.523 "nvme_io": false, 00:05:30.523 "nvme_io_md": false, 00:05:30.523 "write_zeroes": true, 00:05:30.523 "zcopy": true, 00:05:30.523 "get_zone_info": false, 00:05:30.523 "zone_management": false, 00:05:30.523 "zone_append": false, 00:05:30.523 "compare": false, 00:05:30.523 "compare_and_write": false, 00:05:30.523 "abort": true, 00:05:30.523 "seek_hole": false, 00:05:30.523 "seek_data": false, 00:05:30.523 "copy": true, 00:05:30.523 "nvme_iov_md": false 00:05:30.523 }, 00:05:30.523 "memory_domains": [ 00:05:30.523 { 00:05:30.523 "dma_device_id": "system", 00:05:30.523 "dma_device_type": 1 00:05:30.523 }, 00:05:30.523 { 00:05:30.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.523 "dma_device_type": 2 00:05:30.523 } 00:05:30.523 ], 00:05:30.523 "driver_specific": { 00:05:30.523 "passthru": { 00:05:30.523 "name": "Passthru0", 00:05:30.523 "base_bdev_name": "Malloc2" 00:05:30.523 } 00:05:30.523 } 00:05:30.523 } 00:05:30.523 ]' 00:05:30.523 10:14:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:30.523 10:14:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:30.523 10:14:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:30.523 10:14:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.523 10:14:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.523 10:14:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.523 10:14:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:30.523 10:14:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.523 10:14:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.523 10:14:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.523 10:14:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:30.523 10:14:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.523 10:14:15 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.523 10:14:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.523 10:14:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:30.523 10:14:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:30.782 10:14:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:30.782 00:05:30.782 real 0m0.270s 00:05:30.782 user 0m0.162s 00:05:30.782 sys 0m0.043s 00:05:30.782 10:14:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.782 10:14:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.782 ************************************ 00:05:30.782 END TEST rpc_daemon_integrity 00:05:30.782 ************************************ 00:05:30.782 10:14:15 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:30.782 10:14:15 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:30.782 10:14:15 rpc -- rpc/rpc.sh@84 -- # killprocess 2204567 00:05:30.782 10:14:15 rpc -- common/autotest_common.sh@948 -- # '[' -z 2204567 ']' 00:05:30.782 10:14:15 rpc -- common/autotest_common.sh@952 -- # kill -0 2204567 00:05:30.782 10:14:15 rpc -- common/autotest_common.sh@953 -- # uname 00:05:30.782 10:14:15 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:30.782 10:14:15 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2204567 00:05:30.782 10:14:15 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:30.782 10:14:15 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:30.782 10:14:15 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2204567' 00:05:30.782 killing process with pid 2204567 00:05:30.782 10:14:15 rpc -- common/autotest_common.sh@967 -- # kill 2204567 00:05:30.782 10:14:15 rpc -- common/autotest_common.sh@972 -- # wait 2204567 00:05:31.041 00:05:31.041 real 0m1.927s 00:05:31.041 user 0m2.466s 00:05:31.041 sys 0m0.663s 00:05:31.041 10:14:15 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.041 10:14:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.041 ************************************ 00:05:31.041 END TEST rpc 00:05:31.041 ************************************ 00:05:31.041 10:14:15 -- common/autotest_common.sh@1142 -- # return 0 00:05:31.041 10:14:15 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:31.041 10:14:15 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:31.041 10:14:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.041 10:14:15 -- common/autotest_common.sh@10 -- # set +x 00:05:31.041 ************************************ 00:05:31.041 START TEST skip_rpc 00:05:31.041 ************************************ 00:05:31.041 10:14:15 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:31.300 * Looking for test storage... 
00:05:31.300 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:31.300 10:14:16 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:31.300 10:14:16 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:31.300 10:14:16 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:31.300 10:14:16 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:31.300 10:14:16 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.300 10:14:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.300 ************************************ 00:05:31.300 START TEST skip_rpc 00:05:31.300 ************************************ 00:05:31.300 10:14:16 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:05:31.300 10:14:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2204979 00:05:31.300 10:14:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:31.300 10:14:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:31.300 10:14:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:31.300 [2024-07-14 10:14:16.141535] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:05:31.300 [2024-07-14 10:14:16.141572] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2204979 ] 00:05:31.301 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.301 [2024-07-14 10:14:16.206365] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.301 [2024-07-14 10:14:16.246678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.572 10:14:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:36.572 10:14:21 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:36.572 10:14:21 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:36.572 10:14:21 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:36.572 10:14:21 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:36.572 10:14:21 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:36.572 10:14:21 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:36.572 10:14:21 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:36.572 10:14:21 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:36.572 10:14:21 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.572 10:14:21 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:36.572 10:14:21 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:36.572 10:14:21 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:36.572 10:14:21 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:36.572 10:14:21 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:36.572 10:14:21 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:36.572 10:14:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2204979 00:05:36.572 10:14:21 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 2204979 ']' 00:05:36.572 10:14:21 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 2204979 00:05:36.572 10:14:21 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:05:36.572 10:14:21 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:36.572 10:14:21 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2204979 00:05:36.572 10:14:21 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:36.572 10:14:21 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:36.572 10:14:21 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2204979' 00:05:36.572 killing process with pid 2204979 00:05:36.572 10:14:21 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 2204979 00:05:36.572 10:14:21 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 2204979 00:05:36.572 00:05:36.572 real 0m5.351s 00:05:36.572 user 0m5.111s 00:05:36.572 sys 0m0.264s 00:05:36.572 10:14:21 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:36.572 10:14:21 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.572 ************************************ 00:05:36.572 END TEST skip_rpc 00:05:36.572 ************************************ 00:05:36.572 10:14:21 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:36.572 10:14:21 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:36.572 10:14:21 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:36.573 10:14:21 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.573 10:14:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.573 ************************************ 00:05:36.573 START TEST skip_rpc_with_json 00:05:36.573 ************************************ 00:05:36.573 10:14:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:05:36.573 10:14:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:36.573 10:14:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2205923 00:05:36.573 10:14:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:36.573 10:14:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:36.573 10:14:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2205923 00:05:36.573 10:14:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 2205923 ']' 00:05:36.573 10:14:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.573 10:14:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:36.573 10:14:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:36.573 10:14:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:36.573 10:14:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:36.831 [2024-07-14 10:14:21.556089] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:05:36.831 [2024-07-14 10:14:21.556127] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2205923 ] 00:05:36.831 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.831 [2024-07-14 10:14:21.624602] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.831 [2024-07-14 10:14:21.665577] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.091 10:14:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:37.091 10:14:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:05:37.091 10:14:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:37.091 10:14:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.091 10:14:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:37.091 [2024-07-14 10:14:21.855557] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:37.091 request: 00:05:37.091 { 00:05:37.091 "trtype": "tcp", 00:05:37.091 "method": "nvmf_get_transports", 00:05:37.091 "req_id": 1 00:05:37.091 } 00:05:37.091 Got JSON-RPC error response 00:05:37.091 response: 00:05:37.091 { 00:05:37.091 "code": -19, 00:05:37.091 "message": "No such device" 00:05:37.091 } 00:05:37.091 10:14:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:37.091 10:14:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:37.091 10:14:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.091 10:14:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:37.091 [2024-07-14 10:14:21.867659] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:37.091 10:14:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.091 10:14:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:37.091 10:14:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.091 10:14:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:37.091 10:14:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.091 10:14:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:37.091 { 00:05:37.091 "subsystems": [ 00:05:37.091 { 00:05:37.091 "subsystem": "vfio_user_target", 00:05:37.091 "config": null 00:05:37.091 }, 00:05:37.091 { 00:05:37.091 "subsystem": "keyring", 00:05:37.091 "config": [] 00:05:37.091 }, 00:05:37.091 { 00:05:37.091 "subsystem": "iobuf", 00:05:37.091 "config": [ 00:05:37.091 { 00:05:37.091 "method": "iobuf_set_options", 00:05:37.091 "params": { 00:05:37.091 "small_pool_count": 8192, 00:05:37.091 "large_pool_count": 1024, 00:05:37.091 "small_bufsize": 8192, 00:05:37.091 "large_bufsize": 
135168 00:05:37.091 } 00:05:37.091 } 00:05:37.091 ] 00:05:37.091 }, 00:05:37.091 { 00:05:37.092 "subsystem": "sock", 00:05:37.092 "config": [ 00:05:37.092 { 00:05:37.092 "method": "sock_set_default_impl", 00:05:37.092 "params": { 00:05:37.092 "impl_name": "posix" 00:05:37.092 } 00:05:37.092 }, 00:05:37.092 { 00:05:37.092 "method": "sock_impl_set_options", 00:05:37.092 "params": { 00:05:37.092 "impl_name": "ssl", 00:05:37.092 "recv_buf_size": 4096, 00:05:37.092 "send_buf_size": 4096, 00:05:37.092 "enable_recv_pipe": true, 00:05:37.092 "enable_quickack": false, 00:05:37.092 "enable_placement_id": 0, 00:05:37.092 "enable_zerocopy_send_server": true, 00:05:37.092 "enable_zerocopy_send_client": false, 00:05:37.092 "zerocopy_threshold": 0, 00:05:37.092 "tls_version": 0, 00:05:37.092 "enable_ktls": false 00:05:37.092 } 00:05:37.092 }, 00:05:37.092 { 00:05:37.092 "method": "sock_impl_set_options", 00:05:37.092 "params": { 00:05:37.092 "impl_name": "posix", 00:05:37.092 "recv_buf_size": 2097152, 00:05:37.092 "send_buf_size": 2097152, 00:05:37.092 "enable_recv_pipe": true, 00:05:37.092 "enable_quickack": false, 00:05:37.092 "enable_placement_id": 0, 00:05:37.092 "enable_zerocopy_send_server": true, 00:05:37.092 "enable_zerocopy_send_client": false, 00:05:37.092 "zerocopy_threshold": 0, 00:05:37.092 "tls_version": 0, 00:05:37.092 "enable_ktls": false 00:05:37.092 } 00:05:37.092 } 00:05:37.092 ] 00:05:37.092 }, 00:05:37.092 { 00:05:37.092 "subsystem": "vmd", 00:05:37.092 "config": [] 00:05:37.092 }, 00:05:37.092 { 00:05:37.092 "subsystem": "accel", 00:05:37.092 "config": [ 00:05:37.092 { 00:05:37.092 "method": "accel_set_options", 00:05:37.092 "params": { 00:05:37.092 "small_cache_size": 128, 00:05:37.092 "large_cache_size": 16, 00:05:37.092 "task_count": 2048, 00:05:37.092 "sequence_count": 2048, 00:05:37.092 "buf_count": 2048 00:05:37.092 } 00:05:37.092 } 00:05:37.092 ] 00:05:37.092 }, 00:05:37.092 { 00:05:37.092 "subsystem": "bdev", 00:05:37.092 "config": [ 00:05:37.092 { 00:05:37.092 "method": "bdev_set_options", 00:05:37.092 "params": { 00:05:37.092 "bdev_io_pool_size": 65535, 00:05:37.092 "bdev_io_cache_size": 256, 00:05:37.092 "bdev_auto_examine": true, 00:05:37.092 "iobuf_small_cache_size": 128, 00:05:37.092 "iobuf_large_cache_size": 16 00:05:37.092 } 00:05:37.092 }, 00:05:37.092 { 00:05:37.092 "method": "bdev_raid_set_options", 00:05:37.092 "params": { 00:05:37.092 "process_window_size_kb": 1024 00:05:37.092 } 00:05:37.092 }, 00:05:37.092 { 00:05:37.092 "method": "bdev_iscsi_set_options", 00:05:37.092 "params": { 00:05:37.092 "timeout_sec": 30 00:05:37.092 } 00:05:37.092 }, 00:05:37.092 { 00:05:37.092 "method": "bdev_nvme_set_options", 00:05:37.092 "params": { 00:05:37.092 "action_on_timeout": "none", 00:05:37.092 "timeout_us": 0, 00:05:37.092 "timeout_admin_us": 0, 00:05:37.092 "keep_alive_timeout_ms": 10000, 00:05:37.092 "arbitration_burst": 0, 00:05:37.092 "low_priority_weight": 0, 00:05:37.092 "medium_priority_weight": 0, 00:05:37.092 "high_priority_weight": 0, 00:05:37.092 "nvme_adminq_poll_period_us": 10000, 00:05:37.092 "nvme_ioq_poll_period_us": 0, 00:05:37.092 "io_queue_requests": 0, 00:05:37.092 "delay_cmd_submit": true, 00:05:37.092 "transport_retry_count": 4, 00:05:37.092 "bdev_retry_count": 3, 00:05:37.092 "transport_ack_timeout": 0, 00:05:37.092 "ctrlr_loss_timeout_sec": 0, 00:05:37.092 "reconnect_delay_sec": 0, 00:05:37.092 "fast_io_fail_timeout_sec": 0, 00:05:37.092 "disable_auto_failback": false, 00:05:37.092 "generate_uuids": false, 00:05:37.092 "transport_tos": 0, 
00:05:37.092 "nvme_error_stat": false, 00:05:37.092 "rdma_srq_size": 0, 00:05:37.092 "io_path_stat": false, 00:05:37.092 "allow_accel_sequence": false, 00:05:37.092 "rdma_max_cq_size": 0, 00:05:37.092 "rdma_cm_event_timeout_ms": 0, 00:05:37.092 "dhchap_digests": [ 00:05:37.092 "sha256", 00:05:37.092 "sha384", 00:05:37.092 "sha512" 00:05:37.092 ], 00:05:37.092 "dhchap_dhgroups": [ 00:05:37.092 "null", 00:05:37.092 "ffdhe2048", 00:05:37.092 "ffdhe3072", 00:05:37.092 "ffdhe4096", 00:05:37.092 "ffdhe6144", 00:05:37.092 "ffdhe8192" 00:05:37.092 ] 00:05:37.092 } 00:05:37.092 }, 00:05:37.092 { 00:05:37.092 "method": "bdev_nvme_set_hotplug", 00:05:37.092 "params": { 00:05:37.092 "period_us": 100000, 00:05:37.092 "enable": false 00:05:37.092 } 00:05:37.092 }, 00:05:37.092 { 00:05:37.092 "method": "bdev_wait_for_examine" 00:05:37.092 } 00:05:37.092 ] 00:05:37.092 }, 00:05:37.092 { 00:05:37.092 "subsystem": "scsi", 00:05:37.092 "config": null 00:05:37.092 }, 00:05:37.092 { 00:05:37.092 "subsystem": "scheduler", 00:05:37.092 "config": [ 00:05:37.092 { 00:05:37.092 "method": "framework_set_scheduler", 00:05:37.092 "params": { 00:05:37.092 "name": "static" 00:05:37.092 } 00:05:37.092 } 00:05:37.092 ] 00:05:37.092 }, 00:05:37.092 { 00:05:37.092 "subsystem": "vhost_scsi", 00:05:37.092 "config": [] 00:05:37.092 }, 00:05:37.092 { 00:05:37.092 "subsystem": "vhost_blk", 00:05:37.092 "config": [] 00:05:37.092 }, 00:05:37.092 { 00:05:37.092 "subsystem": "ublk", 00:05:37.092 "config": [] 00:05:37.092 }, 00:05:37.092 { 00:05:37.092 "subsystem": "nbd", 00:05:37.092 "config": [] 00:05:37.092 }, 00:05:37.092 { 00:05:37.092 "subsystem": "nvmf", 00:05:37.092 "config": [ 00:05:37.092 { 00:05:37.092 "method": "nvmf_set_config", 00:05:37.092 "params": { 00:05:37.092 "discovery_filter": "match_any", 00:05:37.092 "admin_cmd_passthru": { 00:05:37.092 "identify_ctrlr": false 00:05:37.092 } 00:05:37.092 } 00:05:37.092 }, 00:05:37.092 { 00:05:37.092 "method": "nvmf_set_max_subsystems", 00:05:37.092 "params": { 00:05:37.092 "max_subsystems": 1024 00:05:37.092 } 00:05:37.092 }, 00:05:37.092 { 00:05:37.092 "method": "nvmf_set_crdt", 00:05:37.092 "params": { 00:05:37.092 "crdt1": 0, 00:05:37.092 "crdt2": 0, 00:05:37.092 "crdt3": 0 00:05:37.092 } 00:05:37.092 }, 00:05:37.092 { 00:05:37.092 "method": "nvmf_create_transport", 00:05:37.092 "params": { 00:05:37.092 "trtype": "TCP", 00:05:37.092 "max_queue_depth": 128, 00:05:37.092 "max_io_qpairs_per_ctrlr": 127, 00:05:37.092 "in_capsule_data_size": 4096, 00:05:37.092 "max_io_size": 131072, 00:05:37.092 "io_unit_size": 131072, 00:05:37.092 "max_aq_depth": 128, 00:05:37.092 "num_shared_buffers": 511, 00:05:37.092 "buf_cache_size": 4294967295, 00:05:37.092 "dif_insert_or_strip": false, 00:05:37.092 "zcopy": false, 00:05:37.092 "c2h_success": true, 00:05:37.092 "sock_priority": 0, 00:05:37.092 "abort_timeout_sec": 1, 00:05:37.092 "ack_timeout": 0, 00:05:37.092 "data_wr_pool_size": 0 00:05:37.092 } 00:05:37.092 } 00:05:37.092 ] 00:05:37.092 }, 00:05:37.092 { 00:05:37.092 "subsystem": "iscsi", 00:05:37.092 "config": [ 00:05:37.092 { 00:05:37.092 "method": "iscsi_set_options", 00:05:37.092 "params": { 00:05:37.092 "node_base": "iqn.2016-06.io.spdk", 00:05:37.092 "max_sessions": 128, 00:05:37.092 "max_connections_per_session": 2, 00:05:37.092 "max_queue_depth": 64, 00:05:37.092 "default_time2wait": 2, 00:05:37.092 "default_time2retain": 20, 00:05:37.092 "first_burst_length": 8192, 00:05:37.092 "immediate_data": true, 00:05:37.092 "allow_duplicated_isid": false, 00:05:37.092 
"error_recovery_level": 0, 00:05:37.092 "nop_timeout": 60, 00:05:37.092 "nop_in_interval": 30, 00:05:37.092 "disable_chap": false, 00:05:37.092 "require_chap": false, 00:05:37.092 "mutual_chap": false, 00:05:37.092 "chap_group": 0, 00:05:37.092 "max_large_datain_per_connection": 64, 00:05:37.092 "max_r2t_per_connection": 4, 00:05:37.092 "pdu_pool_size": 36864, 00:05:37.092 "immediate_data_pool_size": 16384, 00:05:37.092 "data_out_pool_size": 2048 00:05:37.092 } 00:05:37.092 } 00:05:37.092 ] 00:05:37.092 } 00:05:37.092 ] 00:05:37.092 } 00:05:37.092 10:14:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:37.092 10:14:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2205923 00:05:37.092 10:14:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 2205923 ']' 00:05:37.092 10:14:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 2205923 00:05:37.092 10:14:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:37.092 10:14:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:37.092 10:14:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2205923 00:05:37.092 10:14:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:37.092 10:14:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:37.351 10:14:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2205923' 00:05:37.351 killing process with pid 2205923 00:05:37.351 10:14:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 2205923 00:05:37.351 10:14:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 2205923 00:05:37.608 10:14:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2206155 00:05:37.608 10:14:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:37.608 10:14:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:42.874 10:14:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2206155 00:05:42.874 10:14:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 2206155 ']' 00:05:42.874 10:14:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 2206155 00:05:42.874 10:14:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:42.874 10:14:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:42.874 10:14:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2206155 00:05:42.874 10:14:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:42.874 10:14:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:42.874 10:14:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2206155' 00:05:42.874 killing process with pid 2206155 00:05:42.874 10:14:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 2206155 00:05:42.874 10:14:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 2206155 
00:05:42.874 10:14:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:42.874 10:14:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:42.874 00:05:42.874 real 0m6.218s 00:05:42.874 user 0m5.911s 00:05:42.874 sys 0m0.567s 00:05:42.874 10:14:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.874 10:14:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:42.874 ************************************ 00:05:42.874 END TEST skip_rpc_with_json 00:05:42.874 ************************************ 00:05:42.874 10:14:27 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:42.874 10:14:27 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:42.874 10:14:27 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:42.874 10:14:27 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.874 10:14:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.874 ************************************ 00:05:42.874 START TEST skip_rpc_with_delay 00:05:42.874 ************************************ 00:05:42.874 10:14:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:42.874 10:14:27 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:42.874 10:14:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:42.874 10:14:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:42.874 10:14:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:42.874 10:14:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:42.874 10:14:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:42.874 10:14:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:42.874 10:14:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:42.874 10:14:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:42.874 10:14:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:42.874 10:14:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:42.874 10:14:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:42.874 [2024-07-14 10:14:27.844367] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:42.874 [2024-07-14 10:14:27.844424] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:42.874 10:14:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:42.874 10:14:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:42.874 10:14:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:42.874 10:14:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:42.875 00:05:42.875 real 0m0.066s 00:05:42.875 user 0m0.047s 00:05:42.875 sys 0m0.019s 00:05:42.875 10:14:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.875 10:14:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:42.875 ************************************ 00:05:42.875 END TEST skip_rpc_with_delay 00:05:42.875 ************************************ 00:05:43.134 10:14:27 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:43.134 10:14:27 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:43.134 10:14:27 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:43.134 10:14:27 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:43.134 10:14:27 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:43.134 10:14:27 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.134 10:14:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.134 ************************************ 00:05:43.134 START TEST exit_on_failed_rpc_init 00:05:43.134 ************************************ 00:05:43.134 10:14:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:05:43.134 10:14:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2207130 00:05:43.134 10:14:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2207130 00:05:43.134 10:14:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:43.134 10:14:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 2207130 ']' 00:05:43.134 10:14:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.134 10:14:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:43.134 10:14:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.134 10:14:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:43.134 10:14:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:43.134 [2024-07-14 10:14:27.977855] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:05:43.134 [2024-07-14 10:14:27.977894] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2207130 ] 00:05:43.134 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.134 [2024-07-14 10:14:28.042336] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.134 [2024-07-14 10:14:28.082978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.393 10:14:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:43.393 10:14:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:05:43.393 10:14:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:43.393 10:14:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:43.393 10:14:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:43.393 10:14:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:43.394 10:14:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:43.394 10:14:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:43.394 10:14:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:43.394 10:14:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:43.394 10:14:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:43.394 10:14:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:43.394 10:14:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:43.394 10:14:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:43.394 10:14:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:43.394 [2024-07-14 10:14:28.321187] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:05:43.394 [2024-07-14 10:14:28.321239] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2207141 ] 00:05:43.394 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.653 [2024-07-14 10:14:28.385344] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.653 [2024-07-14 10:14:28.425112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:43.653 [2024-07-14 10:14:28.425176] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:43.653 [2024-07-14 10:14:28.425185] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:43.653 [2024-07-14 10:14:28.425191] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:43.653 10:14:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:43.653 10:14:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:43.653 10:14:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:43.653 10:14:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:43.653 10:14:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:43.653 10:14:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:43.653 10:14:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:43.653 10:14:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2207130 00:05:43.653 10:14:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 2207130 ']' 00:05:43.653 10:14:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 2207130 00:05:43.653 10:14:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:05:43.653 10:14:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:43.653 10:14:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2207130 00:05:43.653 10:14:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:43.653 10:14:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:43.653 10:14:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2207130' 00:05:43.653 killing process with pid 2207130 00:05:43.653 10:14:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 2207130 00:05:43.653 10:14:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 2207130 00:05:43.913 00:05:43.913 real 0m0.907s 00:05:43.913 user 0m0.958s 00:05:43.913 sys 0m0.373s 00:05:43.913 10:14:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.913 10:14:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:43.913 ************************************ 00:05:43.913 END TEST exit_on_failed_rpc_init 00:05:43.913 ************************************ 00:05:43.913 10:14:28 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:43.913 10:14:28 skip_rpc -- 
rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:43.913 00:05:43.913 real 0m12.904s 00:05:43.913 user 0m12.166s 00:05:43.913 sys 0m1.473s 00:05:43.913 10:14:28 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.913 10:14:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.913 ************************************ 00:05:43.913 END TEST skip_rpc 00:05:43.913 ************************************ 00:05:44.172 10:14:28 -- common/autotest_common.sh@1142 -- # return 0 00:05:44.172 10:14:28 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:44.172 10:14:28 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:44.172 10:14:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.172 10:14:28 -- common/autotest_common.sh@10 -- # set +x 00:05:44.172 ************************************ 00:05:44.172 START TEST rpc_client 00:05:44.172 ************************************ 00:05:44.172 10:14:28 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:44.172 * Looking for test storage... 00:05:44.172 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:44.172 10:14:29 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:44.172 OK 00:05:44.172 10:14:29 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:44.172 00:05:44.172 real 0m0.106s 00:05:44.172 user 0m0.043s 00:05:44.172 sys 0m0.071s 00:05:44.172 10:14:29 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.172 10:14:29 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:44.172 ************************************ 00:05:44.172 END TEST rpc_client 00:05:44.173 ************************************ 00:05:44.173 10:14:29 -- common/autotest_common.sh@1142 -- # return 0 00:05:44.173 10:14:29 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:44.173 10:14:29 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:44.173 10:14:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.173 10:14:29 -- common/autotest_common.sh@10 -- # set +x 00:05:44.173 ************************************ 00:05:44.173 START TEST json_config 00:05:44.173 ************************************ 00:05:44.173 10:14:29 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:44.433 10:14:29 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:44.433 10:14:29 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:44.433 10:14:29 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:44.433 10:14:29 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:44.433 10:14:29 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:44.433 10:14:29 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:44.433 10:14:29 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:44.433 10:14:29 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:44.433 10:14:29 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:44.433 
10:14:29 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:44.433 10:14:29 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:44.433 10:14:29 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:44.433 10:14:29 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:44.433 10:14:29 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:44.433 10:14:29 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:44.433 10:14:29 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:44.433 10:14:29 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:44.433 10:14:29 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:44.433 10:14:29 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:44.433 10:14:29 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:44.433 10:14:29 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:44.433 10:14:29 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:44.433 10:14:29 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.433 10:14:29 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.433 10:14:29 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.433 10:14:29 json_config -- paths/export.sh@5 -- # export PATH 00:05:44.433 10:14:29 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.433 10:14:29 json_config -- nvmf/common.sh@47 -- # : 0 00:05:44.433 10:14:29 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:44.433 10:14:29 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:44.433 10:14:29 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:44.433 10:14:29 
json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:44.433 10:14:29 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:44.433 10:14:29 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:44.433 10:14:29 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:44.433 10:14:29 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:44.433 10:14:29 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:44.433 10:14:29 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:44.433 10:14:29 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:44.433 10:14:29 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:44.433 10:14:29 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:44.433 10:14:29 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:44.433 10:14:29 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:44.433 10:14:29 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:44.433 10:14:29 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:44.433 10:14:29 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:44.433 10:14:29 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:44.433 10:14:29 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:44.433 10:14:29 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:44.433 10:14:29 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:44.433 10:14:29 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:44.433 10:14:29 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:44.433 INFO: JSON configuration test init 00:05:44.433 10:14:29 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:44.433 10:14:29 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:44.433 10:14:29 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:44.433 10:14:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:44.433 10:14:29 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:44.433 10:14:29 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:44.433 10:14:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:44.433 10:14:29 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:44.433 10:14:29 json_config -- json_config/common.sh@9 -- # local app=target 00:05:44.433 10:14:29 json_config -- json_config/common.sh@10 -- # shift 00:05:44.433 10:14:29 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:44.433 10:14:29 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:44.433 10:14:29 
json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:44.433 10:14:29 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:44.433 10:14:29 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:44.433 10:14:29 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2207473 00:05:44.433 10:14:29 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:44.433 Waiting for target to run... 00:05:44.433 10:14:29 json_config -- json_config/common.sh@25 -- # waitforlisten 2207473 /var/tmp/spdk_tgt.sock 00:05:44.433 10:14:29 json_config -- common/autotest_common.sh@829 -- # '[' -z 2207473 ']' 00:05:44.433 10:14:29 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:44.433 10:14:29 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:44.433 10:14:29 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:44.433 10:14:29 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:44.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:44.434 10:14:29 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:44.434 10:14:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:44.434 [2024-07-14 10:14:29.268740] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:05:44.434 [2024-07-14 10:14:29.268782] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2207473 ] 00:05:44.434 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.003 [2024-07-14 10:14:29.710358] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.003 [2024-07-14 10:14:29.743840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.296 10:14:30 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:45.296 10:14:30 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:45.296 10:14:30 json_config -- json_config/common.sh@26 -- # echo '' 00:05:45.296 00:05:45.296 10:14:30 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:45.296 10:14:30 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:45.296 10:14:30 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:45.296 10:14:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:45.296 10:14:30 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:45.296 10:14:30 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:45.296 10:14:30 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:45.296 10:14:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:45.296 10:14:30 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:45.296 10:14:30 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:45.296 10:14:30 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:48.585 10:14:33 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:48.585 10:14:33 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:48.585 10:14:33 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:48.585 10:14:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:48.585 10:14:33 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:48.585 10:14:33 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:48.585 10:14:33 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:48.585 10:14:33 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:48.585 10:14:33 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:48.585 10:14:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:48.585 10:14:33 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:48.585 10:14:33 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:48.585 10:14:33 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:48.585 10:14:33 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:48.585 10:14:33 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:48.585 10:14:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:48.585 10:14:33 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:48.585 10:14:33 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:48.585 10:14:33 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:48.585 10:14:33 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:48.585 10:14:33 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:48.585 10:14:33 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:48.585 10:14:33 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:48.585 10:14:33 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:48.585 10:14:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:48.585 10:14:33 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:48.585 10:14:33 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:48.585 10:14:33 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:48.585 10:14:33 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:48.585 10:14:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:48.585 MallocForNvmf0 00:05:48.843 10:14:33 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:48.843 10:14:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:48.843 MallocForNvmf1 00:05:48.843 10:14:33 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:48.843 10:14:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:49.102 [2024-07-14 10:14:33.921133] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:49.102 10:14:33 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:49.102 10:14:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:49.361 10:14:34 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:49.361 10:14:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:49.361 10:14:34 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:49.361 10:14:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:49.621 10:14:34 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:49.621 10:14:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:49.880 [2024-07-14 10:14:34.659421] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:49.880 10:14:34 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:49.880 10:14:34 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:49.880 10:14:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:49.880 10:14:34 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:49.880 10:14:34 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:49.880 10:14:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:49.880 10:14:34 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:49.880 10:14:34 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:49.880 10:14:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:50.139 MallocBdevForConfigChangeCheck 00:05:50.139 10:14:34 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:50.139 10:14:34 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:50.139 10:14:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:50.139 10:14:34 json_config -- 
json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:50.139 10:14:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:50.398 10:14:35 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:50.398 INFO: shutting down applications... 00:05:50.398 10:14:35 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:50.398 10:14:35 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:50.398 10:14:35 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:50.398 10:14:35 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:52.305 Calling clear_iscsi_subsystem 00:05:52.305 Calling clear_nvmf_subsystem 00:05:52.305 Calling clear_nbd_subsystem 00:05:52.305 Calling clear_ublk_subsystem 00:05:52.305 Calling clear_vhost_blk_subsystem 00:05:52.305 Calling clear_vhost_scsi_subsystem 00:05:52.305 Calling clear_bdev_subsystem 00:05:52.305 10:14:36 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:52.305 10:14:36 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:52.305 10:14:36 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:52.305 10:14:36 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:52.305 10:14:36 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:52.305 10:14:36 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:52.305 10:14:37 json_config -- json_config/json_config.sh@345 -- # break 00:05:52.305 10:14:37 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:52.305 10:14:37 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:52.305 10:14:37 json_config -- json_config/common.sh@31 -- # local app=target 00:05:52.305 10:14:37 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:52.305 10:14:37 json_config -- json_config/common.sh@35 -- # [[ -n 2207473 ]] 00:05:52.305 10:14:37 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2207473 00:05:52.305 10:14:37 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:52.305 10:14:37 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:52.305 10:14:37 json_config -- json_config/common.sh@41 -- # kill -0 2207473 00:05:52.305 10:14:37 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:52.874 10:14:37 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:52.874 10:14:37 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:52.874 10:14:37 json_config -- json_config/common.sh@41 -- # kill -0 2207473 00:05:52.874 10:14:37 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:52.874 10:14:37 json_config -- json_config/common.sh@43 -- # break 00:05:52.874 10:14:37 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:52.874 10:14:37 json_config -- json_config/common.sh@53 -- # echo 'SPDK target 
shutdown done' 00:05:52.874 SPDK target shutdown done 00:05:52.874 10:14:37 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:52.874 INFO: relaunching applications... 00:05:52.874 10:14:37 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:52.874 10:14:37 json_config -- json_config/common.sh@9 -- # local app=target 00:05:52.874 10:14:37 json_config -- json_config/common.sh@10 -- # shift 00:05:52.874 10:14:37 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:52.874 10:14:37 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:52.874 10:14:37 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:52.874 10:14:37 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:52.874 10:14:37 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:52.874 10:14:37 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2208985 00:05:52.874 10:14:37 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:52.874 Waiting for target to run... 00:05:52.874 10:14:37 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:52.874 10:14:37 json_config -- json_config/common.sh@25 -- # waitforlisten 2208985 /var/tmp/spdk_tgt.sock 00:05:52.874 10:14:37 json_config -- common/autotest_common.sh@829 -- # '[' -z 2208985 ']' 00:05:52.874 10:14:37 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:52.874 10:14:37 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:52.874 10:14:37 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:52.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:52.874 10:14:37 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:52.874 10:14:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:52.874 [2024-07-14 10:14:37.807239] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:05:52.874 [2024-07-14 10:14:37.807286] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2208985 ] 00:05:52.874 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.444 [2024-07-14 10:14:38.260397] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.444 [2024-07-14 10:14:38.293146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.731 [2024-07-14 10:14:41.287271] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:56.731 [2024-07-14 10:14:41.319555] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:56.989 10:14:41 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:56.989 10:14:41 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:56.989 10:14:41 json_config -- json_config/common.sh@26 -- # echo '' 00:05:56.989 00:05:56.989 10:14:41 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:56.989 10:14:41 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:56.989 INFO: Checking if target configuration is the same... 00:05:56.989 10:14:41 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:56.989 10:14:41 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:56.989 10:14:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:56.989 + '[' 2 -ne 2 ']' 00:05:56.989 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:56.989 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:57.248 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:57.248 +++ basename /dev/fd/62 00:05:57.248 ++ mktemp /tmp/62.XXX 00:05:57.248 + tmp_file_1=/tmp/62.bVb 00:05:57.248 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:57.248 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:57.248 + tmp_file_2=/tmp/spdk_tgt_config.json.C8O 00:05:57.248 + ret=0 00:05:57.248 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:57.506 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:57.506 + diff -u /tmp/62.bVb /tmp/spdk_tgt_config.json.C8O 00:05:57.506 + echo 'INFO: JSON config files are the same' 00:05:57.506 INFO: JSON config files are the same 00:05:57.506 + rm /tmp/62.bVb /tmp/spdk_tgt_config.json.C8O 00:05:57.506 + exit 0 00:05:57.506 10:14:42 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:57.506 10:14:42 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:57.506 INFO: changing configuration and checking if this can be detected... 
00:05:57.506 10:14:42 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:57.506 10:14:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:57.765 10:14:42 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:57.765 10:14:42 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:57.765 10:14:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:57.765 + '[' 2 -ne 2 ']' 00:05:57.765 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:57.765 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:57.765 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:57.765 +++ basename /dev/fd/62 00:05:57.765 ++ mktemp /tmp/62.XXX 00:05:57.765 + tmp_file_1=/tmp/62.i3s 00:05:57.765 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:57.765 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:57.765 + tmp_file_2=/tmp/spdk_tgt_config.json.KZF 00:05:57.765 + ret=0 00:05:57.765 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:58.023 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:58.023 + diff -u /tmp/62.i3s /tmp/spdk_tgt_config.json.KZF 00:05:58.023 + ret=1 00:05:58.023 + echo '=== Start of file: /tmp/62.i3s ===' 00:05:58.023 + cat /tmp/62.i3s 00:05:58.023 + echo '=== End of file: /tmp/62.i3s ===' 00:05:58.023 + echo '' 00:05:58.023 + echo '=== Start of file: /tmp/spdk_tgt_config.json.KZF ===' 00:05:58.023 + cat /tmp/spdk_tgt_config.json.KZF 00:05:58.023 + echo '=== End of file: /tmp/spdk_tgt_config.json.KZF ===' 00:05:58.023 + echo '' 00:05:58.023 + rm /tmp/62.i3s /tmp/spdk_tgt_config.json.KZF 00:05:58.023 + exit 1 00:05:58.023 10:14:42 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:58.023 INFO: configuration change detected. 
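[Editor's note — not part of the captured output] The change-detection step above boils down to snapshotting the live configuration over the RPC socket, mutating it, and diffing the snapshots. A minimal sketch of doing the same by hand, assuming the same RPC socket path and workspace layout as this run (the real test additionally normalizes both snapshots with config_filter.py before diffing):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  SOCK=/var/tmp/spdk_tgt.sock
  "$RPC" -s "$SOCK" save_config > /tmp/before.json                      # snapshot the running target's config
  "$RPC" -s "$SOCK" bdev_malloc_delete MallocBdevForConfigChangeCheck   # mutate it, as json_config.sh@386 does above
  "$RPC" -s "$SOCK" save_config > /tmp/after.json                       # snapshot again
  diff -u /tmp/before.json /tmp/after.json || echo 'INFO: configuration change detected.'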
00:05:58.023 10:14:42 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:58.023 10:14:42 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:58.023 10:14:42 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:58.023 10:14:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:58.023 10:14:42 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:58.023 10:14:42 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:58.023 10:14:42 json_config -- json_config/json_config.sh@317 -- # [[ -n 2208985 ]] 00:05:58.023 10:14:42 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:58.023 10:14:42 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:58.023 10:14:42 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:58.023 10:14:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:58.023 10:14:42 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:58.023 10:14:42 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:58.023 10:14:42 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:58.023 10:14:42 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:58.023 10:14:42 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:58.023 10:14:42 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:58.023 10:14:42 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:58.023 10:14:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:58.023 10:14:42 json_config -- json_config/json_config.sh@323 -- # killprocess 2208985 00:05:58.023 10:14:42 json_config -- common/autotest_common.sh@948 -- # '[' -z 2208985 ']' 00:05:58.023 10:14:42 json_config -- common/autotest_common.sh@952 -- # kill -0 2208985 00:05:58.023 10:14:42 json_config -- common/autotest_common.sh@953 -- # uname 00:05:58.023 10:14:42 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:58.024 10:14:42 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2208985 00:05:58.024 10:14:42 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:58.024 10:14:42 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:58.024 10:14:42 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2208985' 00:05:58.024 killing process with pid 2208985 00:05:58.024 10:14:42 json_config -- common/autotest_common.sh@967 -- # kill 2208985 00:05:58.024 10:14:42 json_config -- common/autotest_common.sh@972 -- # wait 2208985 00:05:59.925 10:14:44 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:59.925 10:14:44 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:59.925 10:14:44 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:59.925 10:14:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:59.925 10:14:44 json_config -- json_config/json_config.sh@328 -- # return 0 00:05:59.925 10:14:44 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:59.925 INFO: Success 00:05:59.925 00:05:59.925 real 0m15.385s 
00:05:59.925 user 0m16.167s 00:05:59.925 sys 0m2.060s 00:05:59.925 10:14:44 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.925 10:14:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:59.925 ************************************ 00:05:59.925 END TEST json_config 00:05:59.925 ************************************ 00:05:59.925 10:14:44 -- common/autotest_common.sh@1142 -- # return 0 00:05:59.925 10:14:44 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:59.925 10:14:44 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:59.925 10:14:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.925 10:14:44 -- common/autotest_common.sh@10 -- # set +x 00:05:59.925 ************************************ 00:05:59.925 START TEST json_config_extra_key 00:05:59.925 ************************************ 00:05:59.925 10:14:44 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:59.925 10:14:44 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:59.925 10:14:44 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:59.925 10:14:44 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:59.925 10:14:44 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:59.925 10:14:44 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:59.925 10:14:44 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:59.925 10:14:44 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:59.925 10:14:44 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:59.925 10:14:44 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:59.925 10:14:44 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:59.925 10:14:44 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:59.925 10:14:44 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:59.925 10:14:44 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:59.925 10:14:44 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:59.925 10:14:44 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:59.925 10:14:44 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:59.925 10:14:44 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:59.925 10:14:44 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:59.925 10:14:44 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:59.925 10:14:44 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:59.925 10:14:44 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:59.925 10:14:44 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:59.925 10:14:44 json_config_extra_key -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.925 10:14:44 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.925 10:14:44 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.925 10:14:44 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:59.925 10:14:44 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.925 10:14:44 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:59.925 10:14:44 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:59.925 10:14:44 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:59.925 10:14:44 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:59.925 10:14:44 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:59.925 10:14:44 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:59.925 10:14:44 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:59.925 10:14:44 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:59.925 10:14:44 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:59.925 10:14:44 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:59.925 10:14:44 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:59.925 10:14:44 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:59.925 10:14:44 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:59.925 10:14:44 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:59.926 10:14:44 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:59.926 10:14:44 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:59.926 10:14:44 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:59.926 10:14:44 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:59.926 10:14:44 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:59.926 10:14:44 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:59.926 INFO: launching applications... 00:05:59.926 10:14:44 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:59.926 10:14:44 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:59.926 10:14:44 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:59.926 10:14:44 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:59.926 10:14:44 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:59.926 10:14:44 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:59.926 10:14:44 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:59.926 10:14:44 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:59.926 10:14:44 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2210263 00:05:59.926 10:14:44 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:59.926 Waiting for target to run... 00:05:59.926 10:14:44 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2210263 /var/tmp/spdk_tgt.sock 00:05:59.926 10:14:44 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 2210263 ']' 00:05:59.926 10:14:44 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:59.926 10:14:44 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:59.926 10:14:44 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:59.926 10:14:44 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:59.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:59.926 10:14:44 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:59.926 10:14:44 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:59.926 [2024-07-14 10:14:44.717354] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:05:59.926 [2024-07-14 10:14:44.717405] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2210263 ] 00:05:59.926 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.184 [2024-07-14 10:14:45.163755] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.442 [2024-07-14 10:14:45.196390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.701 10:14:45 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:00.701 10:14:45 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:06:00.701 10:14:45 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:00.701 00:06:00.701 10:14:45 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:00.701 INFO: shutting down applications... 00:06:00.701 10:14:45 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:00.701 10:14:45 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:00.701 10:14:45 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:00.701 10:14:45 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2210263 ]] 00:06:00.701 10:14:45 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2210263 00:06:00.701 10:14:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:00.701 10:14:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:00.701 10:14:45 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2210263 00:06:00.701 10:14:45 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:01.268 10:14:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:01.268 10:14:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:01.268 10:14:46 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2210263 00:06:01.268 10:14:46 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:01.268 10:14:46 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:01.268 10:14:46 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:01.268 10:14:46 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:01.268 SPDK target shutdown done 00:06:01.268 10:14:46 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:01.268 Success 00:06:01.268 00:06:01.268 real 0m1.453s 00:06:01.268 user 0m1.064s 00:06:01.268 sys 0m0.531s 00:06:01.268 10:14:46 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.268 10:14:46 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:01.268 ************************************ 00:06:01.268 END TEST json_config_extra_key 00:06:01.268 ************************************ 00:06:01.268 10:14:46 -- common/autotest_common.sh@1142 -- # return 0 00:06:01.268 10:14:46 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:01.268 10:14:46 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:01.268 10:14:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.268 10:14:46 -- 
common/autotest_common.sh@10 -- # set +x 00:06:01.268 ************************************ 00:06:01.269 START TEST alias_rpc 00:06:01.269 ************************************ 00:06:01.269 10:14:46 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:01.269 * Looking for test storage... 00:06:01.269 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:01.269 10:14:46 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:01.269 10:14:46 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2210544 00:06:01.269 10:14:46 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:01.269 10:14:46 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2210544 00:06:01.269 10:14:46 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 2210544 ']' 00:06:01.269 10:14:46 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.269 10:14:46 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:01.269 10:14:46 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.269 10:14:46 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:01.269 10:14:46 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.269 [2024-07-14 10:14:46.222004] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:01.269 [2024-07-14 10:14:46.222057] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2210544 ] 00:06:01.269 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.528 [2024-07-14 10:14:46.289172] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.528 [2024-07-14 10:14:46.330253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.788 10:14:46 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:01.788 10:14:46 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:01.788 10:14:46 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:01.788 10:14:46 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2210544 00:06:01.788 10:14:46 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 2210544 ']' 00:06:01.788 10:14:46 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 2210544 00:06:01.788 10:14:46 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:06:01.788 10:14:46 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:01.788 10:14:46 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2210544 00:06:01.788 10:14:46 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:01.788 10:14:46 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:01.788 10:14:46 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2210544' 00:06:01.788 killing process with pid 2210544 00:06:01.788 10:14:46 alias_rpc -- 
common/autotest_common.sh@967 -- # kill 2210544 00:06:01.788 10:14:46 alias_rpc -- common/autotest_common.sh@972 -- # wait 2210544 00:06:02.356 00:06:02.356 real 0m0.960s 00:06:02.356 user 0m0.946s 00:06:02.356 sys 0m0.382s 00:06:02.356 10:14:47 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:02.356 10:14:47 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.356 ************************************ 00:06:02.356 END TEST alias_rpc 00:06:02.356 ************************************ 00:06:02.356 10:14:47 -- common/autotest_common.sh@1142 -- # return 0 00:06:02.356 10:14:47 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:02.356 10:14:47 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:02.356 10:14:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:02.356 10:14:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.356 10:14:47 -- common/autotest_common.sh@10 -- # set +x 00:06:02.356 ************************************ 00:06:02.356 START TEST spdkcli_tcp 00:06:02.356 ************************************ 00:06:02.356 10:14:47 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:02.356 * Looking for test storage... 00:06:02.356 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:02.356 10:14:47 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:02.356 10:14:47 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:02.356 10:14:47 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:02.356 10:14:47 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:02.356 10:14:47 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:02.357 10:14:47 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:02.357 10:14:47 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:02.357 10:14:47 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:02.357 10:14:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:02.357 10:14:47 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2210824 00:06:02.357 10:14:47 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2210824 00:06:02.357 10:14:47 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:02.357 10:14:47 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 2210824 ']' 00:06:02.357 10:14:47 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.357 10:14:47 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:02.357 10:14:47 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.357 10:14:47 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:02.357 10:14:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:02.357 [2024-07-14 10:14:47.264970] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:06:02.357 [2024-07-14 10:14:47.265017] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2210824 ] 00:06:02.357 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.357 [2024-07-14 10:14:47.330046] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:02.616 [2024-07-14 10:14:47.371206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.616 [2024-07-14 10:14:47.371208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.616 10:14:47 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:02.616 10:14:47 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:06:02.616 10:14:47 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2210838 00:06:02.616 10:14:47 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:02.616 10:14:47 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:02.875 [ 00:06:02.876 "bdev_malloc_delete", 00:06:02.876 "bdev_malloc_create", 00:06:02.876 "bdev_null_resize", 00:06:02.876 "bdev_null_delete", 00:06:02.876 "bdev_null_create", 00:06:02.876 "bdev_nvme_cuse_unregister", 00:06:02.876 "bdev_nvme_cuse_register", 00:06:02.876 "bdev_opal_new_user", 00:06:02.876 "bdev_opal_set_lock_state", 00:06:02.876 "bdev_opal_delete", 00:06:02.876 "bdev_opal_get_info", 00:06:02.876 "bdev_opal_create", 00:06:02.876 "bdev_nvme_opal_revert", 00:06:02.876 "bdev_nvme_opal_init", 00:06:02.876 "bdev_nvme_send_cmd", 00:06:02.876 "bdev_nvme_get_path_iostat", 00:06:02.876 "bdev_nvme_get_mdns_discovery_info", 00:06:02.876 "bdev_nvme_stop_mdns_discovery", 00:06:02.876 "bdev_nvme_start_mdns_discovery", 00:06:02.876 "bdev_nvme_set_multipath_policy", 00:06:02.876 "bdev_nvme_set_preferred_path", 00:06:02.876 "bdev_nvme_get_io_paths", 00:06:02.876 "bdev_nvme_remove_error_injection", 00:06:02.876 "bdev_nvme_add_error_injection", 00:06:02.876 "bdev_nvme_get_discovery_info", 00:06:02.876 "bdev_nvme_stop_discovery", 00:06:02.876 "bdev_nvme_start_discovery", 00:06:02.876 "bdev_nvme_get_controller_health_info", 00:06:02.876 "bdev_nvme_disable_controller", 00:06:02.876 "bdev_nvme_enable_controller", 00:06:02.876 "bdev_nvme_reset_controller", 00:06:02.876 "bdev_nvme_get_transport_statistics", 00:06:02.876 "bdev_nvme_apply_firmware", 00:06:02.876 "bdev_nvme_detach_controller", 00:06:02.876 "bdev_nvme_get_controllers", 00:06:02.876 "bdev_nvme_attach_controller", 00:06:02.876 "bdev_nvme_set_hotplug", 00:06:02.876 "bdev_nvme_set_options", 00:06:02.876 "bdev_passthru_delete", 00:06:02.876 "bdev_passthru_create", 00:06:02.876 "bdev_lvol_set_parent_bdev", 00:06:02.876 "bdev_lvol_set_parent", 00:06:02.876 "bdev_lvol_check_shallow_copy", 00:06:02.876 "bdev_lvol_start_shallow_copy", 00:06:02.876 "bdev_lvol_grow_lvstore", 00:06:02.876 "bdev_lvol_get_lvols", 00:06:02.876 "bdev_lvol_get_lvstores", 00:06:02.876 "bdev_lvol_delete", 00:06:02.876 "bdev_lvol_set_read_only", 00:06:02.876 "bdev_lvol_resize", 00:06:02.876 "bdev_lvol_decouple_parent", 00:06:02.876 "bdev_lvol_inflate", 00:06:02.876 "bdev_lvol_rename", 00:06:02.876 "bdev_lvol_clone_bdev", 00:06:02.876 "bdev_lvol_clone", 00:06:02.876 "bdev_lvol_snapshot", 00:06:02.876 "bdev_lvol_create", 00:06:02.876 "bdev_lvol_delete_lvstore", 00:06:02.876 
"bdev_lvol_rename_lvstore", 00:06:02.876 "bdev_lvol_create_lvstore", 00:06:02.876 "bdev_raid_set_options", 00:06:02.876 "bdev_raid_remove_base_bdev", 00:06:02.876 "bdev_raid_add_base_bdev", 00:06:02.876 "bdev_raid_delete", 00:06:02.876 "bdev_raid_create", 00:06:02.876 "bdev_raid_get_bdevs", 00:06:02.876 "bdev_error_inject_error", 00:06:02.876 "bdev_error_delete", 00:06:02.876 "bdev_error_create", 00:06:02.876 "bdev_split_delete", 00:06:02.876 "bdev_split_create", 00:06:02.876 "bdev_delay_delete", 00:06:02.876 "bdev_delay_create", 00:06:02.876 "bdev_delay_update_latency", 00:06:02.876 "bdev_zone_block_delete", 00:06:02.876 "bdev_zone_block_create", 00:06:02.876 "blobfs_create", 00:06:02.876 "blobfs_detect", 00:06:02.876 "blobfs_set_cache_size", 00:06:02.876 "bdev_aio_delete", 00:06:02.876 "bdev_aio_rescan", 00:06:02.876 "bdev_aio_create", 00:06:02.876 "bdev_ftl_set_property", 00:06:02.876 "bdev_ftl_get_properties", 00:06:02.876 "bdev_ftl_get_stats", 00:06:02.876 "bdev_ftl_unmap", 00:06:02.876 "bdev_ftl_unload", 00:06:02.876 "bdev_ftl_delete", 00:06:02.876 "bdev_ftl_load", 00:06:02.876 "bdev_ftl_create", 00:06:02.876 "bdev_virtio_attach_controller", 00:06:02.876 "bdev_virtio_scsi_get_devices", 00:06:02.876 "bdev_virtio_detach_controller", 00:06:02.876 "bdev_virtio_blk_set_hotplug", 00:06:02.876 "bdev_iscsi_delete", 00:06:02.876 "bdev_iscsi_create", 00:06:02.876 "bdev_iscsi_set_options", 00:06:02.876 "accel_error_inject_error", 00:06:02.876 "ioat_scan_accel_module", 00:06:02.876 "dsa_scan_accel_module", 00:06:02.876 "iaa_scan_accel_module", 00:06:02.876 "vfu_virtio_create_scsi_endpoint", 00:06:02.876 "vfu_virtio_scsi_remove_target", 00:06:02.876 "vfu_virtio_scsi_add_target", 00:06:02.876 "vfu_virtio_create_blk_endpoint", 00:06:02.876 "vfu_virtio_delete_endpoint", 00:06:02.876 "keyring_file_remove_key", 00:06:02.876 "keyring_file_add_key", 00:06:02.876 "keyring_linux_set_options", 00:06:02.876 "iscsi_get_histogram", 00:06:02.876 "iscsi_enable_histogram", 00:06:02.876 "iscsi_set_options", 00:06:02.876 "iscsi_get_auth_groups", 00:06:02.876 "iscsi_auth_group_remove_secret", 00:06:02.876 "iscsi_auth_group_add_secret", 00:06:02.876 "iscsi_delete_auth_group", 00:06:02.876 "iscsi_create_auth_group", 00:06:02.876 "iscsi_set_discovery_auth", 00:06:02.876 "iscsi_get_options", 00:06:02.876 "iscsi_target_node_request_logout", 00:06:02.876 "iscsi_target_node_set_redirect", 00:06:02.876 "iscsi_target_node_set_auth", 00:06:02.876 "iscsi_target_node_add_lun", 00:06:02.876 "iscsi_get_stats", 00:06:02.876 "iscsi_get_connections", 00:06:02.876 "iscsi_portal_group_set_auth", 00:06:02.876 "iscsi_start_portal_group", 00:06:02.876 "iscsi_delete_portal_group", 00:06:02.876 "iscsi_create_portal_group", 00:06:02.876 "iscsi_get_portal_groups", 00:06:02.876 "iscsi_delete_target_node", 00:06:02.876 "iscsi_target_node_remove_pg_ig_maps", 00:06:02.876 "iscsi_target_node_add_pg_ig_maps", 00:06:02.876 "iscsi_create_target_node", 00:06:02.876 "iscsi_get_target_nodes", 00:06:02.876 "iscsi_delete_initiator_group", 00:06:02.876 "iscsi_initiator_group_remove_initiators", 00:06:02.876 "iscsi_initiator_group_add_initiators", 00:06:02.876 "iscsi_create_initiator_group", 00:06:02.876 "iscsi_get_initiator_groups", 00:06:02.876 "nvmf_set_crdt", 00:06:02.876 "nvmf_set_config", 00:06:02.876 "nvmf_set_max_subsystems", 00:06:02.876 "nvmf_stop_mdns_prr", 00:06:02.876 "nvmf_publish_mdns_prr", 00:06:02.876 "nvmf_subsystem_get_listeners", 00:06:02.876 "nvmf_subsystem_get_qpairs", 00:06:02.876 "nvmf_subsystem_get_controllers", 00:06:02.876 
"nvmf_get_stats", 00:06:02.876 "nvmf_get_transports", 00:06:02.876 "nvmf_create_transport", 00:06:02.876 "nvmf_get_targets", 00:06:02.876 "nvmf_delete_target", 00:06:02.876 "nvmf_create_target", 00:06:02.876 "nvmf_subsystem_allow_any_host", 00:06:02.876 "nvmf_subsystem_remove_host", 00:06:02.876 "nvmf_subsystem_add_host", 00:06:02.876 "nvmf_ns_remove_host", 00:06:02.876 "nvmf_ns_add_host", 00:06:02.876 "nvmf_subsystem_remove_ns", 00:06:02.876 "nvmf_subsystem_add_ns", 00:06:02.876 "nvmf_subsystem_listener_set_ana_state", 00:06:02.876 "nvmf_discovery_get_referrals", 00:06:02.876 "nvmf_discovery_remove_referral", 00:06:02.876 "nvmf_discovery_add_referral", 00:06:02.876 "nvmf_subsystem_remove_listener", 00:06:02.876 "nvmf_subsystem_add_listener", 00:06:02.876 "nvmf_delete_subsystem", 00:06:02.876 "nvmf_create_subsystem", 00:06:02.876 "nvmf_get_subsystems", 00:06:02.876 "env_dpdk_get_mem_stats", 00:06:02.876 "nbd_get_disks", 00:06:02.876 "nbd_stop_disk", 00:06:02.876 "nbd_start_disk", 00:06:02.876 "ublk_recover_disk", 00:06:02.876 "ublk_get_disks", 00:06:02.876 "ublk_stop_disk", 00:06:02.876 "ublk_start_disk", 00:06:02.876 "ublk_destroy_target", 00:06:02.876 "ublk_create_target", 00:06:02.876 "virtio_blk_create_transport", 00:06:02.876 "virtio_blk_get_transports", 00:06:02.876 "vhost_controller_set_coalescing", 00:06:02.876 "vhost_get_controllers", 00:06:02.876 "vhost_delete_controller", 00:06:02.876 "vhost_create_blk_controller", 00:06:02.876 "vhost_scsi_controller_remove_target", 00:06:02.876 "vhost_scsi_controller_add_target", 00:06:02.876 "vhost_start_scsi_controller", 00:06:02.876 "vhost_create_scsi_controller", 00:06:02.876 "thread_set_cpumask", 00:06:02.876 "framework_get_governor", 00:06:02.876 "framework_get_scheduler", 00:06:02.876 "framework_set_scheduler", 00:06:02.876 "framework_get_reactors", 00:06:02.876 "thread_get_io_channels", 00:06:02.876 "thread_get_pollers", 00:06:02.876 "thread_get_stats", 00:06:02.876 "framework_monitor_context_switch", 00:06:02.876 "spdk_kill_instance", 00:06:02.876 "log_enable_timestamps", 00:06:02.876 "log_get_flags", 00:06:02.876 "log_clear_flag", 00:06:02.876 "log_set_flag", 00:06:02.876 "log_get_level", 00:06:02.876 "log_set_level", 00:06:02.876 "log_get_print_level", 00:06:02.876 "log_set_print_level", 00:06:02.876 "framework_enable_cpumask_locks", 00:06:02.876 "framework_disable_cpumask_locks", 00:06:02.876 "framework_wait_init", 00:06:02.876 "framework_start_init", 00:06:02.876 "scsi_get_devices", 00:06:02.876 "bdev_get_histogram", 00:06:02.876 "bdev_enable_histogram", 00:06:02.876 "bdev_set_qos_limit", 00:06:02.876 "bdev_set_qd_sampling_period", 00:06:02.876 "bdev_get_bdevs", 00:06:02.876 "bdev_reset_iostat", 00:06:02.876 "bdev_get_iostat", 00:06:02.876 "bdev_examine", 00:06:02.876 "bdev_wait_for_examine", 00:06:02.877 "bdev_set_options", 00:06:02.877 "notify_get_notifications", 00:06:02.877 "notify_get_types", 00:06:02.877 "accel_get_stats", 00:06:02.877 "accel_set_options", 00:06:02.877 "accel_set_driver", 00:06:02.877 "accel_crypto_key_destroy", 00:06:02.877 "accel_crypto_keys_get", 00:06:02.877 "accel_crypto_key_create", 00:06:02.877 "accel_assign_opc", 00:06:02.877 "accel_get_module_info", 00:06:02.877 "accel_get_opc_assignments", 00:06:02.877 "vmd_rescan", 00:06:02.877 "vmd_remove_device", 00:06:02.877 "vmd_enable", 00:06:02.877 "sock_get_default_impl", 00:06:02.877 "sock_set_default_impl", 00:06:02.877 "sock_impl_set_options", 00:06:02.877 "sock_impl_get_options", 00:06:02.877 "iobuf_get_stats", 00:06:02.877 "iobuf_set_options", 
00:06:02.877 "keyring_get_keys", 00:06:02.877 "framework_get_pci_devices", 00:06:02.877 "framework_get_config", 00:06:02.877 "framework_get_subsystems", 00:06:02.877 "vfu_tgt_set_base_path", 00:06:02.877 "trace_get_info", 00:06:02.877 "trace_get_tpoint_group_mask", 00:06:02.877 "trace_disable_tpoint_group", 00:06:02.877 "trace_enable_tpoint_group", 00:06:02.877 "trace_clear_tpoint_mask", 00:06:02.877 "trace_set_tpoint_mask", 00:06:02.877 "spdk_get_version", 00:06:02.877 "rpc_get_methods" 00:06:02.877 ] 00:06:02.877 10:14:47 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:02.877 10:14:47 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:02.877 10:14:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:02.877 10:14:47 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:02.877 10:14:47 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2210824 00:06:02.877 10:14:47 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 2210824 ']' 00:06:02.877 10:14:47 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 2210824 00:06:02.877 10:14:47 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:06:02.877 10:14:47 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:02.877 10:14:47 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2210824 00:06:02.877 10:14:47 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:02.877 10:14:47 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:02.877 10:14:47 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2210824' 00:06:02.877 killing process with pid 2210824 00:06:02.877 10:14:47 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 2210824 00:06:02.877 10:14:47 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 2210824 00:06:03.136 00:06:03.136 real 0m0.984s 00:06:03.136 user 0m1.664s 00:06:03.136 sys 0m0.395s 00:06:03.136 10:14:48 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.136 10:14:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:03.137 ************************************ 00:06:03.137 END TEST spdkcli_tcp 00:06:03.137 ************************************ 00:06:03.435 10:14:48 -- common/autotest_common.sh@1142 -- # return 0 00:06:03.435 10:14:48 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:03.435 10:14:48 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:03.435 10:14:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.435 10:14:48 -- common/autotest_common.sh@10 -- # set +x 00:06:03.435 ************************************ 00:06:03.435 START TEST dpdk_mem_utility 00:06:03.435 ************************************ 00:06:03.435 10:14:48 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:03.435 * Looking for test storage... 
00:06:03.435 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:03.435 10:14:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:03.435 10:14:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2211120 00:06:03.435 10:14:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2211120 00:06:03.435 10:14:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:03.435 10:14:48 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 2211120 ']' 00:06:03.435 10:14:48 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.435 10:14:48 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:03.435 10:14:48 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.435 10:14:48 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:03.435 10:14:48 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:03.435 [2024-07-14 10:14:48.308362] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:03.435 [2024-07-14 10:14:48.308409] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2211120 ] 00:06:03.435 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.435 [2024-07-14 10:14:48.377083] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.696 [2024-07-14 10:14:48.417729] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.696 10:14:48 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:03.696 10:14:48 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:06:03.696 10:14:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:03.696 10:14:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:03.696 10:14:48 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:03.696 10:14:48 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:03.696 { 00:06:03.696 "filename": "/tmp/spdk_mem_dump.txt" 00:06:03.696 } 00:06:03.696 10:14:48 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:03.696 10:14:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:03.696 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:03.696 1 heaps totaling size 814.000000 MiB 00:06:03.696 size: 814.000000 MiB heap id: 0 00:06:03.696 end heaps---------- 00:06:03.696 8 mempools totaling size 598.116089 MiB 00:06:03.696 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:03.696 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:03.696 size: 84.521057 MiB name: bdev_io_2211120 00:06:03.696 size: 51.011292 MiB name: evtpool_2211120 00:06:03.696 
size: 50.003479 MiB name: msgpool_2211120 00:06:03.696 size: 21.763794 MiB name: PDU_Pool 00:06:03.696 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:03.696 size: 0.026123 MiB name: Session_Pool 00:06:03.696 end mempools------- 00:06:03.696 6 memzones totaling size 4.142822 MiB 00:06:03.696 size: 1.000366 MiB name: RG_ring_0_2211120 00:06:03.696 size: 1.000366 MiB name: RG_ring_1_2211120 00:06:03.696 size: 1.000366 MiB name: RG_ring_4_2211120 00:06:03.696 size: 1.000366 MiB name: RG_ring_5_2211120 00:06:03.696 size: 0.125366 MiB name: RG_ring_2_2211120 00:06:03.696 size: 0.015991 MiB name: RG_ring_3_2211120 00:06:03.696 end memzones------- 00:06:03.696 10:14:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:03.956 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:03.956 list of free elements. size: 12.519348 MiB 00:06:03.956 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:03.956 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:03.956 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:03.956 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:03.956 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:03.956 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:03.956 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:03.956 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:03.956 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:03.956 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:03.956 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:03.956 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:03.956 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:03.956 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:03.956 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:03.956 list of standard malloc elements. 
size: 199.218079 MiB 00:06:03.956 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:03.956 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:03.957 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:03.957 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:03.957 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:03.957 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:03.957 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:03.957 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:03.957 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:03.957 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:03.957 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:03.957 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:03.957 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:03.957 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:03.957 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:03.957 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:03.957 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:03.957 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:03.957 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:03.957 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:03.957 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:03.957 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:03.957 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:03.957 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:03.957 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:03.957 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:03.957 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:03.957 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:03.957 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:03.957 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:03.957 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:03.957 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:03.957 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:03.957 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:03.957 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:03.957 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:03.957 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:03.957 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:03.957 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:03.957 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:03.957 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:03.957 list of memzone associated elements. 
size: 602.262573 MiB 00:06:03.957 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:03.957 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:03.957 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:03.957 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:03.957 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:03.957 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2211120_0 00:06:03.957 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:03.957 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2211120_0 00:06:03.957 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:03.957 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2211120_0 00:06:03.957 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:03.957 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:03.957 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:03.957 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:03.957 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:03.957 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2211120 00:06:03.957 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:03.957 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2211120 00:06:03.957 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:03.957 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2211120 00:06:03.957 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:03.957 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:03.957 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:03.957 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:03.957 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:03.957 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:03.957 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:03.957 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:03.957 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:03.957 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2211120 00:06:03.957 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:03.957 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2211120 00:06:03.957 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:03.957 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2211120 00:06:03.957 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:03.957 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2211120 00:06:03.957 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:03.957 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2211120 00:06:03.957 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:03.957 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:03.957 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:03.957 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:03.957 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:03.957 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:03.957 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:03.957 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_2211120 00:06:03.957 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:03.957 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:03.957 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:03.957 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:03.957 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:03.957 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2211120 00:06:03.957 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:03.957 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:03.957 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:03.957 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2211120 00:06:03.957 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:03.957 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2211120 00:06:03.957 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:03.957 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:03.957 10:14:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:03.957 10:14:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2211120 00:06:03.957 10:14:48 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 2211120 ']' 00:06:03.957 10:14:48 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 2211120 00:06:03.957 10:14:48 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:06:03.957 10:14:48 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:03.957 10:14:48 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2211120 00:06:03.957 10:14:48 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:03.957 10:14:48 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:03.957 10:14:48 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2211120' 00:06:03.957 killing process with pid 2211120 00:06:03.957 10:14:48 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 2211120 00:06:03.957 10:14:48 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 2211120 00:06:04.217 00:06:04.217 real 0m0.881s 00:06:04.217 user 0m0.807s 00:06:04.217 sys 0m0.381s 00:06:04.217 10:14:49 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:04.217 10:14:49 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:04.217 ************************************ 00:06:04.217 END TEST dpdk_mem_utility 00:06:04.217 ************************************ 00:06:04.217 10:14:49 -- common/autotest_common.sh@1142 -- # return 0 00:06:04.217 10:14:49 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:04.217 10:14:49 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:04.217 10:14:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.217 10:14:49 -- common/autotest_common.sh@10 -- # set +x 00:06:04.217 ************************************ 00:06:04.217 START TEST event 00:06:04.217 ************************************ 00:06:04.217 10:14:49 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:04.217 * Looking for test storage... 
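The dpdk_mem_utility run above asks the target to dump its DPDK memory state with env_dpdk_get_mem_stats (the dump lands in /tmp/spdk_mem_dump.txt) and then post-processes it with scripts/dpdk_mem_info.py, first as a summary and then per heap. A minimal sketch of the same two steps, assuming a running spdk_tgt on the default /var/tmp/spdk.sock and a local checkout for the script paths:

./scripts/rpc.py env_dpdk_get_mem_stats        # writes the raw dump to /tmp/spdk_mem_dump.txt
./scripts/dpdk_mem_info.py                     # heaps / mempools / memzones summary, as printed above
./scripts/dpdk_mem_info.py -m 0                # free- and malloc-element detail for heap id 0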
00:06:04.477 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:04.477 10:14:49 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:04.477 10:14:49 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:04.477 10:14:49 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:04.477 10:14:49 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:04.477 10:14:49 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.477 10:14:49 event -- common/autotest_common.sh@10 -- # set +x 00:06:04.477 ************************************ 00:06:04.477 START TEST event_perf 00:06:04.477 ************************************ 00:06:04.477 10:14:49 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:04.477 Running I/O for 1 seconds...[2024-07-14 10:14:49.258511] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:04.478 [2024-07-14 10:14:49.258591] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2211284 ] 00:06:04.478 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.478 [2024-07-14 10:14:49.332515] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:04.478 [2024-07-14 10:14:49.375674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.478 [2024-07-14 10:14:49.375782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:04.478 [2024-07-14 10:14:49.375889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.478 Running I/O for 1 seconds...[2024-07-14 10:14:49.375890] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:05.856 00:06:05.856 lcore 0: 208793 00:06:05.856 lcore 1: 208791 00:06:05.856 lcore 2: 208792 00:06:05.856 lcore 3: 208793 00:06:05.856 done. 00:06:05.856 00:06:05.856 real 0m1.205s 00:06:05.856 user 0m4.111s 00:06:05.856 sys 0m0.092s 00:06:05.856 10:14:50 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.856 10:14:50 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:05.856 ************************************ 00:06:05.856 END TEST event_perf 00:06:05.856 ************************************ 00:06:05.856 10:14:50 event -- common/autotest_common.sh@1142 -- # return 0 00:06:05.856 10:14:50 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:05.856 10:14:50 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:05.856 10:14:50 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.856 10:14:50 event -- common/autotest_common.sh@10 -- # set +x 00:06:05.856 ************************************ 00:06:05.856 START TEST event_reactor 00:06:05.856 ************************************ 00:06:05.856 10:14:50 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:05.856 [2024-07-14 10:14:50.530167] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
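The event_perf run above can be reproduced directly from a built tree; the core mask and duration below are the ones used in the log (a standalone invocation outside the autotest wrappers, shown here only as an illustration):

./test/event/event_perf/event_perf -m 0xF -t 1   # dispatch events on 4 reactors for one second and print per-lcore counts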
00:06:05.856 [2024-07-14 10:14:50.530237] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2211471 ] 00:06:05.856 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.856 [2024-07-14 10:14:50.603138] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.856 [2024-07-14 10:14:50.644646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.794 test_start 00:06:06.794 oneshot 00:06:06.794 tick 100 00:06:06.794 tick 100 00:06:06.794 tick 250 00:06:06.794 tick 100 00:06:06.794 tick 100 00:06:06.794 tick 100 00:06:06.794 tick 250 00:06:06.794 tick 500 00:06:06.794 tick 100 00:06:06.794 tick 100 00:06:06.794 tick 250 00:06:06.794 tick 100 00:06:06.794 tick 100 00:06:06.794 test_end 00:06:06.794 00:06:06.794 real 0m1.191s 00:06:06.794 user 0m1.107s 00:06:06.794 sys 0m0.079s 00:06:06.794 10:14:51 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.794 10:14:51 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:06.794 ************************************ 00:06:06.794 END TEST event_reactor 00:06:06.794 ************************************ 00:06:06.794 10:14:51 event -- common/autotest_common.sh@1142 -- # return 0 00:06:06.794 10:14:51 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:06.794 10:14:51 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:06.794 10:14:51 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.794 10:14:51 event -- common/autotest_common.sh@10 -- # set +x 00:06:06.794 ************************************ 00:06:06.794 START TEST event_reactor_perf 00:06:06.794 ************************************ 00:06:06.794 10:14:51 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:07.053 [2024-07-14 10:14:51.787217] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:06:07.053 [2024-07-14 10:14:51.787285] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2211695 ] 00:06:07.053 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.053 [2024-07-14 10:14:51.858357] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.053 [2024-07-14 10:14:51.898513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.990 test_start 00:06:07.990 test_end 00:06:07.990 Performance: 507821 events per second 00:06:07.990 00:06:07.990 real 0m1.186s 00:06:07.990 user 0m1.099s 00:06:07.990 sys 0m0.082s 00:06:07.990 10:14:52 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:07.990 10:14:52 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:07.990 ************************************ 00:06:07.990 END TEST event_reactor_perf 00:06:07.990 ************************************ 00:06:08.250 10:14:52 event -- common/autotest_common.sh@1142 -- # return 0 00:06:08.250 10:14:52 event -- event/event.sh@49 -- # uname -s 00:06:08.250 10:14:52 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:08.250 10:14:52 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:08.250 10:14:52 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:08.250 10:14:52 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.250 10:14:52 event -- common/autotest_common.sh@10 -- # set +x 00:06:08.250 ************************************ 00:06:08.250 START TEST event_scheduler 00:06:08.250 ************************************ 00:06:08.250 10:14:53 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:08.250 * Looking for test storage... 00:06:08.250 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:08.250 10:14:53 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:08.250 10:14:53 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2211969 00:06:08.250 10:14:53 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:08.250 10:14:53 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:08.250 10:14:53 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2211969 00:06:08.250 10:14:53 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 2211969 ']' 00:06:08.250 10:14:53 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.250 10:14:53 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:08.250 10:14:53 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:08.250 10:14:53 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:08.250 10:14:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:08.250 [2024-07-14 10:14:53.166802] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:08.250 [2024-07-14 10:14:53.166851] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2211969 ] 00:06:08.250 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.508 [2024-07-14 10:14:53.235249] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:08.508 [2024-07-14 10:14:53.278157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.508 [2024-07-14 10:14:53.278182] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.508 [2024-07-14 10:14:53.278205] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:08.508 [2024-07-14 10:14:53.278206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:08.508 10:14:53 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:08.508 10:14:53 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:06:08.508 10:14:53 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:08.508 10:14:53 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:08.508 10:14:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:08.508 [2024-07-14 10:14:53.322976] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:08.508 [2024-07-14 10:14:53.322993] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:06:08.508 [2024-07-14 10:14:53.323004] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:08.508 [2024-07-14 10:14:53.323009] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:08.508 [2024-07-14 10:14:53.323015] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:08.508 10:14:53 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:08.508 10:14:53 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:08.508 10:14:53 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:08.508 10:14:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:08.508 [2024-07-14 10:14:53.389219] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
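The scheduler test that just started above is launched with --wait-for-rpc, switched to the dynamic scheduler over RPC, and only then told to finish initialization; the dpdk_governor error shows it falling back when the app core mask does not cover full SMT sibling sets. A minimal sketch of that bring-up, with the binary path, masks and flags copied from the log and rpc_cmd written out as a plain scripts/rpc.py call:

./test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &   # main core 2, subsystem init deferred until RPC
./scripts/rpc.py framework_set_scheduler dynamic                     # the run above then reports load limit 20, core limit 80, core busy 95
./scripts/rpc.py framework_start_init                                # let the app finish initialization and start the test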
00:06:08.508 10:14:53 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:08.508 10:14:53 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:08.508 10:14:53 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:08.508 10:14:53 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.508 10:14:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:08.508 ************************************ 00:06:08.508 START TEST scheduler_create_thread 00:06:08.508 ************************************ 00:06:08.508 10:14:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:06:08.508 10:14:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:08.508 10:14:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:08.508 10:14:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.508 2 00:06:08.508 10:14:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:08.508 10:14:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:08.508 10:14:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:08.508 10:14:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.508 3 00:06:08.508 10:14:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:08.508 10:14:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:08.508 10:14:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:08.508 10:14:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.508 4 00:06:08.508 10:14:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:08.508 10:14:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:08.508 10:14:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:08.508 10:14:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.508 5 00:06:08.508 10:14:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:08.508 10:14:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:08.508 10:14:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:08.508 10:14:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.508 6 00:06:08.508 10:14:53 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:08.508 10:14:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:08.508 10:14:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:08.508 10:14:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.508 7 00:06:08.508 10:14:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:08.508 10:14:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:08.508 10:14:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:08.508 10:14:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.508 8 00:06:08.508 10:14:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:08.508 10:14:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:08.508 10:14:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:08.508 10:14:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.766 9 00:06:08.766 10:14:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:08.766 10:14:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:08.766 10:14:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:08.766 10:14:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.766 10 00:06:08.766 10:14:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:08.766 10:14:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:08.766 10:14:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:08.766 10:14:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.766 10:14:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:08.766 10:14:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:08.766 10:14:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:08.766 10:14:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:08.766 10:14:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.766 10:14:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:08.766 10:14:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:08.766 10:14:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:08.766 10:14:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.143 10:14:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.143 10:14:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:10.143 10:14:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:10.143 10:14:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.143 10:14:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.080 10:14:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:11.080 00:06:11.080 real 0m2.620s 00:06:11.080 user 0m0.023s 00:06:11.080 sys 0m0.005s 00:06:11.080 10:14:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:11.080 10:14:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.080 ************************************ 00:06:11.080 END TEST scheduler_create_thread 00:06:11.080 ************************************ 00:06:11.339 10:14:56 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:06:11.339 10:14:56 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:11.339 10:14:56 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2211969 00:06:11.339 10:14:56 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 2211969 ']' 00:06:11.339 10:14:56 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 2211969 00:06:11.339 10:14:56 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:06:11.339 10:14:56 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:11.339 10:14:56 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2211969 00:06:11.339 10:14:56 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:11.339 10:14:56 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:11.339 10:14:56 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2211969' 00:06:11.339 killing process with pid 2211969 00:06:11.339 10:14:56 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 2211969 00:06:11.339 10:14:56 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 2211969 00:06:11.598 [2024-07-14 10:14:56.523241] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
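The scheduler_create_thread sub-test above exercises per-thread control through the test's scheduler_plugin (an RPC plugin provided alongside this test, not a core RPC): it creates pinned active and idle threads with different cpumasks, raises one thread's activity, then creates and deletes a throw-away thread. A sketch of the same calls as plain scripts/rpc.py invocations; loading scheduler_plugin from the test directory (e.g. via PYTHONPATH) is assumed and not shown, and the thread ids 11 and 12 are the ones returned in this particular run:

./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100   # 100% active, pinned to core 0
./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0              # created idle; returns id 11 in this run
./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50                        # raise thread 11 to 50% active
./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100                # unpinned thread, id 12 in this run
./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12                               # and remove it again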
00:06:11.857 00:06:11.857 real 0m3.678s 00:06:11.857 user 0m5.521s 00:06:11.857 sys 0m0.360s 00:06:11.857 10:14:56 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:11.857 10:14:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:11.857 ************************************ 00:06:11.857 END TEST event_scheduler 00:06:11.857 ************************************ 00:06:11.857 10:14:56 event -- common/autotest_common.sh@1142 -- # return 0 00:06:11.857 10:14:56 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:11.857 10:14:56 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:11.857 10:14:56 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:11.857 10:14:56 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.857 10:14:56 event -- common/autotest_common.sh@10 -- # set +x 00:06:11.857 ************************************ 00:06:11.857 START TEST app_repeat 00:06:11.857 ************************************ 00:06:11.857 10:14:56 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:06:11.857 10:14:56 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.857 10:14:56 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.857 10:14:56 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:11.857 10:14:56 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:11.857 10:14:56 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:11.857 10:14:56 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:11.857 10:14:56 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:11.857 10:14:56 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2212704 00:06:11.857 10:14:56 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:11.857 10:14:56 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:11.857 10:14:56 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2212704' 00:06:11.857 Process app_repeat pid: 2212704 00:06:11.857 10:14:56 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:11.857 10:14:56 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:11.857 spdk_app_start Round 0 00:06:11.857 10:14:56 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2212704 /var/tmp/spdk-nbd.sock 00:06:11.857 10:14:56 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2212704 ']' 00:06:11.857 10:14:56 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:11.857 10:14:56 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:11.857 10:14:56 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:11.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:11.857 10:14:56 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:11.857 10:14:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:11.857 [2024-07-14 10:14:56.820932] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:06:11.857 [2024-07-14 10:14:56.820988] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2212704 ] 00:06:12.116 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.116 [2024-07-14 10:14:56.891700] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:12.116 [2024-07-14 10:14:56.931695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.116 [2024-07-14 10:14:56.931696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.116 10:14:57 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:12.116 10:14:57 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:12.116 10:14:57 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:12.375 Malloc0 00:06:12.375 10:14:57 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:12.634 Malloc1 00:06:12.634 10:14:57 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:12.634 10:14:57 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.634 10:14:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:12.634 10:14:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:12.634 10:14:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.634 10:14:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:12.634 10:14:57 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:12.634 10:14:57 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.634 10:14:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:12.634 10:14:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:12.634 10:14:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.634 10:14:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:12.634 10:14:57 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:12.634 10:14:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:12.634 10:14:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:12.634 10:14:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:12.634 /dev/nbd0 00:06:12.634 10:14:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:12.634 10:14:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:12.634 10:14:57 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:12.634 10:14:57 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:12.634 10:14:57 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:12.634 10:14:57 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:12.634 10:14:57 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:12.634 10:14:57 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:12.634 10:14:57 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:12.634 10:14:57 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:12.634 10:14:57 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:12.634 1+0 records in 00:06:12.634 1+0 records out 00:06:12.634 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000193455 s, 21.2 MB/s 00:06:12.634 10:14:57 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:12.894 10:14:57 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:12.894 10:14:57 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:12.894 10:14:57 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:12.894 10:14:57 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:12.894 10:14:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:12.894 10:14:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:12.894 10:14:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:12.894 /dev/nbd1 00:06:12.894 10:14:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:12.894 10:14:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:12.894 10:14:57 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:12.894 10:14:57 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:12.894 10:14:57 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:12.894 10:14:57 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:12.894 10:14:57 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:12.894 10:14:57 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:12.894 10:14:57 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:12.894 10:14:57 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:12.894 10:14:57 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:12.894 1+0 records in 00:06:12.894 1+0 records out 00:06:12.894 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000222461 s, 18.4 MB/s 00:06:12.894 10:14:57 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:12.894 10:14:57 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:12.894 10:14:57 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:12.894 10:14:57 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:12.894 10:14:57 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:12.894 10:14:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:12.894 10:14:57 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:12.894 10:14:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:12.894 10:14:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.894 10:14:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:13.153 10:14:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:13.153 { 00:06:13.153 "nbd_device": "/dev/nbd0", 00:06:13.153 "bdev_name": "Malloc0" 00:06:13.153 }, 00:06:13.153 { 00:06:13.153 "nbd_device": "/dev/nbd1", 00:06:13.153 "bdev_name": "Malloc1" 00:06:13.153 } 00:06:13.153 ]' 00:06:13.153 10:14:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:13.153 { 00:06:13.153 "nbd_device": "/dev/nbd0", 00:06:13.153 "bdev_name": "Malloc0" 00:06:13.153 }, 00:06:13.153 { 00:06:13.153 "nbd_device": "/dev/nbd1", 00:06:13.153 "bdev_name": "Malloc1" 00:06:13.153 } 00:06:13.153 ]' 00:06:13.153 10:14:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:13.153 10:14:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:13.153 /dev/nbd1' 00:06:13.153 10:14:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:13.153 /dev/nbd1' 00:06:13.153 10:14:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:13.153 10:14:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:13.153 10:14:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:13.153 10:14:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:13.153 10:14:58 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:13.153 10:14:58 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:13.153 10:14:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.153 10:14:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:13.153 10:14:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:13.153 10:14:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:13.153 10:14:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:13.153 10:14:58 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:13.153 256+0 records in 00:06:13.153 256+0 records out 00:06:13.153 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0102874 s, 102 MB/s 00:06:13.153 10:14:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:13.153 10:14:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:13.153 256+0 records in 00:06:13.153 256+0 records out 00:06:13.153 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0147244 s, 71.2 MB/s 00:06:13.153 10:14:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:13.153 10:14:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:13.153 256+0 records in 00:06:13.153 256+0 records out 00:06:13.153 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0143274 s, 73.2 MB/s 00:06:13.153 10:14:58 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:13.153 10:14:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.153 10:14:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:13.153 10:14:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:13.153 10:14:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:13.153 10:14:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:13.153 10:14:58 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:13.153 10:14:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:13.153 10:14:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:13.153 10:14:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:13.153 10:14:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:13.153 10:14:58 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:13.153 10:14:58 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:13.153 10:14:58 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.153 10:14:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.153 10:14:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:13.153 10:14:58 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:13.153 10:14:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:13.153 10:14:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:13.413 10:14:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:13.413 10:14:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:13.413 10:14:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:13.413 10:14:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:13.413 10:14:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:13.413 10:14:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:13.413 10:14:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:13.413 10:14:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:13.413 10:14:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:13.413 10:14:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:13.672 10:14:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:13.672 10:14:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:13.672 10:14:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:13.672 10:14:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:13.672 10:14:58 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:13.672 10:14:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:13.672 10:14:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:13.672 10:14:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:13.672 10:14:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:13.672 10:14:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.672 10:14:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:13.931 10:14:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:13.931 10:14:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:13.931 10:14:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:13.931 10:14:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:13.932 10:14:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:13.932 10:14:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:13.932 10:14:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:13.932 10:14:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:13.932 10:14:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:13.932 10:14:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:13.932 10:14:58 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:13.932 10:14:58 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:13.932 10:14:58 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:14.191 10:14:58 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:14.191 [2024-07-14 10:14:59.137285] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:14.191 [2024-07-14 10:14:59.173677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.191 [2024-07-14 10:14:59.173678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.449 [2024-07-14 10:14:59.214537] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:14.449 [2024-07-14 10:14:59.214579] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:17.739 10:15:01 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:17.739 10:15:01 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:17.739 spdk_app_start Round 1 00:06:17.739 10:15:01 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2212704 /var/tmp/spdk-nbd.sock 00:06:17.739 10:15:01 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2212704 ']' 00:06:17.739 10:15:01 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:17.739 10:15:01 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:17.739 10:15:01 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:17.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
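The app_repeat trace above and below follows a fixed per-round recipe. Read from the event/event.sh line numbers in the trace, each round amounts to roughly the sketch below; this is a paraphrase of the traced flow rather than the literal script, and $app_pid and $rpc are placeholders for the concrete values seen in this run (pid 2212704, scripts/rpc.py against /var/tmp/spdk-nbd.sock):

    rpc="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"        # assumed shorthand for the rpc.py calls in the trace
    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten "$app_pid" /var/tmp/spdk-nbd.sock   # block until the target answers on its RPC socket
        $rpc bdev_malloc_create 64 4096                   # Malloc0: 64 MB, 4096-byte blocks
        $rpc bdev_malloc_create 64 4096                   # Malloc1
        nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
        $rpc spdk_kill_instance SIGTERM                   # ask the app to shut itself down before the next round
        sleep 3
    done

Because the SIGTERM is delivered through the RPC socket rather than from the shell, the app itself prints the "Shutdown signal received, stop current app iteration" lines that separate the rounds in this log.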
00:06:17.739 10:15:01 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:17.739 10:15:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:17.739 10:15:02 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:17.739 10:15:02 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:17.739 10:15:02 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:17.739 Malloc0 00:06:17.739 10:15:02 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:17.739 Malloc1 00:06:17.739 10:15:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:17.739 10:15:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.739 10:15:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:17.739 10:15:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:17.739 10:15:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.739 10:15:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:17.739 10:15:02 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:17.739 10:15:02 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.739 10:15:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:17.739 10:15:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:17.739 10:15:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.739 10:15:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:17.739 10:15:02 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:17.739 10:15:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:17.739 10:15:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:17.739 10:15:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:17.739 /dev/nbd0 00:06:17.998 10:15:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:17.998 10:15:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:17.998 10:15:02 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:17.998 10:15:02 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:17.998 10:15:02 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:17.998 10:15:02 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:17.998 10:15:02 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:17.998 10:15:02 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:17.998 10:15:02 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:17.998 10:15:02 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:17.998 10:15:02 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:17.998 1+0 records in 00:06:17.998 1+0 records out 00:06:17.998 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000206092 s, 19.9 MB/s 00:06:17.998 10:15:02 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:17.998 10:15:02 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:17.998 10:15:02 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:17.998 10:15:02 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:17.998 10:15:02 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:17.998 10:15:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:17.998 10:15:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:17.998 10:15:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:17.998 /dev/nbd1 00:06:17.998 10:15:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:17.998 10:15:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:17.998 10:15:02 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:17.998 10:15:02 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:17.998 10:15:02 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:17.998 10:15:02 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:17.998 10:15:02 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:17.998 10:15:02 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:17.998 10:15:02 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:17.999 10:15:02 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:17.999 10:15:02 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:17.999 1+0 records in 00:06:17.999 1+0 records out 00:06:17.999 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000207871 s, 19.7 MB/s 00:06:17.999 10:15:02 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:17.999 10:15:02 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:17.999 10:15:02 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:17.999 10:15:02 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:17.999 10:15:02 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:17.999 10:15:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:17.999 10:15:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:17.999 10:15:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:17.999 10:15:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.999 10:15:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:18.257 10:15:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:18.257 { 00:06:18.257 "nbd_device": "/dev/nbd0", 00:06:18.257 "bdev_name": "Malloc0" 00:06:18.257 }, 00:06:18.257 { 00:06:18.257 "nbd_device": "/dev/nbd1", 00:06:18.257 "bdev_name": "Malloc1" 00:06:18.257 } 00:06:18.257 ]' 00:06:18.257 10:15:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:18.257 { 00:06:18.257 "nbd_device": "/dev/nbd0", 00:06:18.257 "bdev_name": "Malloc0" 00:06:18.257 }, 00:06:18.257 { 00:06:18.257 "nbd_device": "/dev/nbd1", 00:06:18.257 "bdev_name": "Malloc1" 00:06:18.257 } 00:06:18.257 ]' 00:06:18.257 10:15:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:18.257 10:15:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:18.257 /dev/nbd1' 00:06:18.257 10:15:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:18.257 /dev/nbd1' 00:06:18.257 10:15:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:18.257 10:15:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:18.257 10:15:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:18.257 10:15:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:18.257 10:15:03 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:18.257 10:15:03 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:18.257 10:15:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.257 10:15:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:18.257 10:15:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:18.257 10:15:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:18.257 10:15:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:18.257 10:15:03 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:18.257 256+0 records in 00:06:18.257 256+0 records out 00:06:18.257 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103421 s, 101 MB/s 00:06:18.257 10:15:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:18.257 10:15:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:18.515 256+0 records in 00:06:18.515 256+0 records out 00:06:18.515 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0139406 s, 75.2 MB/s 00:06:18.515 10:15:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:18.515 10:15:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:18.515 256+0 records in 00:06:18.515 256+0 records out 00:06:18.515 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0146942 s, 71.4 MB/s 00:06:18.515 10:15:03 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:18.515 10:15:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.515 10:15:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:18.515 10:15:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:18.515 10:15:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:18.515 10:15:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:18.515 10:15:03 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:18.515 10:15:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:18.515 10:15:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:18.515 10:15:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:18.515 10:15:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:18.515 10:15:03 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:18.515 10:15:03 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:18.515 10:15:03 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.515 10:15:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.515 10:15:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:18.515 10:15:03 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:18.515 10:15:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:18.515 10:15:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:18.515 10:15:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:18.515 10:15:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:18.515 10:15:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:18.515 10:15:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:18.515 10:15:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:18.515 10:15:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:18.515 10:15:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:18.515 10:15:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:18.515 10:15:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:18.515 10:15:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:18.773 10:15:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:18.773 10:15:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:18.773 10:15:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:18.773 10:15:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:18.773 10:15:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:18.773 10:15:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:18.773 10:15:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:18.773 10:15:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:18.773 10:15:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:18.773 10:15:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.773 10:15:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:19.030 10:15:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:19.030 10:15:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:19.030 10:15:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:19.031 10:15:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:19.031 10:15:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:19.031 10:15:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:19.031 10:15:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:19.031 10:15:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:19.031 10:15:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:19.031 10:15:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:19.031 10:15:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:19.031 10:15:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:19.031 10:15:03 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:19.289 10:15:04 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:19.546 [2024-07-14 10:15:04.280575] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:19.546 [2024-07-14 10:15:04.317692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.546 [2024-07-14 10:15:04.317692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.546 [2024-07-14 10:15:04.358652] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:19.546 [2024-07-14 10:15:04.358692] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:22.878 10:15:07 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:22.878 10:15:07 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:22.878 spdk_app_start Round 2 00:06:22.878 10:15:07 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2212704 /var/tmp/spdk-nbd.sock 00:06:22.878 10:15:07 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2212704 ']' 00:06:22.878 10:15:07 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:22.878 10:15:07 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:22.878 10:15:07 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:22.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
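For reference, the waitfornbd helper that runs before every dd in these rounds (the autotest_common.sh@866-887 trace) reduces to roughly the loop below. The /tmp scratch path and the retry sleeps are assumptions; the traced run uses the workspace's test/event/nbdtest file and finds each device on the first pass, so the retries never fire here:

    waitfornbd() {                                             # sketch reconstructed from the traced helper
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break   # wait for the kernel to publish the device
            sleep 0.1                                          # assumed delay, not visible in the trace
        done
        for ((i = 1; i <= 20; i++)); do
            # prove the device is readable: pull one 4 KiB block with O_DIRECT and check it landed
            dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
            size=$(stat -c %s /tmp/nbdtest)
            rm -f /tmp/nbdtest
            [ "$size" != 0 ] && return 0
            sleep 0.1                                          # assumed
        done
        return 1
    }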
00:06:22.878 10:15:07 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:22.878 10:15:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:22.878 10:15:07 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:22.878 10:15:07 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:22.878 10:15:07 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:22.878 Malloc0 00:06:22.878 10:15:07 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:22.878 Malloc1 00:06:22.878 10:15:07 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:22.878 10:15:07 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.878 10:15:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:22.878 10:15:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:22.878 10:15:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.878 10:15:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:22.878 10:15:07 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:22.878 10:15:07 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.878 10:15:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:22.879 10:15:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:22.879 10:15:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.879 10:15:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:22.879 10:15:07 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:22.879 10:15:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:22.879 10:15:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:22.879 10:15:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:22.879 /dev/nbd0 00:06:23.136 10:15:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:23.136 10:15:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:23.136 10:15:07 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:23.136 10:15:07 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:23.136 10:15:07 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:23.136 10:15:07 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:23.136 10:15:07 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:23.137 10:15:07 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:23.137 10:15:07 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:23.137 10:15:07 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:23.137 10:15:07 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:23.137 1+0 records in 00:06:23.137 1+0 records out 00:06:23.137 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000195264 s, 21.0 MB/s 00:06:23.137 10:15:07 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:23.137 10:15:07 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:23.137 10:15:07 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:23.137 10:15:07 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:23.137 10:15:07 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:23.137 10:15:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:23.137 10:15:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:23.137 10:15:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:23.137 /dev/nbd1 00:06:23.137 10:15:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:23.137 10:15:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:23.137 10:15:08 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:23.137 10:15:08 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:23.137 10:15:08 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:23.137 10:15:08 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:23.137 10:15:08 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:23.137 10:15:08 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:23.137 10:15:08 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:23.137 10:15:08 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:23.137 10:15:08 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:23.137 1+0 records in 00:06:23.137 1+0 records out 00:06:23.137 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000223502 s, 18.3 MB/s 00:06:23.137 10:15:08 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:23.137 10:15:08 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:23.137 10:15:08 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:23.137 10:15:08 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:23.137 10:15:08 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:23.137 10:15:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:23.137 10:15:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:23.137 10:15:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:23.137 10:15:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.137 10:15:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:23.395 10:15:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:23.395 { 00:06:23.395 "nbd_device": "/dev/nbd0", 00:06:23.395 "bdev_name": "Malloc0" 00:06:23.395 }, 00:06:23.395 { 00:06:23.395 "nbd_device": "/dev/nbd1", 00:06:23.395 "bdev_name": "Malloc1" 00:06:23.395 } 00:06:23.395 ]' 00:06:23.395 10:15:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:23.395 { 00:06:23.395 "nbd_device": "/dev/nbd0", 00:06:23.395 "bdev_name": "Malloc0" 00:06:23.395 }, 00:06:23.395 { 00:06:23.395 "nbd_device": "/dev/nbd1", 00:06:23.395 "bdev_name": "Malloc1" 00:06:23.395 } 00:06:23.395 ]' 00:06:23.395 10:15:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:23.395 10:15:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:23.395 /dev/nbd1' 00:06:23.395 10:15:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:23.395 /dev/nbd1' 00:06:23.395 10:15:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:23.395 10:15:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:23.395 10:15:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:23.395 10:15:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:23.395 10:15:08 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:23.395 10:15:08 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:23.395 10:15:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.395 10:15:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:23.395 10:15:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:23.395 10:15:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:23.395 10:15:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:23.395 10:15:08 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:23.395 256+0 records in 00:06:23.395 256+0 records out 00:06:23.395 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103581 s, 101 MB/s 00:06:23.395 10:15:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:23.395 10:15:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:23.395 256+0 records in 00:06:23.395 256+0 records out 00:06:23.395 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0139775 s, 75.0 MB/s 00:06:23.395 10:15:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:23.395 10:15:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:23.654 256+0 records in 00:06:23.654 256+0 records out 00:06:23.654 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0140962 s, 74.4 MB/s 00:06:23.654 10:15:08 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:23.654 10:15:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.654 10:15:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:23.654 10:15:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:23.654 10:15:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:23.654 10:15:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:23.654 10:15:08 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:23.654 10:15:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:23.654 10:15:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:23.654 10:15:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:23.654 10:15:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:23.654 10:15:08 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:23.654 10:15:08 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:23.654 10:15:08 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.654 10:15:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.654 10:15:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:23.654 10:15:08 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:23.654 10:15:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:23.654 10:15:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:23.654 10:15:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:23.654 10:15:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:23.654 10:15:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:23.654 10:15:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:23.655 10:15:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:23.655 10:15:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:23.655 10:15:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:23.655 10:15:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:23.655 10:15:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:23.655 10:15:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:23.913 10:15:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:23.913 10:15:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:23.913 10:15:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:23.913 10:15:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:23.913 10:15:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:23.913 10:15:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:23.913 10:15:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:23.913 10:15:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:23.913 10:15:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:23.913 10:15:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.913 10:15:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:24.173 10:15:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:24.173 10:15:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:24.173 10:15:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:24.173 10:15:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:24.173 10:15:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:24.173 10:15:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:24.173 10:15:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:24.173 10:15:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:24.173 10:15:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:24.173 10:15:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:24.173 10:15:09 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:24.173 10:15:09 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:24.173 10:15:09 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:24.432 10:15:09 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:24.692 [2024-07-14 10:15:09.432639] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:24.692 [2024-07-14 10:15:09.469563] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.692 [2024-07-14 10:15:09.469563] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:24.692 [2024-07-14 10:15:09.510412] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:24.692 [2024-07-14 10:15:09.510452] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:27.983 10:15:12 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2212704 /var/tmp/spdk-nbd.sock 00:06:27.983 10:15:12 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2212704 ']' 00:06:27.983 10:15:12 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:27.983 10:15:12 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:27.983 10:15:12 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:27.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
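Across all three rounds above, the write/verify body of nbd_dd_data_verify is the same. Pieced together from the nbd_common.sh@70-85 trace it is roughly the following, with $tmp_file standing in for the traced test/event/nbdrandtest scratch file:

    # write phase: seed 1 MiB of random data, then push it to every exported NBD with O_DIRECT
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done
    # verify phase: compare the first 1 MiB of each device byte-for-byte against the pattern file
    for dev in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp_file" "$dev"
    done
    rm "$tmp_file"

cmp exits non-zero on the first mismatching byte, so any corruption on the Malloc-backed devices would fail the round at the comparison rather than at the later count checks.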
00:06:27.983 10:15:12 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:27.983 10:15:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:27.983 10:15:12 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:27.983 10:15:12 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:27.983 10:15:12 event.app_repeat -- event/event.sh@39 -- # killprocess 2212704 00:06:27.983 10:15:12 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 2212704 ']' 00:06:27.983 10:15:12 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 2212704 00:06:27.983 10:15:12 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:06:27.983 10:15:12 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:27.983 10:15:12 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2212704 00:06:27.983 10:15:12 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:27.983 10:15:12 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:27.983 10:15:12 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2212704' 00:06:27.983 killing process with pid 2212704 00:06:27.983 10:15:12 event.app_repeat -- common/autotest_common.sh@967 -- # kill 2212704 00:06:27.983 10:15:12 event.app_repeat -- common/autotest_common.sh@972 -- # wait 2212704 00:06:27.983 spdk_app_start is called in Round 0. 00:06:27.983 Shutdown signal received, stop current app iteration 00:06:27.983 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 reinitialization... 00:06:27.983 spdk_app_start is called in Round 1. 00:06:27.983 Shutdown signal received, stop current app iteration 00:06:27.983 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 reinitialization... 00:06:27.983 spdk_app_start is called in Round 2. 00:06:27.983 Shutdown signal received, stop current app iteration 00:06:27.983 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 reinitialization... 00:06:27.983 spdk_app_start is called in Round 3. 
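The killprocess call traced just above is the standard teardown helper from autotest_common.sh. From the traced checks it behaves roughly as sketched below; the sudo-wrapper case only appears here as a comparison against "sudo", so that branch is omitted:

    killprocess() {                                           # sketch, argument validation trimmed
        local pid=$1
        kill -0 "$pid"                                        # fail fast if the process already exited
        local process_name
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 for the SPDK target in this run
        fi
        # the trace compares $process_name against "sudo"; that branch is not exercised here
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                           # reap it so the next test starts from a clean slate
    }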
00:06:27.983 Shutdown signal received, stop current app iteration 00:06:27.983 10:15:12 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:27.983 10:15:12 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:27.983 00:06:27.983 real 0m15.873s 00:06:27.983 user 0m34.602s 00:06:27.983 sys 0m2.454s 00:06:27.983 10:15:12 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.983 10:15:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:27.983 ************************************ 00:06:27.983 END TEST app_repeat 00:06:27.983 ************************************ 00:06:27.983 10:15:12 event -- common/autotest_common.sh@1142 -- # return 0 00:06:27.983 10:15:12 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:27.983 10:15:12 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:27.983 10:15:12 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:27.983 10:15:12 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.983 10:15:12 event -- common/autotest_common.sh@10 -- # set +x 00:06:27.983 ************************************ 00:06:27.983 START TEST cpu_locks 00:06:27.983 ************************************ 00:06:27.983 10:15:12 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:27.983 * Looking for test storage... 00:06:27.983 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:27.983 10:15:12 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:27.983 10:15:12 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:27.983 10:15:12 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:27.984 10:15:12 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:27.984 10:15:12 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:27.984 10:15:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.984 10:15:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:27.984 ************************************ 00:06:27.984 START TEST default_locks 00:06:27.984 ************************************ 00:06:27.984 10:15:12 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:06:27.984 10:15:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2216207 00:06:27.984 10:15:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2216207 00:06:27.984 10:15:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:27.984 10:15:12 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 2216207 ']' 00:06:27.984 10:15:12 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.984 10:15:12 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:27.984 10:15:12 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
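default_locks, which begins here, starts one spdk_tgt pinned to core 0 (-m 0x1) and then checks that the process really holds its per-core lock file. As the lines that follow show, the check itself is a one-liner over lslocks; the stray "lslocks: write error" printed there is most likely harmless, since grep -q closes the pipe as soon as it sees a match:

    locks_exist() {                                   # sketch of the traced cpu_locks.sh helper
        lslocks -p "$1" | grep -q spdk_cpu_lock       # does this PID hold a lock file named spdk_cpu_lock*?
    }

    locks_exist "$spdk_tgt_pid"                       # usage, as traced against pid 2216207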
00:06:27.984 10:15:12 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:27.984 10:15:12 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:27.984 [2024-07-14 10:15:12.907401] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:27.984 [2024-07-14 10:15:12.907451] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2216207 ] 00:06:27.984 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.242 [2024-07-14 10:15:12.976811] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.242 [2024-07-14 10:15:13.017666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.809 10:15:13 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:28.809 10:15:13 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:06:28.809 10:15:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2216207 00:06:28.809 10:15:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2216207 00:06:28.809 10:15:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:29.067 lslocks: write error 00:06:29.067 10:15:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2216207 00:06:29.067 10:15:13 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 2216207 ']' 00:06:29.067 10:15:13 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 2216207 00:06:29.067 10:15:13 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:06:29.067 10:15:13 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:29.067 10:15:13 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2216207 00:06:29.067 10:15:14 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:29.067 10:15:14 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:29.067 10:15:14 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2216207' 00:06:29.067 killing process with pid 2216207 00:06:29.067 10:15:14 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 2216207 00:06:29.067 10:15:14 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 2216207 00:06:29.636 10:15:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2216207 00:06:29.636 10:15:14 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:29.636 10:15:14 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2216207 00:06:29.636 10:15:14 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:29.636 10:15:14 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:29.636 10:15:14 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:29.636 10:15:14 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:29.636 10:15:14 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 2216207 00:06:29.636 10:15:14 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 2216207 ']' 00:06:29.636 10:15:14 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.636 10:15:14 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:29.636 10:15:14 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.636 10:15:14 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:29.636 10:15:14 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:29.636 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2216207) - No such process 00:06:29.636 ERROR: process (pid: 2216207) is no longer running 00:06:29.636 10:15:14 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:29.636 10:15:14 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:06:29.636 10:15:14 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:29.636 10:15:14 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:29.636 10:15:14 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:29.636 10:15:14 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:29.636 10:15:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:29.636 10:15:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:29.636 10:15:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:29.636 10:15:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:29.636 00:06:29.636 real 0m1.487s 00:06:29.636 user 0m1.554s 00:06:29.636 sys 0m0.499s 00:06:29.636 10:15:14 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:29.636 10:15:14 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:29.636 ************************************ 00:06:29.636 END TEST default_locks 00:06:29.636 ************************************ 00:06:29.636 10:15:14 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:29.636 10:15:14 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:29.636 10:15:14 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:29.636 10:15:14 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.636 10:15:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:29.636 ************************************ 00:06:29.636 START TEST default_locks_via_rpc 00:06:29.636 ************************************ 00:06:29.636 10:15:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:06:29.636 10:15:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2216465 00:06:29.636 10:15:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2216465 00:06:29.636 10:15:14 event.cpu_locks.default_locks_via_rpc -- 
event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:29.636 10:15:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2216465 ']' 00:06:29.636 10:15:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.636 10:15:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:29.636 10:15:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.636 10:15:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:29.636 10:15:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.636 [2024-07-14 10:15:14.463466] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:29.636 [2024-07-14 10:15:14.463510] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2216465 ] 00:06:29.636 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.636 [2024-07-14 10:15:14.532052] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.636 [2024-07-14 10:15:14.569377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.895 10:15:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:29.895 10:15:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:29.895 10:15:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:29.895 10:15:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.895 10:15:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.895 10:15:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.895 10:15:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:29.895 10:15:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:29.895 10:15:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:29.895 10:15:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:29.895 10:15:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:29.895 10:15:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.895 10:15:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.895 10:15:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.895 10:15:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2216465 00:06:29.895 10:15:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2216465 00:06:29.895 10:15:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 
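default_locks_via_rpc exercises the same lock from the RPC side. The sequence just traced is, in outline, the one below; rpc_cmd appears to be the suite's wrapper for issuing RPCs against the default /var/tmp/spdk.sock socket, and no_locks is the traced helper that asserts the lock-file list is empty:

    rpc_cmd framework_disable_cpumask_locks               # drop the per-core lock files while the target keeps running
    no_locks                                              # assert that no spdk_cpu_lock file is held any more
    rpc_cmd framework_enable_cpumask_locks                # take the locks back
    lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock    # and confirm the lock shows up again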
00:06:30.154 10:15:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2216465 00:06:30.154 10:15:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 2216465 ']' 00:06:30.154 10:15:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 2216465 00:06:30.154 10:15:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:06:30.154 10:15:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:30.154 10:15:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2216465 00:06:30.412 10:15:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:30.412 10:15:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:30.412 10:15:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2216465' 00:06:30.412 killing process with pid 2216465 00:06:30.412 10:15:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 2216465 00:06:30.412 10:15:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 2216465 00:06:30.671 00:06:30.671 real 0m1.044s 00:06:30.671 user 0m0.976s 00:06:30.671 sys 0m0.493s 00:06:30.671 10:15:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:30.671 10:15:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.671 ************************************ 00:06:30.671 END TEST default_locks_via_rpc 00:06:30.671 ************************************ 00:06:30.671 10:15:15 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:30.671 10:15:15 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:30.671 10:15:15 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:30.671 10:15:15 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.671 10:15:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:30.671 ************************************ 00:06:30.671 START TEST non_locking_app_on_locked_coremask 00:06:30.671 ************************************ 00:06:30.671 10:15:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:06:30.671 10:15:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2216694 00:06:30.671 10:15:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2216694 /var/tmp/spdk.sock 00:06:30.671 10:15:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:30.671 10:15:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2216694 ']' 00:06:30.671 10:15:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.671 10:15:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:30.671 10:15:15 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.671 10:15:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:30.671 10:15:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:30.671 [2024-07-14 10:15:15.573302] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:30.671 [2024-07-14 10:15:15.573340] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2216694 ] 00:06:30.671 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.671 [2024-07-14 10:15:15.642100] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.930 [2024-07-14 10:15:15.683020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.930 10:15:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:30.930 10:15:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:30.930 10:15:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2216733 00:06:30.930 10:15:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2216733 /var/tmp/spdk2.sock 00:06:30.930 10:15:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:30.930 10:15:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2216733 ']' 00:06:30.930 10:15:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:30.930 10:15:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:30.930 10:15:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:30.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:30.930 10:15:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:30.930 10:15:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:31.189 [2024-07-14 10:15:15.916416] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:31.189 [2024-07-14 10:15:15.916464] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2216733 ] 00:06:31.189 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.189 [2024-07-14 10:15:15.987940] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
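
The launch just traced is the interesting half of this test: the first spdk_tgt already holds the core-0 lock, and the second one is started on the same mask but with --disable-cpumask-locks, so it skips lock enforcement (hence the "CPU core locks deactivated" notice) and comes up anyway. A condensed sketch of the two launches, with the binary path and sockets taken from the trace and the waits omitted:

    # Condensed sketch; not the actual test script. Both instances ask for core 0 (-m 0x1),
    # but only the first claims /var/tmp/spdk_cpu_lock_000.
    BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
    $BIN -m 0x1 &
    $BIN -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
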
00:06:31.189 [2024-07-14 10:15:15.987961] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.189 [2024-07-14 10:15:16.067912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.756 10:15:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:31.756 10:15:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:31.756 10:15:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2216694 00:06:31.756 10:15:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2216694 00:06:31.756 10:15:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:32.693 lslocks: write error 00:06:32.693 10:15:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2216694 00:06:32.693 10:15:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2216694 ']' 00:06:32.693 10:15:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2216694 00:06:32.693 10:15:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:32.693 10:15:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:32.693 10:15:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2216694 00:06:32.693 10:15:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:32.693 10:15:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:32.693 10:15:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2216694' 00:06:32.693 killing process with pid 2216694 00:06:32.693 10:15:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2216694 00:06:32.693 10:15:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2216694 00:06:33.262 10:15:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2216733 00:06:33.262 10:15:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2216733 ']' 00:06:33.262 10:15:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2216733 00:06:33.262 10:15:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:33.262 10:15:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:33.262 10:15:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2216733 00:06:33.262 10:15:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:33.262 10:15:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:33.262 10:15:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2216733' 00:06:33.262 
killing process with pid 2216733 00:06:33.262 10:15:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2216733 00:06:33.262 10:15:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2216733 00:06:33.522 00:06:33.522 real 0m2.783s 00:06:33.522 user 0m2.872s 00:06:33.522 sys 0m0.929s 00:06:33.522 10:15:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.522 10:15:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:33.522 ************************************ 00:06:33.522 END TEST non_locking_app_on_locked_coremask 00:06:33.522 ************************************ 00:06:33.522 10:15:18 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:33.522 10:15:18 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:33.522 10:15:18 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:33.522 10:15:18 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.522 10:15:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:33.522 ************************************ 00:06:33.522 START TEST locking_app_on_unlocked_coremask 00:06:33.522 ************************************ 00:06:33.522 10:15:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:06:33.522 10:15:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2217219 00:06:33.522 10:15:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2217219 /var/tmp/spdk.sock 00:06:33.522 10:15:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:33.522 10:15:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2217219 ']' 00:06:33.522 10:15:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.522 10:15:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:33.522 10:15:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.522 10:15:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:33.522 10:15:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:33.522 [2024-07-14 10:15:18.425209] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:06:33.522 [2024-07-14 10:15:18.425262] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2217219 ] 00:06:33.522 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.522 [2024-07-14 10:15:18.492591] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:33.522 [2024-07-14 10:15:18.492615] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.781 [2024-07-14 10:15:18.533958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.349 10:15:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:34.349 10:15:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:34.349 10:15:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2217240 00:06:34.349 10:15:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2217240 /var/tmp/spdk2.sock 00:06:34.349 10:15:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:34.349 10:15:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2217240 ']' 00:06:34.349 10:15:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:34.349 10:15:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:34.349 10:15:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:34.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:34.349 10:15:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:34.349 10:15:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:34.349 [2024-07-14 10:15:19.268416] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:06:34.350 [2024-07-14 10:15:19.268463] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2217240 ] 00:06:34.350 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.609 [2024-07-14 10:15:19.344341] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.609 [2024-07-14 10:15:19.420155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.179 10:15:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:35.179 10:15:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:35.179 10:15:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2217240 00:06:35.179 10:15:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2217240 00:06:35.179 10:15:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:35.748 lslocks: write error 00:06:35.748 10:15:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2217219 00:06:35.748 10:15:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2217219 ']' 00:06:35.748 10:15:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 2217219 00:06:35.748 10:15:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:35.748 10:15:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:35.748 10:15:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2217219 00:06:35.748 10:15:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:35.748 10:15:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:35.748 10:15:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2217219' 00:06:35.748 killing process with pid 2217219 00:06:35.748 10:15:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 2217219 00:06:35.748 10:15:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 2217219 00:06:36.316 10:15:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2217240 00:06:36.316 10:15:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2217240 ']' 00:06:36.316 10:15:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 2217240 00:06:36.316 10:15:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:36.316 10:15:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:36.316 10:15:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2217240 00:06:36.316 10:15:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:06:36.316 10:15:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:36.316 10:15:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2217240' 00:06:36.316 killing process with pid 2217240 00:06:36.316 10:15:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 2217240 00:06:36.316 10:15:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 2217240 00:06:36.575 00:06:36.575 real 0m3.127s 00:06:36.575 user 0m3.332s 00:06:36.575 sys 0m0.922s 00:06:36.575 10:15:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.575 10:15:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:36.575 ************************************ 00:06:36.575 END TEST locking_app_on_unlocked_coremask 00:06:36.575 ************************************ 00:06:36.575 10:15:21 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:36.575 10:15:21 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:36.575 10:15:21 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:36.575 10:15:21 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.575 10:15:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:36.834 ************************************ 00:06:36.834 START TEST locking_app_on_locked_coremask 00:06:36.834 ************************************ 00:06:36.834 10:15:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:06:36.834 10:15:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2217725 00:06:36.834 10:15:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2217725 /var/tmp/spdk.sock 00:06:36.834 10:15:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:36.834 10:15:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2217725 ']' 00:06:36.834 10:15:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.834 10:15:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:36.834 10:15:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.834 10:15:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:36.834 10:15:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:36.834 [2024-07-14 10:15:21.615712] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:06:36.834 [2024-07-14 10:15:21.615751] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2217725 ] 00:06:36.834 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.834 [2024-07-14 10:15:21.680380] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.834 [2024-07-14 10:15:21.721045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.096 10:15:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:37.096 10:15:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:37.096 10:15:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2217737 00:06:37.096 10:15:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2217737 /var/tmp/spdk2.sock 00:06:37.096 10:15:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:37.096 10:15:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:37.096 10:15:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2217737 /var/tmp/spdk2.sock 00:06:37.096 10:15:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:37.096 10:15:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:37.096 10:15:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:37.096 10:15:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:37.096 10:15:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 2217737 /var/tmp/spdk2.sock 00:06:37.096 10:15:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2217737 ']' 00:06:37.096 10:15:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:37.096 10:15:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:37.096 10:15:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:37.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:37.096 10:15:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:37.096 10:15:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:37.096 [2024-07-14 10:15:21.970248] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
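
Here the second instance is started without --disable-cpumask-locks while the first still holds the core-0 lock, so the test wraps waitforlisten in NOT and expects it to fail. Judging from the trace, NOT simply inverts the wrapped command's exit status; a minimal stand-in (illustrative only, the real helper lives in autotest_common.sh) could look like this:

    # Illustrative stand-in: pass exactly when the wrapped command fails.
    NOT() {
        if "$@"; then return 1; else return 0; fi
    }
    # usage in this test: NOT waitforlisten "$spdk_tgt_pid2" /var/tmp/spdk2.sock
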
00:06:37.096 [2024-07-14 10:15:21.970298] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2217737 ] 00:06:37.096 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.096 [2024-07-14 10:15:22.042877] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2217725 has claimed it. 00:06:37.096 [2024-07-14 10:15:22.042907] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:37.696 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2217737) - No such process 00:06:37.696 ERROR: process (pid: 2217737) is no longer running 00:06:37.696 10:15:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:37.696 10:15:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:37.696 10:15:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:37.696 10:15:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:37.696 10:15:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:37.696 10:15:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:37.696 10:15:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2217725 00:06:37.696 10:15:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2217725 00:06:37.696 10:15:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:37.956 lslocks: write error 00:06:37.956 10:15:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2217725 00:06:37.956 10:15:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2217725 ']' 00:06:37.957 10:15:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2217725 00:06:37.957 10:15:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:37.957 10:15:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:37.957 10:15:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2217725 00:06:37.957 10:15:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:37.957 10:15:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:37.957 10:15:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2217725' 00:06:37.957 killing process with pid 2217725 00:06:37.957 10:15:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2217725 00:06:37.957 10:15:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2217725 00:06:38.524 00:06:38.524 real 0m1.670s 00:06:38.524 user 0m1.749s 00:06:38.524 sys 0m0.555s 00:06:38.524 10:15:23 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:38.524 10:15:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:38.524 ************************************ 00:06:38.524 END TEST locking_app_on_locked_coremask 00:06:38.524 ************************************ 00:06:38.524 10:15:23 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:38.524 10:15:23 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:38.524 10:15:23 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:38.524 10:15:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.524 10:15:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:38.524 ************************************ 00:06:38.524 START TEST locking_overlapped_coremask 00:06:38.524 ************************************ 00:06:38.524 10:15:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:06:38.524 10:15:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2217998 00:06:38.524 10:15:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2217998 /var/tmp/spdk.sock 00:06:38.524 10:15:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:38.524 10:15:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 2217998 ']' 00:06:38.524 10:15:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.524 10:15:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:38.524 10:15:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.524 10:15:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:38.524 10:15:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:38.524 [2024-07-14 10:15:23.352153] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:06:38.524 [2024-07-14 10:15:23.352191] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2217998 ] 00:06:38.524 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.524 [2024-07-14 10:15:23.421034] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:38.524 [2024-07-14 10:15:23.463380] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.524 [2024-07-14 10:15:23.463486] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.524 [2024-07-14 10:15:23.463486] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:38.782 10:15:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:38.782 10:15:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:38.782 10:15:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2218100 00:06:38.782 10:15:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2218100 /var/tmp/spdk2.sock 00:06:38.782 10:15:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:38.782 10:15:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:38.782 10:15:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2218100 /var/tmp/spdk2.sock 00:06:38.782 10:15:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:38.782 10:15:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:38.782 10:15:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:38.782 10:15:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:38.782 10:15:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 2218100 /var/tmp/spdk2.sock 00:06:38.782 10:15:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 2218100 ']' 00:06:38.782 10:15:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:38.782 10:15:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:38.782 10:15:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:38.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:38.782 10:15:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:38.782 10:15:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:38.782 [2024-07-14 10:15:23.694433] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
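
The two masks chosen here overlap on exactly one core, which is why the failure that follows names core 2: 0x7 is binary 00111 (cores 0, 1, 2) and 0x1c is binary 11100 (cores 2, 3, 4). A one-liner to confirm the overlap:

    printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # -> overlap mask: 0x4, i.e. bit 2 = core 2
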
00:06:38.782 [2024-07-14 10:15:23.694484] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2218100 ] 00:06:38.782 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.041 [2024-07-14 10:15:23.772595] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2217998 has claimed it. 00:06:39.041 [2024-07-14 10:15:23.772630] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:39.610 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2218100) - No such process 00:06:39.610 ERROR: process (pid: 2218100) is no longer running 00:06:39.610 10:15:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:39.610 10:15:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:39.610 10:15:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:39.610 10:15:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:39.610 10:15:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:39.610 10:15:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:39.610 10:15:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:39.610 10:15:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:39.610 10:15:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:39.610 10:15:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:39.610 10:15:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2217998 00:06:39.610 10:15:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 2217998 ']' 00:06:39.610 10:15:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 2217998 00:06:39.610 10:15:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:06:39.610 10:15:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:39.610 10:15:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2217998 00:06:39.610 10:15:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:39.610 10:15:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:39.610 10:15:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2217998' 00:06:39.610 killing process with pid 2217998 00:06:39.610 10:15:24 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@967 -- # kill 2217998 00:06:39.610 10:15:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 2217998 00:06:39.870 00:06:39.870 real 0m1.377s 00:06:39.870 user 0m3.740s 00:06:39.870 sys 0m0.379s 00:06:39.870 10:15:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:39.870 10:15:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:39.870 ************************************ 00:06:39.870 END TEST locking_overlapped_coremask 00:06:39.870 ************************************ 00:06:39.870 10:15:24 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:39.870 10:15:24 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:39.870 10:15:24 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:39.870 10:15:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.870 10:15:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:39.870 ************************************ 00:06:39.870 START TEST locking_overlapped_coremask_via_rpc 00:06:39.870 ************************************ 00:06:39.870 10:15:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:06:39.870 10:15:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2218259 00:06:39.870 10:15:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2218259 /var/tmp/spdk.sock 00:06:39.870 10:15:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:39.870 10:15:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2218259 ']' 00:06:39.870 10:15:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.870 10:15:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:39.870 10:15:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.870 10:15:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:39.870 10:15:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.870 [2024-07-14 10:15:24.796266] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:39.870 [2024-07-14 10:15:24.796310] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2218259 ] 00:06:39.870 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.129 [2024-07-14 10:15:24.865354] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:40.129 [2024-07-14 10:15:24.865381] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:40.129 [2024-07-14 10:15:24.904815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.129 [2024-07-14 10:15:24.904944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.129 [2024-07-14 10:15:24.904945] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:40.129 10:15:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:40.129 10:15:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:40.129 10:15:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2218413 00:06:40.129 10:15:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2218413 /var/tmp/spdk2.sock 00:06:40.129 10:15:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:40.129 10:15:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2218413 ']' 00:06:40.129 10:15:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:40.129 10:15:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:40.129 10:15:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:40.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:40.129 10:15:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:40.129 10:15:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.388 [2024-07-14 10:15:25.153292] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:40.388 [2024-07-14 10:15:25.153346] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2218413 ] 00:06:40.388 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.388 [2024-07-14 10:15:25.231124] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:40.388 [2024-07-14 10:15:25.231154] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:40.388 [2024-07-14 10:15:25.313068] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:40.388 [2024-07-14 10:15:25.313185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:40.388 [2024-07-14 10:15:25.313185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:41.325 10:15:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:41.325 10:15:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:41.325 10:15:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:41.325 10:15:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.325 10:15:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.325 10:15:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.325 10:15:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:41.325 10:15:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:41.325 10:15:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:41.325 10:15:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:41.326 10:15:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:41.326 10:15:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:41.326 10:15:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:41.326 10:15:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:41.326 10:15:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.326 10:15:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.326 [2024-07-14 10:15:25.973293] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2218259 has claimed it. 
00:06:41.326 request: 00:06:41.326 { 00:06:41.326 "method": "framework_enable_cpumask_locks", 00:06:41.326 "req_id": 1 00:06:41.326 } 00:06:41.326 Got JSON-RPC error response 00:06:41.326 response: 00:06:41.326 { 00:06:41.326 "code": -32603, 00:06:41.326 "message": "Failed to claim CPU core: 2" 00:06:41.326 } 00:06:41.326 10:15:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:41.326 10:15:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:41.326 10:15:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:41.326 10:15:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:41.326 10:15:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:41.326 10:15:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2218259 /var/tmp/spdk.sock 00:06:41.326 10:15:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2218259 ']' 00:06:41.326 10:15:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.326 10:15:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:41.326 10:15:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.326 10:15:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:41.326 10:15:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.326 10:15:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:41.326 10:15:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:41.326 10:15:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2218413 /var/tmp/spdk2.sock 00:06:41.326 10:15:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2218413 ']' 00:06:41.326 10:15:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:41.326 10:15:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:41.326 10:15:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:41.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
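
The -32603 response above is the expected outcome: both targets were started with --disable-cpumask-locks, the first one (pid 2218259) then turned lock enforcement on via framework_enable_cpumask_locks, so the same RPC against the second target cannot claim the shared core 2. Outside the test harness the call would normally be driven through SPDK's rpc.py; the exact invocation below is an assumption, but the method name and the error body match the trace:

    # Assumed manual reproduction (rpc.py usage is not shown in this log):
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # expected: JSON-RPC error -32603, "Failed to claim CPU core: 2"
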
00:06:41.326 10:15:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:41.326 10:15:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.584 10:15:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:41.584 10:15:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:41.584 10:15:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:41.584 10:15:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:41.584 10:15:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:41.584 10:15:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:41.584 00:06:41.584 real 0m1.601s 00:06:41.584 user 0m0.737s 00:06:41.584 sys 0m0.144s 00:06:41.584 10:15:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:41.584 10:15:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.584 ************************************ 00:06:41.584 END TEST locking_overlapped_coremask_via_rpc 00:06:41.584 ************************************ 00:06:41.584 10:15:26 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:41.584 10:15:26 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:41.584 10:15:26 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2218259 ]] 00:06:41.584 10:15:26 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2218259 00:06:41.584 10:15:26 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2218259 ']' 00:06:41.585 10:15:26 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2218259 00:06:41.585 10:15:26 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:41.585 10:15:26 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:41.585 10:15:26 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2218259 00:06:41.585 10:15:26 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:41.585 10:15:26 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:41.585 10:15:26 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2218259' 00:06:41.585 killing process with pid 2218259 00:06:41.585 10:15:26 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 2218259 00:06:41.585 10:15:26 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 2218259 00:06:41.843 10:15:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2218413 ]] 00:06:41.843 10:15:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2218413 00:06:41.843 10:15:26 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2218413 ']' 00:06:41.843 10:15:26 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2218413 00:06:41.843 10:15:26 event.cpu_locks -- common/autotest_common.sh@953 -- # 
uname 00:06:41.843 10:15:26 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:41.843 10:15:26 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2218413 00:06:41.843 10:15:26 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:41.843 10:15:26 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:41.843 10:15:26 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2218413' 00:06:41.843 killing process with pid 2218413 00:06:41.843 10:15:26 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 2218413 00:06:41.843 10:15:26 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 2218413 00:06:42.102 10:15:27 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:42.102 10:15:27 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:42.361 10:15:27 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2218259 ]] 00:06:42.361 10:15:27 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2218259 00:06:42.361 10:15:27 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2218259 ']' 00:06:42.361 10:15:27 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2218259 00:06:42.361 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2218259) - No such process 00:06:42.361 10:15:27 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 2218259 is not found' 00:06:42.361 Process with pid 2218259 is not found 00:06:42.361 10:15:27 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2218413 ]] 00:06:42.361 10:15:27 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2218413 00:06:42.361 10:15:27 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2218413 ']' 00:06:42.361 10:15:27 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2218413 00:06:42.361 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2218413) - No such process 00:06:42.361 10:15:27 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 2218413 is not found' 00:06:42.361 Process with pid 2218413 is not found 00:06:42.361 10:15:27 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:42.361 00:06:42.361 real 0m14.361s 00:06:42.361 user 0m24.080s 00:06:42.361 sys 0m4.829s 00:06:42.361 10:15:27 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:42.361 10:15:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:42.361 ************************************ 00:06:42.361 END TEST cpu_locks 00:06:42.361 ************************************ 00:06:42.361 10:15:27 event -- common/autotest_common.sh@1142 -- # return 0 00:06:42.361 00:06:42.361 real 0m38.012s 00:06:42.361 user 1m10.725s 00:06:42.361 sys 0m8.243s 00:06:42.361 10:15:27 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:42.361 10:15:27 event -- common/autotest_common.sh@10 -- # set +x 00:06:42.361 ************************************ 00:06:42.361 END TEST event 00:06:42.361 ************************************ 00:06:42.361 10:15:27 -- common/autotest_common.sh@1142 -- # return 0 00:06:42.361 10:15:27 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:42.361 10:15:27 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:42.361 10:15:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.361 
10:15:27 -- common/autotest_common.sh@10 -- # set +x 00:06:42.361 ************************************ 00:06:42.361 START TEST thread 00:06:42.361 ************************************ 00:06:42.361 10:15:27 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:42.361 * Looking for test storage... 00:06:42.361 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:42.361 10:15:27 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:42.361 10:15:27 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:42.361 10:15:27 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.361 10:15:27 thread -- common/autotest_common.sh@10 -- # set +x 00:06:42.361 ************************************ 00:06:42.361 START TEST thread_poller_perf 00:06:42.361 ************************************ 00:06:42.361 10:15:27 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:42.361 [2024-07-14 10:15:27.340022] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:42.361 [2024-07-14 10:15:27.340083] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2218821 ] 00:06:42.620 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.620 [2024-07-14 10:15:27.412948] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.620 [2024-07-14 10:15:27.451969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.620 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:43.555 ====================================== 00:06:43.555 busy:2306649820 (cyc) 00:06:43.555 total_run_count: 407000 00:06:43.555 tsc_hz: 2300000000 (cyc) 00:06:43.555 ====================================== 00:06:43.555 poller_cost: 5667 (cyc), 2463 (nsec) 00:06:43.555 00:06:43.555 real 0m1.198s 00:06:43.555 user 0m1.106s 00:06:43.555 sys 0m0.088s 00:06:43.555 10:15:28 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.555 10:15:28 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:43.555 ************************************ 00:06:43.555 END TEST thread_poller_perf 00:06:43.555 ************************************ 00:06:43.814 10:15:28 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:43.814 10:15:28 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:43.814 10:15:28 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:43.814 10:15:28 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.814 10:15:28 thread -- common/autotest_common.sh@10 -- # set +x 00:06:43.814 ************************************ 00:06:43.814 START TEST thread_poller_perf 00:06:43.814 ************************************ 00:06:43.814 10:15:28 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:43.814 [2024-07-14 10:15:28.606341] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:43.814 [2024-07-14 10:15:28.606411] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2219071 ] 00:06:43.814 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.814 [2024-07-14 10:15:28.679641] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.814 [2024-07-14 10:15:28.719873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.814 Running 1000 pollers for 1 seconds with 0 microseconds period. 
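For reference, poller_cost in these result blocks is simply the busy cycle count divided by total_run_count, converted to nanoseconds via tsc_hz; a minimal bash sketch using the figures from the 1-microsecond run above (all numbers taken from the log, nothing else assumed):

  # cycles per poller invocation = busy cycles / total_run_count
  echo $(( 2306649820 / 407000 ))               # -> 5667 (cyc)
  # nanoseconds per invocation at tsc_hz = 2300000000
  echo $(( 5667 * 1000000000 / 2300000000 ))    # -> 2463 (nsec)

The 425 cyc / 184 nsec figures reported for the 0-microsecond run below follow from the same arithmetic.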
00:06:45.191 ====================================== 00:06:45.191 busy:2301666420 (cyc) 00:06:45.191 total_run_count: 5412000 00:06:45.191 tsc_hz: 2300000000 (cyc) 00:06:45.191 ====================================== 00:06:45.191 poller_cost: 425 (cyc), 184 (nsec) 00:06:45.191 00:06:45.191 real 0m1.195s 00:06:45.191 user 0m1.108s 00:06:45.191 sys 0m0.083s 00:06:45.191 10:15:29 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:45.191 10:15:29 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:45.191 ************************************ 00:06:45.191 END TEST thread_poller_perf 00:06:45.191 ************************************ 00:06:45.191 10:15:29 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:45.191 10:15:29 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:45.191 00:06:45.191 real 0m2.619s 00:06:45.191 user 0m2.302s 00:06:45.191 sys 0m0.325s 00:06:45.191 10:15:29 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:45.191 10:15:29 thread -- common/autotest_common.sh@10 -- # set +x 00:06:45.191 ************************************ 00:06:45.191 END TEST thread 00:06:45.191 ************************************ 00:06:45.191 10:15:29 -- common/autotest_common.sh@1142 -- # return 0 00:06:45.191 10:15:29 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:45.191 10:15:29 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:45.191 10:15:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.191 10:15:29 -- common/autotest_common.sh@10 -- # set +x 00:06:45.191 ************************************ 00:06:45.191 START TEST accel 00:06:45.191 ************************************ 00:06:45.191 10:15:29 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:45.191 * Looking for test storage... 00:06:45.191 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:45.191 10:15:29 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:45.191 10:15:29 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:45.191 10:15:29 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:45.191 10:15:29 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=2219364 00:06:45.191 10:15:29 accel -- accel/accel.sh@63 -- # waitforlisten 2219364 00:06:45.191 10:15:29 accel -- common/autotest_common.sh@829 -- # '[' -z 2219364 ']' 00:06:45.191 10:15:29 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.191 10:15:29 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:45.191 10:15:29 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:45.191 10:15:29 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:45.191 10:15:29 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
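The wait above is for spdk_tgt's RPC socket to appear; the harness's waitforlisten helper in autotest_common.sh handles this (bounded by the max_retries=100 seen above), but the core idea reduces to polling for the UNIX socket, roughly as follows (an illustrative stand-in only, not the actual helper):

  # keep polling until the RPC socket shows up (the real helper also
  # checks that the target pid is still alive and gives up after a
  # bounded number of retries)
  while [ ! -S /var/tmp/spdk.sock ]; do
      sleep 0.1
  done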
00:06:45.191 10:15:29 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:45.191 10:15:29 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:45.191 10:15:29 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:45.191 10:15:29 accel -- common/autotest_common.sh@10 -- # set +x 00:06:45.191 10:15:29 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.191 10:15:29 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.191 10:15:29 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:45.191 10:15:29 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:45.191 10:15:29 accel -- accel/accel.sh@41 -- # jq -r . 00:06:45.191 [2024-07-14 10:15:30.031317] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:45.191 [2024-07-14 10:15:30.031368] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2219364 ] 00:06:45.191 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.191 [2024-07-14 10:15:30.099368] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.191 [2024-07-14 10:15:30.140063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.450 10:15:30 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:45.450 10:15:30 accel -- common/autotest_common.sh@862 -- # return 0 00:06:45.450 10:15:30 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:45.450 10:15:30 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:45.450 10:15:30 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:45.450 10:15:30 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:45.450 10:15:30 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:45.450 10:15:30 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:45.450 10:15:30 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:45.450 10:15:30 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.450 10:15:30 accel -- common/autotest_common.sh@10 -- # set +x 00:06:45.450 10:15:30 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.450 10:15:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:45.450 10:15:30 accel -- accel/accel.sh@72 -- # IFS== 00:06:45.450 10:15:30 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:45.450 10:15:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:45.450 10:15:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:45.450 10:15:30 accel -- accel/accel.sh@72 -- # IFS== 00:06:45.450 10:15:30 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:45.450 10:15:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:45.450 10:15:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:45.450 10:15:30 accel -- accel/accel.sh@72 -- # IFS== 00:06:45.450 10:15:30 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:45.450 10:15:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:45.450 10:15:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:45.450 10:15:30 accel -- accel/accel.sh@72 -- # IFS== 00:06:45.450 10:15:30 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:45.450 10:15:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:45.450 10:15:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:45.450 10:15:30 accel -- accel/accel.sh@72 -- # IFS== 00:06:45.450 10:15:30 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:45.450 10:15:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:45.450 10:15:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:45.450 10:15:30 accel -- accel/accel.sh@72 -- # IFS== 00:06:45.450 10:15:30 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:45.450 10:15:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:45.450 10:15:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:45.450 10:15:30 accel -- accel/accel.sh@72 -- # IFS== 00:06:45.450 10:15:30 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:45.450 10:15:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:45.450 10:15:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:45.450 10:15:30 accel -- accel/accel.sh@72 -- # IFS== 00:06:45.450 10:15:30 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:45.450 10:15:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:45.450 10:15:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:45.450 10:15:30 accel -- accel/accel.sh@72 -- # IFS== 00:06:45.450 10:15:30 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:45.450 10:15:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:45.450 10:15:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:45.450 10:15:30 accel -- accel/accel.sh@72 -- # IFS== 00:06:45.450 10:15:30 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:45.450 10:15:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:45.450 10:15:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:45.450 10:15:30 accel -- accel/accel.sh@72 -- # IFS== 00:06:45.450 10:15:30 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:45.450 
10:15:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:45.450 10:15:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:45.450 10:15:30 accel -- accel/accel.sh@72 -- # IFS== 00:06:45.450 10:15:30 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:45.450 10:15:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:45.450 10:15:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:45.450 10:15:30 accel -- accel/accel.sh@72 -- # IFS== 00:06:45.450 10:15:30 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:45.450 10:15:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:45.450 10:15:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:45.450 10:15:30 accel -- accel/accel.sh@72 -- # IFS== 00:06:45.450 10:15:30 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:45.450 10:15:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:45.450 10:15:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:45.450 10:15:30 accel -- accel/accel.sh@72 -- # IFS== 00:06:45.450 10:15:30 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:45.450 10:15:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:45.450 10:15:30 accel -- accel/accel.sh@75 -- # killprocess 2219364 00:06:45.451 10:15:30 accel -- common/autotest_common.sh@948 -- # '[' -z 2219364 ']' 00:06:45.451 10:15:30 accel -- common/autotest_common.sh@952 -- # kill -0 2219364 00:06:45.451 10:15:30 accel -- common/autotest_common.sh@953 -- # uname 00:06:45.451 10:15:30 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:45.451 10:15:30 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2219364 00:06:45.709 10:15:30 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:45.709 10:15:30 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:45.709 10:15:30 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2219364' 00:06:45.709 killing process with pid 2219364 00:06:45.709 10:15:30 accel -- common/autotest_common.sh@967 -- # kill 2219364 00:06:45.709 10:15:30 accel -- common/autotest_common.sh@972 -- # wait 2219364 00:06:45.969 10:15:30 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:45.969 10:15:30 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:45.969 10:15:30 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:45.969 10:15:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.969 10:15:30 accel -- common/autotest_common.sh@10 -- # set +x 00:06:45.969 10:15:30 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:06:45.969 10:15:30 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:45.969 10:15:30 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:45.969 10:15:30 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:45.969 10:15:30 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:45.969 10:15:30 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.969 10:15:30 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.969 10:15:30 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:45.969 10:15:30 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:45.969 10:15:30 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:06:45.969 10:15:30 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:45.969 10:15:30 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:45.969 10:15:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:45.969 10:15:30 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:45.969 10:15:30 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:45.969 10:15:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.969 10:15:30 accel -- common/autotest_common.sh@10 -- # set +x 00:06:45.969 ************************************ 00:06:45.969 START TEST accel_missing_filename 00:06:45.969 ************************************ 00:06:45.969 10:15:30 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:06:45.969 10:15:30 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:45.969 10:15:30 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:45.969 10:15:30 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:45.969 10:15:30 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:45.969 10:15:30 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:45.969 10:15:30 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:45.969 10:15:30 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:45.969 10:15:30 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:45.969 10:15:30 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:45.969 10:15:30 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:45.969 10:15:30 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:45.969 10:15:30 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.969 10:15:30 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.969 10:15:30 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:45.969 10:15:30 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:45.969 10:15:30 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:45.969 [2024-07-14 10:15:30.899092] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:45.969 [2024-07-14 10:15:30.899161] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2219620 ] 00:06:45.969 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.228 [2024-07-14 10:15:30.969723] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.228 [2024-07-14 10:15:31.011208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.228 [2024-07-14 10:15:31.052028] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:46.228 [2024-07-14 10:15:31.112045] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:46.228 A filename is required. 
00:06:46.228 10:15:31 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:46.228 10:15:31 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:46.228 10:15:31 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:46.228 10:15:31 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:46.228 10:15:31 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:46.228 10:15:31 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:46.228 00:06:46.228 real 0m0.307s 00:06:46.228 user 0m0.222s 00:06:46.228 sys 0m0.126s 00:06:46.228 10:15:31 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:46.228 10:15:31 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:46.228 ************************************ 00:06:46.228 END TEST accel_missing_filename 00:06:46.228 ************************************ 00:06:46.228 10:15:31 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:46.228 10:15:31 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:46.228 10:15:31 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:46.228 10:15:31 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.228 10:15:31 accel -- common/autotest_common.sh@10 -- # set +x 00:06:46.486 ************************************ 00:06:46.486 START TEST accel_compress_verify 00:06:46.486 ************************************ 00:06:46.486 10:15:31 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:46.486 10:15:31 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:46.486 10:15:31 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:46.486 10:15:31 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:46.486 10:15:31 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:46.486 10:15:31 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:46.486 10:15:31 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:46.486 10:15:31 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:46.486 10:15:31 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:46.486 10:15:31 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:46.486 10:15:31 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:46.486 10:15:31 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:46.486 10:15:31 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.486 10:15:31 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.486 10:15:31 
accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:46.486 10:15:31 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:46.486 10:15:31 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:46.486 [2024-07-14 10:15:31.274420] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:46.486 [2024-07-14 10:15:31.274488] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2219649 ] 00:06:46.486 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.486 [2024-07-14 10:15:31.343263] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.486 [2024-07-14 10:15:31.385562] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.486 [2024-07-14 10:15:31.428055] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:46.746 [2024-07-14 10:15:31.486822] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:46.746 00:06:46.746 Compression does not support the verify option, aborting. 00:06:46.746 10:15:31 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:46.746 10:15:31 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:46.746 10:15:31 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:46.746 10:15:31 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:46.746 10:15:31 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:46.746 10:15:31 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:46.746 00:06:46.746 real 0m0.306s 00:06:46.746 user 0m0.213s 00:06:46.746 sys 0m0.132s 00:06:46.746 10:15:31 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:46.746 10:15:31 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:46.746 ************************************ 00:06:46.746 END TEST accel_compress_verify 00:06:46.746 ************************************ 00:06:46.746 10:15:31 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:46.746 10:15:31 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:46.746 10:15:31 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:46.746 10:15:31 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.746 10:15:31 accel -- common/autotest_common.sh@10 -- # set +x 00:06:46.746 ************************************ 00:06:46.746 START TEST accel_wrong_workload 00:06:46.746 ************************************ 00:06:46.746 10:15:31 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:06:46.746 10:15:31 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:46.746 10:15:31 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:46.746 10:15:31 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:46.746 10:15:31 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:46.746 10:15:31 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:46.746 10:15:31 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:46.746 10:15:31 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:06:46.746 10:15:31 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:46.746 10:15:31 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:46.746 10:15:31 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:46.746 10:15:31 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:46.746 10:15:31 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.746 10:15:31 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.746 10:15:31 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:46.746 10:15:31 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:46.746 10:15:31 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:46.746 Unsupported workload type: foobar 00:06:46.746 [2024-07-14 10:15:31.643828] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:46.746 accel_perf options: 00:06:46.746 [-h help message] 00:06:46.746 [-q queue depth per core] 00:06:46.746 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:46.746 [-T number of threads per core 00:06:46.746 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:46.746 [-t time in seconds] 00:06:46.746 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:46.746 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:46.746 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:46.746 [-l for compress/decompress workloads, name of uncompressed input file 00:06:46.746 [-S for crc32c workload, use this seed value (default 0) 00:06:46.746 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:46.746 [-f for fill workload, use this BYTE value (default 255) 00:06:46.746 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:46.746 [-y verify result if this switch is on] 00:06:46.746 [-a tasks to allocate per core (default: same value as -q)] 00:06:46.746 Can be used to spread operations across a wider range of memory. 
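The option list above corresponds directly to the invocations exercised in this run; for instance, the crc32c pass that follows below could be reproduced by hand along these lines (binary path and flags as they appear elsewhere in this log; the -c config descriptor normally supplied by build_accel_config is omitted here):

  # 1-second software crc32c run: seed 32 (-S), verify results (-y)
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y
  # xor needs at least two source buffers (-x), per the usage text above
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w xor -y -x 2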
00:06:46.746 10:15:31 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:46.746 10:15:31 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:46.746 10:15:31 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:46.746 10:15:31 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:46.746 00:06:46.746 real 0m0.033s 00:06:46.746 user 0m0.020s 00:06:46.746 sys 0m0.013s 00:06:46.746 10:15:31 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:46.746 10:15:31 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:46.746 ************************************ 00:06:46.746 END TEST accel_wrong_workload 00:06:46.746 ************************************ 00:06:46.746 Error: writing output failed: Broken pipe 00:06:46.746 10:15:31 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:46.746 10:15:31 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:46.746 10:15:31 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:46.746 10:15:31 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.746 10:15:31 accel -- common/autotest_common.sh@10 -- # set +x 00:06:46.746 ************************************ 00:06:46.746 START TEST accel_negative_buffers 00:06:46.746 ************************************ 00:06:46.746 10:15:31 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:46.746 10:15:31 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:46.746 10:15:31 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:46.746 10:15:31 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:46.746 10:15:31 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:46.746 10:15:31 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:46.746 10:15:31 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:46.746 10:15:31 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:46.746 10:15:31 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:46.746 10:15:31 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:46.746 10:15:31 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:46.747 10:15:31 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:46.747 10:15:31 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.747 10:15:31 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.747 10:15:31 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:46.747 10:15:31 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:46.747 10:15:31 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:47.006 -x option must be non-negative. 
00:06:47.006 [2024-07-14 10:15:31.746428] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:47.006 accel_perf options: 00:06:47.006 [-h help message] 00:06:47.006 [-q queue depth per core] 00:06:47.006 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:47.006 [-T number of threads per core 00:06:47.006 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:47.006 [-t time in seconds] 00:06:47.006 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:47.006 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:47.006 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:47.006 [-l for compress/decompress workloads, name of uncompressed input file 00:06:47.006 [-S for crc32c workload, use this seed value (default 0) 00:06:47.006 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:47.006 [-f for fill workload, use this BYTE value (default 255) 00:06:47.006 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:47.006 [-y verify result if this switch is on] 00:06:47.006 [-a tasks to allocate per core (default: same value as -q)] 00:06:47.006 Can be used to spread operations across a wider range of memory. 00:06:47.006 10:15:31 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:47.006 10:15:31 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:47.006 10:15:31 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:47.006 10:15:31 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:47.006 00:06:47.006 real 0m0.033s 00:06:47.006 user 0m0.020s 00:06:47.006 sys 0m0.013s 00:06:47.006 10:15:31 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:47.006 10:15:31 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:47.006 ************************************ 00:06:47.006 END TEST accel_negative_buffers 00:06:47.006 ************************************ 00:06:47.006 Error: writing output failed: Broken pipe 00:06:47.006 10:15:31 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:47.006 10:15:31 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:47.006 10:15:31 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:47.006 10:15:31 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.006 10:15:31 accel -- common/autotest_common.sh@10 -- # set +x 00:06:47.006 ************************************ 00:06:47.006 START TEST accel_crc32c 00:06:47.006 ************************************ 00:06:47.006 10:15:31 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:47.006 10:15:31 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:47.006 10:15:31 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:47.006 10:15:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:47.006 10:15:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:47.006 10:15:31 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:47.006 10:15:31 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:47.006 10:15:31 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:47.006 10:15:31 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:47.006 10:15:31 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:47.006 10:15:31 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.006 10:15:31 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.006 10:15:31 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:47.006 10:15:31 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:47.006 10:15:31 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:47.006 [2024-07-14 10:15:31.843476] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:47.006 [2024-07-14 10:15:31.843524] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2219755 ] 00:06:47.006 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.006 [2024-07-14 10:15:31.911653] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.007 [2024-07-14 10:15:31.952391] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.266 10:15:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:47.266 10:15:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:47.266 10:15:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:47.266 10:15:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:47.266 10:15:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:47.266 10:15:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:47.266 10:15:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:47.266 10:15:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:47.266 10:15:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:47.266 10:15:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:47.266 10:15:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:47.266 10:15:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:47.266 10:15:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:47.266 10:15:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:47.266 10:15:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:47.266 10:15:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:47.266 10:15:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:47.266 10:15:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:47.266 10:15:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:47.266 10:15:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:47.266 10:15:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:47.266 10:15:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:47.266 10:15:31 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:47.266 10:15:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:47.266 10:15:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:47.266 10:15:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:47.266 10:15:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:06:47.266 10:15:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:47.266 10:15:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:47.266 10:15:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:47.266 10:15:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:47.266 10:15:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:47.266 10:15:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:47.266 10:15:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:47.266 10:15:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:47.266 10:15:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:47.266 10:15:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:47.267 10:15:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:47.267 10:15:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:47.267 10:15:32 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:47.267 10:15:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:47.267 10:15:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:47.267 10:15:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:47.267 10:15:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:47.267 10:15:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:47.267 10:15:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:47.267 10:15:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:47.267 10:15:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:47.267 10:15:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:47.267 10:15:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:47.267 10:15:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:47.267 10:15:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:47.267 10:15:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:47.267 10:15:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:47.267 10:15:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:47.267 10:15:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:47.267 10:15:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:47.267 10:15:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:47.267 10:15:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:47.267 10:15:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:47.267 10:15:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:47.267 10:15:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:47.267 10:15:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:47.267 10:15:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:47.267 10:15:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:47.267 10:15:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:47.267 10:15:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:47.267 10:15:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:47.267 10:15:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:47.267 10:15:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.205 10:15:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:48.205 10:15:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:06:48.205 10:15:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.205 10:15:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.205 10:15:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:48.205 10:15:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.205 10:15:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.205 10:15:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.205 10:15:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:48.205 10:15:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.205 10:15:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.205 10:15:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.205 10:15:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:48.205 10:15:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.205 10:15:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.205 10:15:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.205 10:15:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:48.205 10:15:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.205 10:15:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.205 10:15:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.205 10:15:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:48.205 10:15:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.205 10:15:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.205 10:15:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.205 10:15:33 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:48.205 10:15:33 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:48.205 10:15:33 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:48.205 00:06:48.205 real 0m1.309s 00:06:48.205 user 0m1.193s 00:06:48.205 sys 0m0.129s 00:06:48.205 10:15:33 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.205 10:15:33 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:48.205 ************************************ 00:06:48.205 END TEST accel_crc32c 00:06:48.205 ************************************ 00:06:48.205 10:15:33 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:48.205 10:15:33 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:48.205 10:15:33 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:48.205 10:15:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.205 10:15:33 accel -- common/autotest_common.sh@10 -- # set +x 00:06:48.463 ************************************ 00:06:48.463 START TEST accel_crc32c_C2 00:06:48.463 ************************************ 00:06:48.463 10:15:33 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:48.463 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:48.463 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:48.463 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.463 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.463 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:48.463 10:15:33 accel.accel_crc32c_C2 
-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:48.463 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:48.463 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:48.463 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:48.463 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.463 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.463 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:48.463 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:48.463 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:48.463 [2024-07-14 10:15:33.218810] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:48.463 [2024-07-14 10:15:33.218857] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2220003 ] 00:06:48.463 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.463 [2024-07-14 10:15:33.285289] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.463 [2024-07-14 10:15:33.325339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.463 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:48.463 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.463 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.463 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.463 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:48.463 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.463 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.463 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.463 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:48.463 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.463 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.463 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.463 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:48.463 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.463 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.463 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.463 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:48.463 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.463 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.463 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.463 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:48.463 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.463 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:48.463 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.463 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.463 10:15:33 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:48.463 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.463 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.463 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.463 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:48.463 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.463 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.463 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.463 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:48.463 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.463 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.463 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.463 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:48.463 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.463 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:48.463 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.463 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.463 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:48.463 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.463 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.463 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.463 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:48.463 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.463 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.463 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.463 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:48.463 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.463 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.463 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.463 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:48.463 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.463 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.463 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.463 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:48.463 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.463 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.463 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.463 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:48.463 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.464 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.464 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.464 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:48.464 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.464 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:06:48.464 10:15:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:49.841 10:15:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:49.841 10:15:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.841 10:15:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:49.841 10:15:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:49.841 10:15:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:49.841 10:15:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.841 10:15:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:49.841 10:15:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:49.841 10:15:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:49.841 10:15:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.841 10:15:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:49.841 10:15:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:49.841 10:15:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:49.841 10:15:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.841 10:15:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:49.841 10:15:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:49.841 10:15:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:49.841 10:15:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.841 10:15:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:49.841 10:15:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:49.841 10:15:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:49.841 10:15:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.841 10:15:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:49.841 10:15:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:49.841 10:15:34 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:49.841 10:15:34 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:49.841 10:15:34 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:49.841 00:06:49.841 real 0m1.305s 00:06:49.841 user 0m1.196s 00:06:49.841 sys 0m0.123s 00:06:49.841 10:15:34 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:49.841 10:15:34 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:49.841 ************************************ 00:06:49.841 END TEST accel_crc32c_C2 00:06:49.841 ************************************ 00:06:49.841 10:15:34 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:49.841 10:15:34 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:49.841 10:15:34 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:49.841 10:15:34 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.841 10:15:34 accel -- common/autotest_common.sh@10 -- # set +x 00:06:49.841 ************************************ 00:06:49.841 START TEST accel_copy 00:06:49.841 ************************************ 00:06:49.841 10:15:34 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 
00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:49.841 [2024-07-14 10:15:34.593262] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:49.841 [2024-07-14 10:15:34.593310] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2220243 ] 00:06:49.841 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.841 [2024-07-14 10:15:34.660682] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.841 [2024-07-14 10:15:34.700506] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@19 -- # 
IFS=: 00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:49.841 10:15:34 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:49.842 10:15:34 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.220 10:15:35 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:51.220 10:15:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.220 10:15:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.220 10:15:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.220 
10:15:35 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:51.220 10:15:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.220 10:15:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.220 10:15:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.220 10:15:35 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:51.220 10:15:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.220 10:15:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.220 10:15:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.220 10:15:35 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:51.220 10:15:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.220 10:15:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.220 10:15:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.220 10:15:35 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:51.220 10:15:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.220 10:15:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.220 10:15:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.220 10:15:35 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:51.220 10:15:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.220 10:15:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.220 10:15:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.220 10:15:35 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:51.220 10:15:35 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:51.220 10:15:35 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:51.220 00:06:51.220 real 0m1.306s 00:06:51.220 user 0m1.196s 00:06:51.220 sys 0m0.124s 00:06:51.220 10:15:35 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:51.220 10:15:35 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:51.220 ************************************ 00:06:51.220 END TEST accel_copy 00:06:51.220 ************************************ 00:06:51.220 10:15:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:51.220 10:15:35 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:51.220 10:15:35 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:51.220 10:15:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.220 10:15:35 accel -- common/autotest_common.sh@10 -- # set +x 00:06:51.220 ************************************ 00:06:51.220 START TEST accel_fill 00:06:51.220 ************************************ 00:06:51.220 10:15:35 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:51.220 10:15:35 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:51.220 10:15:35 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:51.221 10:15:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:51.221 10:15:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:51.221 10:15:35 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:51.221 10:15:35 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:51.221 10:15:35 accel.accel_fill -- accel/accel.sh@12 -- # 
build_accel_config 00:06:51.221 10:15:35 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:51.221 10:15:35 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:51.221 10:15:35 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.221 10:15:35 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.221 10:15:35 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:51.221 10:15:35 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:51.221 10:15:35 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:51.221 [2024-07-14 10:15:35.969094] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:51.221 [2024-07-14 10:15:35.969161] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2220484 ] 00:06:51.221 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.221 [2024-07-14 10:15:36.037156] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.221 [2024-07-14 10:15:36.078114] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.221 10:15:36 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:51.221 10:15:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:51.221 10:15:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:51.221 10:15:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:51.221 10:15:36 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:51.221 10:15:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:51.221 10:15:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:51.221 10:15:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:51.221 10:15:36 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:51.221 10:15:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:51.221 10:15:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:51.221 10:15:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:51.221 10:15:36 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:51.221 10:15:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:51.221 10:15:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:51.221 10:15:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:51.221 10:15:36 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:51.221 10:15:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:51.221 10:15:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:51.221 10:15:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:51.221 10:15:36 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:51.221 10:15:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:51.221 10:15:36 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:51.221 10:15:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:51.221 10:15:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:51.221 10:15:36 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:51.221 10:15:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:51.221 10:15:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:51.221 10:15:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:51.221 10:15:36 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 
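Both the copy run that finished above (real 0m1.306s) and the fill run now starting drive the same example binary, build/examples/accel_perf, through the accel_test wrapper. The visible differences are the -w workload name and, for fill, the extra -f 128 -q 64 -a 64 arguments, which surface in the trace as val=0x80 and two val=64 entries where copy read 32/32. With accel_json_cfg=() empty, the -c /dev/fd/62 config the wrapper passes carries no module selection, so the software engine is used. A hedged sketch of running the same workloads by hand, with the path taken from this log and the flag roles (fill byte, queue depth, allocation size) treated as assumptions:

    # Sketch only: standalone reproduction of the copy and fill runs seen in this log.
    # The -c /dev/fd/62 JSON config is omitted; with the empty config used here that
    # should leave the default software module, but that is an assumption.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path from this log
    "$SPDK_DIR/build/examples/accel_perf" -t 1 -w copy -y
    "$SPDK_DIR/build/examples/accel_perf" -t 1 -w fill -f 128 -q 64 -a 64 -y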
00:06:51.221 10:15:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:51.221 10:15:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:51.221 10:15:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:51.221 10:15:36 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:51.221 10:15:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:51.221 10:15:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:51.221 10:15:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:51.221 10:15:36 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:51.221 10:15:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:51.221 10:15:36 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:51.221 10:15:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:51.221 10:15:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:51.221 10:15:36 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:51.221 10:15:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:51.221 10:15:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:51.221 10:15:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:51.221 10:15:36 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:51.221 10:15:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:51.221 10:15:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:51.221 10:15:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:51.221 10:15:36 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:51.221 10:15:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:51.221 10:15:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:51.221 10:15:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:51.221 10:15:36 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:51.221 10:15:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:51.221 10:15:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:51.221 10:15:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:51.221 10:15:36 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:51.221 10:15:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:51.221 10:15:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:51.221 10:15:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:51.221 10:15:36 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:51.221 10:15:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:51.221 10:15:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:51.221 10:15:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:51.221 10:15:36 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:51.221 10:15:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:51.221 10:15:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:51.221 10:15:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:52.598 10:15:37 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:52.598 10:15:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:52.598 10:15:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:52.598 10:15:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:52.598 10:15:37 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:52.598 10:15:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:52.598 10:15:37 accel.accel_fill 
-- accel/accel.sh@19 -- # IFS=: 00:06:52.598 10:15:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:52.598 10:15:37 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:52.598 10:15:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:52.598 10:15:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:52.598 10:15:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:52.598 10:15:37 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:52.598 10:15:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:52.598 10:15:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:52.598 10:15:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:52.598 10:15:37 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:52.598 10:15:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:52.598 10:15:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:52.598 10:15:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:52.598 10:15:37 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:52.598 10:15:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:52.598 10:15:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:52.598 10:15:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:52.598 10:15:37 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:52.598 10:15:37 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:52.598 10:15:37 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:52.598 00:06:52.598 real 0m1.309s 00:06:52.598 user 0m1.195s 00:06:52.598 sys 0m0.127s 00:06:52.598 10:15:37 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:52.598 10:15:37 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:52.598 ************************************ 00:06:52.598 END TEST accel_fill 00:06:52.598 ************************************ 00:06:52.598 10:15:37 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:52.598 10:15:37 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:52.598 10:15:37 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:52.598 10:15:37 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.598 10:15:37 accel -- common/autotest_common.sh@10 -- # set +x 00:06:52.598 ************************************ 00:06:52.598 START TEST accel_copy_crc32c 00:06:52.598 ************************************ 00:06:52.598 10:15:37 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:06:52.598 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:52.598 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:52.598 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.598 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.598 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:52.598 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:52.598 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:52.598 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:52.598 10:15:37 accel.accel_copy_crc32c -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:52.598 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.598 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.598 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:52.598 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:52.598 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:52.598 [2024-07-14 10:15:37.344087] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:52.598 [2024-07-14 10:15:37.344157] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2220738 ] 00:06:52.598 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.598 [2024-07-14 10:15:37.411370] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.598 [2024-07-14 10:15:37.450862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.598 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:52.598 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.598 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.598 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.598 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:52.598 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.598 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.598 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.598 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:52.598 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.598 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.598 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.598 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:52.598 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.598 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.598 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.598 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:52.598 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.598 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.598 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.598 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:52.598 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.598 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:52.598 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.598 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.598 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:52.599 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.599 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.599 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:06:52.599 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:52.599 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.599 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.599 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.599 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:52.599 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.599 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.599 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.599 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:52.599 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.599 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.599 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.599 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:52.599 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.599 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:52.599 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.599 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.599 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:52.599 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.599 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.599 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.599 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:52.599 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.599 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.599 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.599 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:52.599 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.599 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.599 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.599 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:52.599 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.599 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.599 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.599 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:52.599 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.599 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.599 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.599 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:52.599 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.599 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.599 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.599 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:52.599 
10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.599 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.599 10:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.035 10:15:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:54.035 10:15:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:54.035 10:15:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.035 10:15:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.035 10:15:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:54.035 10:15:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:54.035 10:15:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.035 10:15:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.035 10:15:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:54.035 10:15:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:54.035 10:15:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.035 10:15:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.035 10:15:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:54.035 10:15:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:54.035 10:15:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.035 10:15:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.035 10:15:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:54.035 10:15:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:54.035 10:15:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.035 10:15:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.035 10:15:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:54.035 10:15:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:54.035 10:15:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.035 10:15:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.035 10:15:38 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:54.035 10:15:38 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:54.035 10:15:38 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:54.035 00:06:54.035 real 0m1.306s 00:06:54.035 user 0m1.197s 00:06:54.035 sys 0m0.124s 00:06:54.035 10:15:38 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:54.035 10:15:38 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:54.035 ************************************ 00:06:54.035 END TEST accel_copy_crc32c 00:06:54.035 ************************************ 00:06:54.035 10:15:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:54.035 10:15:38 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:54.035 10:15:38 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:54.035 10:15:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.035 10:15:38 accel -- common/autotest_common.sh@10 -- # set +x 00:06:54.035 ************************************ 00:06:54.035 START TEST accel_copy_crc32c_C2 00:06:54.035 ************************************ 00:06:54.035 10:15:38 
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:54.035 [2024-07-14 10:15:38.720836] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:54.035 [2024-07-14 10:15:38.720906] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2220982 ] 00:06:54.035 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.035 [2024-07-14 10:15:38.790221] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.035 [2024-07-14 10:15:38.834659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
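Each sub-test in this section boots its own single-core accel_perf application: the DPDK EAL parameter line repeats with -c 0x1 and a fresh --file-prefix (spdk_pid2220243, 2220484, 2220738, 2220982 so far), followed by the same "No free 2048 kB hugepages reported on node 1", "Total cores available: 1" and "Reactor started on core 0" notices. When skimming a long run like this, the per-test timings and START/END TEST banners are the quickest health signal; an illustrative one-liner for a saved copy of this log (not part of the SPDK harness, and the file name is only an example):

    # Illustrative only: list each sub-test banner with its wall-clock time.
    grep -E 'real[[:space:]]+[0-9]+m[0-9.]+s|(START|END) TEST' nvmf-tcp-phy-autotest.log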
00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:54.035 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.036 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.036 10:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.410 10:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:55.410 10:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.410 10:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.410 10:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.410 10:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:55.410 10:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.410 10:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.410 10:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.410 10:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:55.411 10:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.411 10:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.411 10:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.411 10:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:55.411 10:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.411 10:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.411 10:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.411 10:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:55.411 10:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.411 10:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.411 10:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.411 10:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:55.411 10:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.411 10:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.411 10:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:06:55.411 10:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:55.411 10:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:55.411 10:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:55.411 00:06:55.411 real 0m1.317s 00:06:55.411 user 0m1.206s 00:06:55.411 sys 0m0.126s 00:06:55.411 10:15:40 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:55.411 10:15:40 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:55.411 ************************************ 00:06:55.411 END TEST accel_copy_crc32c_C2 00:06:55.411 ************************************ 00:06:55.411 10:15:40 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:55.411 10:15:40 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:55.411 10:15:40 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:55.411 10:15:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.411 10:15:40 accel -- common/autotest_common.sh@10 -- # set +x 00:06:55.411 ************************************ 00:06:55.411 START TEST accel_dualcast 00:06:55.411 ************************************ 00:06:55.411 10:15:40 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:55.411 [2024-07-14 10:15:40.098992] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
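The -C 2 variant of copy_crc32c that just completed (real 0m1.317s) is the only run in this stretch whose trace reads two different buffer sizes, '4096 bytes' and '8192 bytes'; that is consistent with the destination region doubling when two 4 KiB sources are involved, although this log never states the flag's meaning, so treat that reading as an inference. The invocation, copied from the run_test line above, under the same path assumption as the earlier sketch:

    # Sketch only: the copy_crc32c -C 2 run; -C 2 is taken verbatim from the run_test line,
    # its interpretation (number of chained source buffers) is an assumption.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK_DIR/build/examples/accel_perf" -t 1 -w copy_crc32c -y -C 2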
00:06:55.411 [2024-07-14 10:15:40.099038] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2221234 ] 00:06:55.411 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.411 [2024-07-14 10:15:40.166208] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.411 [2024-07-14 10:15:40.205632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:55.411 10:15:40 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:56.787 10:15:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:56.787 10:15:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:56.787 10:15:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:56.787 10:15:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:56.787 10:15:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:56.787 10:15:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:56.787 10:15:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:56.787 10:15:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:56.787 10:15:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:56.787 10:15:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:56.787 10:15:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:56.787 10:15:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:56.787 10:15:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:56.787 10:15:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:56.787 10:15:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:56.787 10:15:41 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:56.787 10:15:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:56.787 10:15:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:56.787 10:15:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:56.787 10:15:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:56.787 10:15:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:56.787 10:15:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:56.787 10:15:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:56.787 10:15:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:56.787 10:15:41 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:56.787 10:15:41 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:56.787 10:15:41 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:56.787 00:06:56.787 real 0m1.302s 00:06:56.787 user 0m1.199s 00:06:56.787 sys 0m0.116s 00:06:56.787 10:15:41 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:56.787 10:15:41 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:56.787 ************************************ 00:06:56.787 END TEST accel_dualcast 00:06:56.787 ************************************ 00:06:56.787 10:15:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:56.787 10:15:41 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:56.787 10:15:41 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:56.787 10:15:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.787 10:15:41 accel -- common/autotest_common.sh@10 -- # set +x 00:06:56.787 ************************************ 00:06:56.787 START TEST accel_compare 00:06:56.787 ************************************ 00:06:56.787 10:15:41 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:06:56.787 10:15:41 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:56.787 10:15:41 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:56.787 10:15:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:56.787 10:15:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:56.787 10:15:41 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:56.787 10:15:41 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:56.787 10:15:41 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:56.787 10:15:41 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:56.787 10:15:41 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:56.787 10:15:41 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.787 10:15:41 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.787 10:15:41 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:56.788 10:15:41 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:56.788 10:15:41 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:56.788 [2024-07-14 10:15:41.472896] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
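dualcast completes with the same profile as the earlier opcodes, roughly 1.30 s real and 1.20 s user for a one-second software-module run, and compare is started next with the plain -t 1 -w compare -y arguments. For a quick local smoke pass over the opcodes this section exercises with plain flags, the same invocations can be looped directly; a sketch under the same path assumption as above, not a replacement for the accel.sh harness:

    # Illustrative smoke loop over workloads invoked with plain flags in this log.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    for w in copy copy_crc32c dualcast compare xor; do
        echo "=== $w ==="
        "$SPDK_DIR/build/examples/accel_perf" -t 1 -w "$w" -y || echo "$w FAILED"
    done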
00:06:56.788 [2024-07-14 10:15:41.472961] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2221481 ] 00:06:56.788 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.788 [2024-07-14 10:15:41.540653] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.788 [2024-07-14 10:15:41.580044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.788 10:15:41 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:56.788 10:15:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:56.788 10:15:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:56.788 10:15:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:56.788 10:15:41 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:56.788 10:15:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:56.788 10:15:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:56.788 10:15:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:56.788 10:15:41 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:56.788 10:15:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:56.788 10:15:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:56.788 10:15:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:56.788 10:15:41 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:56.788 10:15:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:56.788 10:15:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:56.788 10:15:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:56.788 10:15:41 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:56.788 10:15:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:56.788 10:15:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:56.788 10:15:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:56.788 10:15:41 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:56.788 10:15:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:56.788 10:15:41 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:56.788 10:15:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:56.788 10:15:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:56.788 10:15:41 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:56.788 10:15:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:56.788 10:15:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:56.788 10:15:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:56.788 10:15:41 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:56.788 10:15:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:56.788 10:15:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:56.788 10:15:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:56.788 10:15:41 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:56.788 10:15:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:56.788 10:15:41 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:56.788 10:15:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:56.788 10:15:41 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:56.788 10:15:41 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:56.788 10:15:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:56.788 10:15:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:56.788 10:15:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:56.788 10:15:41 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:56.788 10:15:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:56.788 10:15:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:56.788 10:15:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:56.788 10:15:41 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:56.788 10:15:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:56.788 10:15:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:56.788 10:15:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:56.788 10:15:41 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:56.788 10:15:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:56.788 10:15:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:56.788 10:15:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:56.788 10:15:41 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:56.788 10:15:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:56.788 10:15:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:56.788 10:15:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:56.788 10:15:41 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:56.788 10:15:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:56.788 10:15:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:56.788 10:15:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:56.788 10:15:41 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:56.788 10:15:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:56.788 10:15:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:56.788 10:15:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:58.166 10:15:42 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:58.166 10:15:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:58.166 10:15:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:58.166 10:15:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:58.166 10:15:42 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:58.166 10:15:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:58.166 10:15:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:58.166 10:15:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:58.166 10:15:42 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:58.166 10:15:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:58.166 10:15:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:58.166 10:15:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:58.166 10:15:42 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:58.166 10:15:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:58.166 10:15:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:58.166 10:15:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:58.166 
10:15:42 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:58.166 10:15:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:58.166 10:15:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:58.166 10:15:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:58.166 10:15:42 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:58.166 10:15:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:58.166 10:15:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:58.166 10:15:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:58.166 10:15:42 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:58.166 10:15:42 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:58.166 10:15:42 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:58.166 00:06:58.166 real 0m1.305s 00:06:58.166 user 0m1.198s 00:06:58.166 sys 0m0.121s 00:06:58.166 10:15:42 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.166 10:15:42 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:58.166 ************************************ 00:06:58.166 END TEST accel_compare 00:06:58.166 ************************************ 00:06:58.166 10:15:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:58.166 10:15:42 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:58.166 10:15:42 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:58.166 10:15:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.166 10:15:42 accel -- common/autotest_common.sh@10 -- # set +x 00:06:58.166 ************************************ 00:06:58.166 START TEST accel_xor 00:06:58.166 ************************************ 00:06:58.167 10:15:42 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:06:58.167 10:15:42 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:58.167 10:15:42 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:58.167 10:15:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:58.167 10:15:42 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:58.167 10:15:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:58.167 10:15:42 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:58.167 10:15:42 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:58.167 10:15:42 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:58.167 10:15:42 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:58.167 10:15:42 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.167 10:15:42 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.167 10:15:42 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:58.167 10:15:42 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:58.167 10:15:42 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:58.167 [2024-07-14 10:15:42.846347] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
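The repeated "IFS=:", "read -r var val" and "case \"$var\" in" entries in these traces are one shell loop in accel.sh parsing accel_perf's printed configuration into accel_module and accel_opc. A minimal sketch of that pattern (the exact keys matched and how the tool's output is piped in are assumptions; the printf below only stands in for accel_perf's real output):

parse_accel_config() {
    # Read "key: value" lines and remember the module and the opcode,
    # the same shape as the accel_module=/accel_opc= assignments in the trace.
    local accel_module="" accel_opc="" var val
    while IFS=: read -r var val; do
        case "$var" in
            *module*) accel_module=${val//[[:space:]]/} ;;
            *workload* | *opcode*) accel_opc=${val//[[:space:]]/} ;;
        esac
    done
    echo "module=$accel_module opcode=$accel_opc"
}

# Stand-in input; the harness feeds accel_perf's output instead.
printf '%s\n' 'module: software' 'workload: xor' | parse_accel_config
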
00:06:58.167 [2024-07-14 10:15:42.846414] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2221728 ] 00:06:58.167 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.167 [2024-07-14 10:15:42.914481] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.167 [2024-07-14 10:15:42.954112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.167 10:15:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:58.167 10:15:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:58.167 10:15:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:58.167 10:15:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:58.167 10:15:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:58.167 10:15:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:58.167 10:15:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:58.167 10:15:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:58.167 10:15:42 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:58.167 10:15:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:58.167 10:15:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:58.167 10:15:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:58.167 10:15:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:58.167 10:15:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:58.167 10:15:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:58.167 10:15:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:58.167 10:15:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:58.167 10:15:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:58.167 10:15:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:58.167 10:15:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:58.167 10:15:43 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:58.167 10:15:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:58.167 10:15:43 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:58.167 10:15:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:58.167 10:15:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:58.167 10:15:43 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:58.167 10:15:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:58.167 10:15:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:58.167 10:15:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:58.167 10:15:43 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:58.167 10:15:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:58.167 10:15:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:58.167 10:15:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:58.167 10:15:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:58.167 10:15:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:58.167 10:15:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:58.167 10:15:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:58.167 10:15:43 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:58.167 10:15:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:58.167 10:15:43 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:06:58.167 10:15:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:58.167 10:15:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:58.167 10:15:43 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:58.167 10:15:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:58.167 10:15:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:58.167 10:15:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:58.167 10:15:43 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:58.167 10:15:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:58.167 10:15:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:58.167 10:15:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:58.167 10:15:43 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:58.167 10:15:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:58.167 10:15:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:58.167 10:15:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:58.167 10:15:43 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:58.167 10:15:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:58.167 10:15:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:58.167 10:15:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:58.167 10:15:43 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:58.167 10:15:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:58.167 10:15:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:58.167 10:15:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:58.167 10:15:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:58.167 10:15:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:58.167 10:15:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:58.167 10:15:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:58.167 10:15:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:58.167 10:15:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:58.167 10:15:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:58.167 10:15:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.543 10:15:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:59.543 10:15:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.543 10:15:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.543 10:15:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.543 10:15:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:59.543 10:15:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.543 10:15:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.543 10:15:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.543 10:15:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:59.543 10:15:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.543 10:15:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.543 10:15:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.543 10:15:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:59.543 10:15:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.543 10:15:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.543 10:15:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.543 10:15:44 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:06:59.543 10:15:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.543 10:15:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.543 10:15:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.543 10:15:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:59.543 10:15:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.543 10:15:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.543 10:15:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.543 10:15:44 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:59.543 10:15:44 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:59.543 10:15:44 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:59.543 00:06:59.543 real 0m1.307s 00:06:59.543 user 0m1.194s 00:06:59.543 sys 0m0.127s 00:06:59.543 10:15:44 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:59.543 10:15:44 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:59.543 ************************************ 00:06:59.543 END TEST accel_xor 00:06:59.543 ************************************ 00:06:59.543 10:15:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:59.543 10:15:44 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:59.543 10:15:44 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:59.543 10:15:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.543 10:15:44 accel -- common/autotest_common.sh@10 -- # set +x 00:06:59.543 ************************************ 00:06:59.543 START TEST accel_xor 00:06:59.543 ************************************ 00:06:59.543 10:15:44 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:06:59.543 10:15:44 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:59.543 10:15:44 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:59.543 10:15:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.543 10:15:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.543 10:15:44 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:59.543 10:15:44 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:59.543 10:15:44 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:59.543 10:15:44 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:59.543 10:15:44 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:59.543 10:15:44 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.543 10:15:44 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.543 10:15:44 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:59.543 10:15:44 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:59.543 10:15:44 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:59.543 [2024-07-14 10:15:44.222027] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
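This second accel_xor pass adds -x 3 to the accel_perf command line, asking it to XOR three source buffers rather than the two used in the previous run (judging by the val=2 earlier versus val=3 in the following trace). A minimal sketch of the two invocations side by side, under the same assumption as before about running the example outside the harness:

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Default: XOR two source buffers into the destination.
"$SPDK_DIR/build/examples/accel_perf" -t 1 -w xor -y
# -x 3: XOR three source buffers, as exercised by this second test.
"$SPDK_DIR/build/examples/accel_perf" -t 1 -w xor -y -x 3
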
00:06:59.544 [2024-07-14 10:15:44.222076] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2221979 ] 00:06:59.544 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.544 [2024-07-14 10:15:44.289716] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.544 [2024-07-14 10:15:44.330393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.544 10:15:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:59.544 10:15:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.544 10:15:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.544 10:15:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.544 10:15:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:59.544 10:15:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.544 10:15:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.544 10:15:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.544 10:15:44 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:59.544 10:15:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.544 10:15:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.544 10:15:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.544 10:15:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:59.544 10:15:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.544 10:15:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.544 10:15:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.544 10:15:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:59.544 10:15:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.544 10:15:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.544 10:15:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.544 10:15:44 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:59.544 10:15:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.544 10:15:44 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:59.544 10:15:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.544 10:15:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.544 10:15:44 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:59.544 10:15:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.544 10:15:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.544 10:15:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.544 10:15:44 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:59.544 10:15:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.544 10:15:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.544 10:15:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.544 10:15:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:59.544 10:15:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.544 10:15:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.544 10:15:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.544 10:15:44 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:59.544 10:15:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.544 10:15:44 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:06:59.544 10:15:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.544 10:15:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.544 10:15:44 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:59.544 10:15:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.544 10:15:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.544 10:15:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.544 10:15:44 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:59.544 10:15:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.544 10:15:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.544 10:15:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.544 10:15:44 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:59.544 10:15:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.544 10:15:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.544 10:15:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.544 10:15:44 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:59.544 10:15:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.544 10:15:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.544 10:15:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.544 10:15:44 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:59.544 10:15:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.544 10:15:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.544 10:15:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.544 10:15:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:59.544 10:15:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.544 10:15:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.544 10:15:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.544 10:15:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:59.544 10:15:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.544 10:15:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.544 10:15:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.924 10:15:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:00.924 10:15:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.924 10:15:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.924 10:15:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.924 10:15:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:00.924 10:15:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.924 10:15:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.924 10:15:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.924 10:15:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:00.924 10:15:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.924 10:15:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.924 10:15:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.924 10:15:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:00.924 10:15:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.924 10:15:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.924 10:15:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.924 10:15:45 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:07:00.924 10:15:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.924 10:15:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.924 10:15:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.924 10:15:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:00.924 10:15:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.924 10:15:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.924 10:15:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.924 10:15:45 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:00.924 10:15:45 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:00.924 10:15:45 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:00.924 00:07:00.924 real 0m1.307s 00:07:00.924 user 0m1.196s 00:07:00.924 sys 0m0.124s 00:07:00.924 10:15:45 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:00.924 10:15:45 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:00.924 ************************************ 00:07:00.924 END TEST accel_xor 00:07:00.924 ************************************ 00:07:00.924 10:15:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:00.924 10:15:45 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:00.924 10:15:45 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:00.924 10:15:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.924 10:15:45 accel -- common/autotest_common.sh@10 -- # set +x 00:07:00.924 ************************************ 00:07:00.924 START TEST accel_dif_verify 00:07:00.924 ************************************ 00:07:00.924 10:15:45 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:07:00.924 10:15:45 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:00.924 10:15:45 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:00.924 10:15:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:00.924 10:15:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:00.924 10:15:45 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:00.924 10:15:45 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:00.924 10:15:45 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:00.924 10:15:45 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:00.924 10:15:45 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:00.924 10:15:45 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.924 10:15:45 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.924 10:15:45 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:00.924 10:15:45 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:00.924 10:15:45 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:00.924 [2024-07-14 10:15:45.596852] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
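Each case here is driven through run_test, which prints the START TEST / END TEST banners and the real/user/sys lines via bash's time builtin. A rough approximation of that wrapper (the real run_test helper in the SPDK test scripts does more, e.g. the xtrace toggling visible above; this sketch only reproduces the banner-and-timing shape):

run_test_sketch() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}

run_test_sketch accel_dif_verify \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w dif_verify
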
00:07:00.924 [2024-07-14 10:15:45.596921] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2222224 ] 00:07:00.924 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.924 [2024-07-14 10:15:45.665240] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.924 [2024-07-14 10:15:45.705558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.924 10:15:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:00.924 10:15:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:00.924 10:15:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:00.924 10:15:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:00.924 10:15:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:00.924 10:15:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:00.924 10:15:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:00.924 10:15:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:00.924 10:15:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:00.924 10:15:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:00.924 10:15:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:00.924 10:15:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:00.924 10:15:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:00.924 10:15:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:00.924 10:15:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:00.924 10:15:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:00.924 10:15:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:00.924 10:15:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:00.924 10:15:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:00.924 10:15:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:00.924 10:15:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:00.924 10:15:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:00.924 10:15:45 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:00.924 10:15:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:00.924 10:15:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:00.924 10:15:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:00.924 10:15:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:00.924 10:15:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:00.924 10:15:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:00.924 10:15:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:00.924 10:15:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:00.924 10:15:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:00.924 10:15:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:00.924 10:15:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:00.924 10:15:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:00.924 10:15:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:07:00.924 10:15:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:00.924 10:15:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:00.924 10:15:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:00.924 10:15:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:00.924 10:15:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:00.924 10:15:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:00.924 10:15:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:00.924 10:15:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:00.924 10:15:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:00.924 10:15:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:00.924 10:15:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:00.924 10:15:45 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:00.924 10:15:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:00.924 10:15:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:00.924 10:15:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:00.924 10:15:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:00.924 10:15:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:00.924 10:15:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:00.924 10:15:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:00.924 10:15:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:00.924 10:15:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:00.925 10:15:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:00.925 10:15:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:00.925 10:15:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:00.925 10:15:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:00.925 10:15:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:00.925 10:15:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:00.925 10:15:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:00.925 10:15:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:00.925 10:15:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:00.925 10:15:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:00.925 10:15:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:00.925 10:15:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:00.925 10:15:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:00.925 10:15:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:00.925 10:15:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:00.925 10:15:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:00.925 10:15:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:00.925 10:15:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:00.925 10:15:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:00.925 10:15:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:00.925 10:15:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:02.304 10:15:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:07:02.304 10:15:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:02.304 10:15:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:02.304 10:15:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:02.304 10:15:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:02.304 10:15:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:02.304 10:15:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:02.304 10:15:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:02.304 10:15:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:02.304 10:15:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:02.304 10:15:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:02.304 10:15:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:02.304 10:15:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:02.304 10:15:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:02.304 10:15:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:02.304 10:15:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:02.304 10:15:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:02.304 10:15:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:02.304 10:15:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:02.304 10:15:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:02.304 10:15:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:02.304 10:15:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:02.304 10:15:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:02.304 10:15:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:02.304 10:15:46 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:02.304 10:15:46 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:02.304 10:15:46 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:02.304 00:07:02.304 real 0m1.309s 00:07:02.304 user 0m1.198s 00:07:02.304 sys 0m0.126s 00:07:02.304 10:15:46 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:02.304 10:15:46 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:02.304 ************************************ 00:07:02.304 END TEST accel_dif_verify 00:07:02.304 ************************************ 00:07:02.304 10:15:46 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:02.304 10:15:46 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:02.304 10:15:46 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:02.304 10:15:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.304 10:15:46 accel -- common/autotest_common.sh@10 -- # set +x 00:07:02.304 ************************************ 00:07:02.304 START TEST accel_dif_generate 00:07:02.304 ************************************ 00:07:02.304 10:15:46 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:07:02.304 10:15:46 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:02.304 10:15:46 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:02.304 10:15:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:02.304 
10:15:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:02.305 10:15:46 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:02.305 10:15:46 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:02.305 10:15:46 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:02.305 10:15:46 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:02.305 10:15:46 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:02.305 10:15:46 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.305 10:15:46 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.305 10:15:46 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:02.305 10:15:46 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:02.305 10:15:46 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:02.305 [2024-07-14 10:15:46.975212] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:07:02.305 [2024-07-14 10:15:46.975277] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2222469 ] 00:07:02.305 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.305 [2024-07-14 10:15:47.046876] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.305 [2024-07-14 10:15:47.088072] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.305 10:15:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:02.305 10:15:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:02.305 10:15:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:02.305 10:15:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:02.305 10:15:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:02.305 10:15:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:02.305 10:15:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:02.305 10:15:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:02.305 10:15:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:02.305 10:15:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:02.305 10:15:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:02.305 10:15:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:02.305 10:15:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:02.305 10:15:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:02.305 10:15:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:02.305 10:15:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:02.305 10:15:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:02.305 10:15:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:02.305 10:15:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:02.305 10:15:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:02.305 10:15:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:02.305 10:15:47 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:02.305 10:15:47 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:02.305 10:15:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:02.305 10:15:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:02.305 10:15:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:02.305 10:15:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:02.305 10:15:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:02.305 10:15:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:02.305 10:15:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:02.305 10:15:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:02.305 10:15:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:02.305 10:15:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:02.305 10:15:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:02.305 10:15:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:02.305 10:15:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:02.305 10:15:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:02.305 10:15:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:02.305 10:15:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:02.305 10:15:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:02.305 10:15:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:02.305 10:15:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:02.305 10:15:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:02.305 10:15:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:02.305 10:15:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:02.305 10:15:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:02.305 10:15:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:02.305 10:15:47 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:02.305 10:15:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:02.305 10:15:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:02.305 10:15:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:02.305 10:15:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:02.305 10:15:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:02.305 10:15:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:02.305 10:15:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:02.305 10:15:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:02.305 10:15:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:02.305 10:15:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:02.305 10:15:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:02.305 10:15:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:02.305 10:15:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:02.305 10:15:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:02.305 10:15:47 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:07:02.305 10:15:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:02.305 10:15:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:02.305 10:15:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:02.305 10:15:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:02.305 10:15:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:02.305 10:15:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:02.305 10:15:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:02.305 10:15:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:02.305 10:15:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:02.305 10:15:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:02.305 10:15:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:02.305 10:15:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:02.305 10:15:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:02.305 10:15:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:02.305 10:15:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:03.685 10:15:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:03.685 10:15:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:03.685 10:15:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:03.685 10:15:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:03.685 10:15:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:03.685 10:15:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:03.685 10:15:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:03.685 10:15:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:03.685 10:15:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:03.685 10:15:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:03.685 10:15:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:03.685 10:15:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:03.685 10:15:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:03.685 10:15:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:03.685 10:15:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:03.685 10:15:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:03.685 10:15:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:03.685 10:15:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:03.685 10:15:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:03.685 10:15:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:03.685 10:15:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:03.685 10:15:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:03.685 10:15:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:03.685 10:15:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:03.685 10:15:48 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:03.685 10:15:48 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:03.685 10:15:48 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:03.685 00:07:03.685 real 0m1.314s 00:07:03.685 user 0m1.204s 00:07:03.685 sys 0m0.125s 00:07:03.685 10:15:48 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:03.685 10:15:48 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:03.685 ************************************ 00:07:03.685 END TEST accel_dif_generate 00:07:03.685 ************************************ 00:07:03.685 10:15:48 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:03.685 10:15:48 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:03.685 10:15:48 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:03.685 10:15:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.685 10:15:48 accel -- common/autotest_common.sh@10 -- # set +x 00:07:03.685 ************************************ 00:07:03.685 START TEST accel_dif_generate_copy 00:07:03.685 ************************************ 00:07:03.685 10:15:48 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:07:03.685 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:03.685 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:03.685 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.685 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.685 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:03.685 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:03.685 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:03.685 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:03.685 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:03.685 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.685 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.685 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:03.685 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:03.685 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:03.685 [2024-07-14 10:15:48.357184] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
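The dif_verify, dif_generate and dif_generate_copy cases differ only in the -w argument handed to accel_perf, so the sequence can be sketched as a small loop (paths and flags come from the log; invoking them back to back like this, without run_test or the JSON config feed, is an assumption):

#!/usr/bin/env bash
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Run each DIF-family workload for 1 second, as the harness does one by one.
for wl in dif_verify dif_generate dif_generate_copy; do
    echo "== accel_perf workload: $wl =="
    "$SPDK_DIR/build/examples/accel_perf" -t 1 -w "$wl"
done
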
00:07:03.685 [2024-07-14 10:15:48.357267] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2222721 ] 00:07:03.685 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.685 [2024-07-14 10:15:48.425703] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.685 [2024-07-14 10:15:48.465626] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.685 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:03.685 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.685 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.686 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.686 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:03.686 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.686 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.686 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.686 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:03.686 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.686 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.686 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.686 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:03.686 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.686 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.686 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.686 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:03.686 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.686 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.686 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.686 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:03.686 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.686 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:03.686 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.686 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.686 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:03.686 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.686 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.686 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.686 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:03.686 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.686 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.686 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var 
val 00:07:03.686 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:03.686 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.686 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.686 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.686 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:03.686 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.686 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:03.686 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.686 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.686 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:03.686 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.686 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.686 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.686 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:03.686 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.686 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.686 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.686 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:03.686 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.686 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.686 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.686 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:03.686 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.686 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.686 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.686 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:03.686 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.686 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.686 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.686 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:03.686 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.686 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.686 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.686 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:03.686 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.686 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.686 10:15:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.065 10:15:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:05.065 10:15:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.065 10:15:49 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:07:05.065 10:15:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.065 10:15:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:05.065 10:15:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.065 10:15:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.065 10:15:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.065 10:15:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:05.065 10:15:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.065 10:15:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.065 10:15:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.065 10:15:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:05.065 10:15:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.065 10:15:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.065 10:15:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.065 10:15:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:05.065 10:15:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.065 10:15:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.065 10:15:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.065 10:15:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:05.065 10:15:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.065 10:15:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.065 10:15:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.066 10:15:49 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:05.066 10:15:49 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:05.066 10:15:49 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:05.066 00:07:05.066 real 0m1.309s 00:07:05.066 user 0m1.198s 00:07:05.066 sys 0m0.125s 00:07:05.066 10:15:49 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.066 10:15:49 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:05.066 ************************************ 00:07:05.066 END TEST accel_dif_generate_copy 00:07:05.066 ************************************ 00:07:05.066 10:15:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:05.066 10:15:49 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:05.066 10:15:49 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:05.066 10:15:49 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:05.066 10:15:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.066 10:15:49 accel -- common/autotest_common.sh@10 -- # set +x 00:07:05.066 ************************************ 00:07:05.066 START TEST accel_comp 00:07:05.066 ************************************ 00:07:05.066 10:15:49 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:05.066 10:15:49 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:05.066 [2024-07-14 10:15:49.734283] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:07:05.066 [2024-07-14 10:15:49.734358] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2222965 ] 00:07:05.066 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.066 [2024-07-14 10:15:49.802124] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.066 [2024-07-14 10:15:49.842133] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:05.066 10:15:49 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:05.066 10:15:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:06.444 10:15:51 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:06.444 10:15:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.444 10:15:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:06.444 10:15:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:06.444 10:15:51 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:06.444 10:15:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.444 10:15:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:06.444 10:15:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:06.444 10:15:51 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:06.444 10:15:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.444 10:15:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:06.444 10:15:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:06.444 10:15:51 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:06.444 10:15:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.444 10:15:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:06.444 10:15:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:06.444 10:15:51 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:06.444 10:15:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.444 10:15:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:06.444 10:15:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:06.444 10:15:51 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:06.444 10:15:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.444 10:15:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:06.444 10:15:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:06.444 10:15:51 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:06.444 10:15:51 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:06.444 10:15:51 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:06.444 00:07:06.444 real 0m1.311s 00:07:06.444 user 0m1.199s 00:07:06.444 sys 0m0.125s 00:07:06.444 10:15:51 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:06.444 10:15:51 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:06.444 ************************************ 00:07:06.444 END TEST accel_comp 00:07:06.444 ************************************ 00:07:06.444 10:15:51 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:06.444 10:15:51 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:06.444 10:15:51 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:06.444 10:15:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.444 10:15:51 accel -- 
common/autotest_common.sh@10 -- # set +x 00:07:06.444 ************************************ 00:07:06.444 START TEST accel_decomp 00:07:06.444 ************************************ 00:07:06.444 10:15:51 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:06.444 10:15:51 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:06.444 10:15:51 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:06.444 10:15:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:06.444 10:15:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:06.444 10:15:51 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:06.444 10:15:51 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:06.444 10:15:51 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:06.444 10:15:51 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:06.444 10:15:51 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:06.444 10:15:51 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.444 10:15:51 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.444 10:15:51 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:06.445 10:15:51 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:06.445 10:15:51 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:06.445 [2024-07-14 10:15:51.111003] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
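For reference, the accel_decomp case that is initializing here drives the same accel_perf example binary as accel_comp above, with the workload switched to decompress and the extra -y flag that accel.sh passes for the decompress cases. Below is a minimal sketch of the invocation reconstructed from the traced arguments; SPDK_DIR is only shorthand for the workspace path shown in the trace, the flag readings (-t 1 as "1 seconds", -w decompress as the traced accel_opc, -y as a verify switch) are inferences from the traced values rather than documented descriptions, and the -c /dev/fd/62 JSON accel config the harness supplies is omitted.

  # Hedged sketch of the accel_decomp step, outside the test harness (assumed paths).
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # -t 1          : run the workload for 1 second (traced as '1 seconds')
  # -w decompress : software decompress operation (traced accel_opc=decompress)
  # -l .../bib    : input file under test, the same bib file used by accel_comp
  # -y            : flag accel.sh adds for decompress cases (verify, by assumption)
  "$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress \
      -l "$SPDK_DIR/test/accel/bib" -y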
00:07:06.445 [2024-07-14 10:15:51.111071] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2223208 ] 00:07:06.445 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.445 [2024-07-14 10:15:51.179608] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.445 [2024-07-14 10:15:51.221907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.445 10:15:51 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:06.445 10:15:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.445 10:15:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:06.445 10:15:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:06.445 10:15:51 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:06.445 10:15:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.445 10:15:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:06.445 10:15:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:06.445 10:15:51 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:06.445 10:15:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.445 10:15:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:06.445 10:15:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:06.445 10:15:51 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:06.445 10:15:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.445 10:15:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:06.445 10:15:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:06.445 10:15:51 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:06.445 10:15:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.445 10:15:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:06.445 10:15:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:06.445 10:15:51 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:06.445 10:15:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.445 10:15:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:06.445 10:15:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:06.445 10:15:51 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:06.445 10:15:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.445 10:15:51 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:06.445 10:15:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:06.445 10:15:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:06.445 10:15:51 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:06.445 10:15:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.445 10:15:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:06.445 10:15:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:06.445 10:15:51 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:06.445 10:15:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.445 10:15:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:06.445 10:15:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:06.445 10:15:51 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:07:06.445 10:15:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.445 10:15:51 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:06.445 10:15:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:06.445 10:15:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:06.445 10:15:51 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:06.445 10:15:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.445 10:15:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:06.445 10:15:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:06.445 10:15:51 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:06.445 10:15:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.445 10:15:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:06.445 10:15:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:06.445 10:15:51 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:06.445 10:15:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.445 10:15:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:06.445 10:15:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:06.445 10:15:51 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:06.445 10:15:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.445 10:15:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:06.445 10:15:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:06.445 10:15:51 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:06.445 10:15:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.445 10:15:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:06.445 10:15:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:06.445 10:15:51 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:06.445 10:15:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.445 10:15:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:06.445 10:15:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:06.445 10:15:51 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:06.445 10:15:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.445 10:15:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:06.445 10:15:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:06.445 10:15:51 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:06.445 10:15:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.445 10:15:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:06.445 10:15:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:07.823 10:15:52 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:07.823 10:15:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:07.823 10:15:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:07.823 10:15:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:07.823 10:15:52 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:07.823 10:15:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:07.823 10:15:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:07.823 10:15:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:07.823 10:15:52 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:07.823 10:15:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:07.823 10:15:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:07.823 10:15:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:07.823 10:15:52 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:07.823 10:15:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:07.823 10:15:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:07.823 10:15:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:07.823 10:15:52 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:07.823 10:15:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:07.823 10:15:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:07.823 10:15:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:07.823 10:15:52 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:07.823 10:15:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:07.823 10:15:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:07.823 10:15:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:07.823 10:15:52 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:07.823 10:15:52 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:07.823 10:15:52 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:07.823 00:07:07.823 real 0m1.315s 00:07:07.823 user 0m1.201s 00:07:07.823 sys 0m0.129s 00:07:07.823 10:15:52 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.823 10:15:52 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:07.823 ************************************ 00:07:07.823 END TEST accel_decomp 00:07:07.823 ************************************ 00:07:07.823 10:15:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:07.823 10:15:52 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:07.823 10:15:52 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:07.824 10:15:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.824 10:15:52 accel -- common/autotest_common.sh@10 -- # set +x 00:07:07.824 ************************************ 00:07:07.824 START TEST accel_decomp_full 00:07:07.824 ************************************ 00:07:07.824 10:15:52 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:07.824 10:15:52 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:07:07.824 [2024-07-14 10:15:52.492648] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:07:07.824 [2024-07-14 10:15:52.492717] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2223461 ] 00:07:07.824 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.824 [2024-07-14 10:15:52.563373] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.824 [2024-07-14 10:15:52.603728] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:07.824 10:15:52 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:07.824 10:15:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:09.204 10:15:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:09.204 10:15:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:09.204 10:15:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:09.204 10:15:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:09.204 10:15:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:09.204 10:15:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:09.204 10:15:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:09.204 10:15:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:09.204 10:15:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:09.204 10:15:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:09.204 10:15:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:09.204 10:15:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:09.204 10:15:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:09.204 10:15:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:09.204 10:15:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:09.204 10:15:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:09.204 10:15:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:09.204 10:15:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:09.204 10:15:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:09.204 10:15:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:09.204 10:15:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:09.204 10:15:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:09.204 10:15:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:09.204 10:15:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:09.204 10:15:53 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:09.204 10:15:53 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:09.204 10:15:53 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:09.204 00:07:09.204 real 0m1.321s 00:07:09.204 user 0m1.205s 00:07:09.204 sys 0m0.129s 00:07:09.204 10:15:53 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:09.204 10:15:53 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:07:09.204 ************************************ 00:07:09.204 END TEST accel_decomp_full 00:07:09.204 ************************************ 00:07:09.204 10:15:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:09.204 10:15:53 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:09.204 10:15:53 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 
00:07:09.204 10:15:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.204 10:15:53 accel -- common/autotest_common.sh@10 -- # set +x 00:07:09.204 ************************************ 00:07:09.204 START TEST accel_decomp_mcore 00:07:09.204 ************************************ 00:07:09.204 10:15:53 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:09.204 10:15:53 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:09.204 10:15:53 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:09.204 10:15:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.204 10:15:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.204 10:15:53 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:09.204 10:15:53 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:09.204 10:15:53 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:09.204 10:15:53 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:09.204 10:15:53 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:09.204 10:15:53 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.204 10:15:53 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.204 10:15:53 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:09.204 10:15:53 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:09.204 10:15:53 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:09.204 [2024-07-14 10:15:53.879041] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
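The accel_decomp_mcore run starting here is the same decompress workload with the reactor core mask widened: run_test passes -m 0xf, and the notices that follow show four reactors coming up on cores 0-3 instead of the single core 0 used by the earlier cases. A hedged sketch of the multicore invocation, under the same assumptions as the sketch above (SPDK_DIR is shorthand for the workspace path; the harness's -c /dev/fd/62 config is again omitted):

  # Hedged sketch of the accel_decomp_mcore step (assumed paths).
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # -m 0xf : core mask 0b1111, so the decompress workload runs on four reactors
  "$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress \
      -l "$SPDK_DIR/test/accel/bib" -y -m 0xf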
00:07:09.204 [2024-07-14 10:15:53.879108] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2223713 ] 00:07:09.204 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.204 [2024-07-14 10:15:53.946835] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:09.204 [2024-07-14 10:15:53.988767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:09.204 [2024-07-14 10:15:53.988878] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:09.204 [2024-07-14 10:15:53.988984] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.204 [2024-07-14 10:15:53.988985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:09.204 10:15:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:09.204 10:15:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.204 10:15:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.204 10:15:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.204 10:15:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:09.204 10:15:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.204 10:15:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.204 10:15:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.204 10:15:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:09.204 10:15:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.204 10:15:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.204 10:15:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.204 10:15:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:09.204 10:15:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.204 10:15:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.204 10:15:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.204 10:15:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:09.204 10:15:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.204 10:15:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.204 10:15:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.204 10:15:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:09.204 10:15:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.204 10:15:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.204 10:15:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.204 10:15:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:09.204 10:15:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.204 10:15:54 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:09.204 10:15:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.204 10:15:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.204 10:15:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:09.204 10:15:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.204 10:15:54 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.204 10:15:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.204 10:15:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:09.204 10:15:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.204 10:15:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.204 10:15:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.204 10:15:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:09.204 10:15:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.204 10:15:54 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:09.204 10:15:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.204 10:15:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.204 10:15:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:09.204 10:15:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.204 10:15:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.204 10:15:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.204 10:15:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:09.204 10:15:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.204 10:15:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.204 10:15:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.204 10:15:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:09.204 10:15:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.204 10:15:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.204 10:15:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.204 10:15:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:09.204 10:15:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.204 10:15:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.204 10:15:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.204 10:15:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:09.204 10:15:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.204 10:15:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.204 10:15:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.204 10:15:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:09.204 10:15:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.204 10:15:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.204 10:15:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.204 10:15:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:09.204 10:15:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.204 10:15:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.204 10:15:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.204 10:15:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:09.204 10:15:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.204 10:15:54 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:09.204 10:15:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:10.581 10:15:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:10.581 10:15:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:10.581 10:15:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:10.581 10:15:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:10.581 10:15:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:10.581 10:15:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:10.581 10:15:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:10.581 10:15:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:10.581 10:15:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:10.581 10:15:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:10.581 10:15:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:10.581 10:15:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:10.581 10:15:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:10.581 10:15:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:10.581 10:15:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:10.581 10:15:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:10.581 10:15:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:10.581 10:15:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:10.581 10:15:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:10.581 10:15:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:10.581 10:15:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:10.581 10:15:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:10.581 10:15:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:10.581 10:15:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:10.581 10:15:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:10.581 10:15:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:10.581 10:15:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:10.581 10:15:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:10.581 10:15:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:10.581 10:15:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:10.581 10:15:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:10.581 10:15:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:10.581 10:15:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:10.581 10:15:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:10.581 10:15:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:10.581 10:15:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:10.581 10:15:55 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:10.581 10:15:55 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:10.581 10:15:55 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:10.581 00:07:10.581 real 0m1.318s 00:07:10.581 user 0m4.524s 00:07:10.581 sys 0m0.135s 00:07:10.581 10:15:55 
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:10.581 10:15:55 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:10.581 ************************************ 00:07:10.581 END TEST accel_decomp_mcore 00:07:10.581 ************************************ 00:07:10.581 10:15:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:10.581 10:15:55 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:10.581 10:15:55 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:10.581 10:15:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.581 10:15:55 accel -- common/autotest_common.sh@10 -- # set +x 00:07:10.581 ************************************ 00:07:10.581 START TEST accel_decomp_full_mcore 00:07:10.581 ************************************ 00:07:10.581 10:15:55 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:10.581 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:10.581 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:10.581 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:10.581 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:10.581 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:10.581 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:10.581 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:10.581 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:10.581 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:10.581 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.581 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.581 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:10.581 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:10.581 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:10.582 [2024-07-14 10:15:55.263815] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
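The accel_decomp_full_mcore case starting here combines the two previous variations: run_test adds -o 0 on top of -m 0xf. Judging by the traced values, -o 0 switches the submitted buffer size from the default '4096 bytes' to the '111250 bytes' seen in the accel_decomp_full trace above; that reading is an inference from the trace, not a documented flag description. A hedged sketch under the same assumptions as before:

  # Hedged sketch of the accel_decomp_full_mcore step (assumed paths).
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # -o 0   : full-sized transfers (traced as '111250 bytes' instead of '4096 bytes')
  # -m 0xf : four reactors, as in the plain mcore case
  "$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress \
      -l "$SPDK_DIR/test/accel/bib" -y -o 0 -m 0xf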
00:07:10.582 [2024-07-14 10:15:55.263886] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2223964 ] 00:07:10.582 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.582 [2024-07-14 10:15:55.331651] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:10.582 [2024-07-14 10:15:55.374170] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:10.582 [2024-07-14 10:15:55.374280] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:10.582 [2024-07-14 10:15:55.374321] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.582 [2024-07-14 10:15:55.374320] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:10.582 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:10.582 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:10.582 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:10.582 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:10.582 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:10.582 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:10.582 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:10.582 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:10.582 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:10.582 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:10.582 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:10.582 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:10.582 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:10.582 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:10.582 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:10.582 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:10.582 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:10.582 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:10.582 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:10.582 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:10.582 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:10.582 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:10.582 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:10.582 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:10.582 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:10.582 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:10.582 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:10.582 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:10.582 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:10.582 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:07:10.582 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:10.582 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:10.582 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:10.582 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:10.582 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:10.582 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:10.582 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:10.582 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:10.582 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:10.582 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:10.582 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:10.582 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:10.582 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:10.582 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:10.582 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:10.582 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:10.582 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:10.582 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:10.582 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:10.582 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:10.582 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:10.582 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:10.582 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:10.582 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:10.582 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:10.582 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:10.582 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:10.582 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:10.582 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:10.582 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:10.582 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:10.582 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:10.582 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:10.582 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:10.582 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:10.582 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:10.582 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:10.582 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:10.582 10:15:55 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:10.582 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:10.582 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:10.582 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:10.582 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:10.582 10:15:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.982 10:15:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:11.982 10:15:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.982 10:15:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.982 10:15:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.982 10:15:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:11.982 10:15:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.982 10:15:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.982 10:15:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.982 10:15:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:11.982 10:15:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.982 10:15:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.982 10:15:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.982 10:15:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:11.982 10:15:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.982 10:15:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.982 10:15:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.982 10:15:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:11.982 10:15:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.982 10:15:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.982 10:15:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.982 10:15:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:11.982 10:15:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.982 10:15:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.982 10:15:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.983 10:15:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:11.983 10:15:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.983 10:15:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.983 10:15:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.983 10:15:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:11.983 10:15:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.983 10:15:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.983 10:15:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.983 10:15:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:11.983 10:15:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.983 10:15:56 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:11.983 10:15:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.983 10:15:56 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:11.983 10:15:56 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:11.983 10:15:56 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:11.983 00:07:11.983 real 0m1.331s 00:07:11.983 user 0m4.574s 00:07:11.983 sys 0m0.134s 00:07:11.983 10:15:56 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:11.983 10:15:56 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:11.983 ************************************ 00:07:11.983 END TEST accel_decomp_full_mcore 00:07:11.983 ************************************ 00:07:11.983 10:15:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:11.983 10:15:56 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:11.983 10:15:56 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:11.983 10:15:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.983 10:15:56 accel -- common/autotest_common.sh@10 -- # set +x 00:07:11.983 ************************************ 00:07:11.983 START TEST accel_decomp_mthread 00:07:11.983 ************************************ 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:11.983 [2024-07-14 10:15:56.661155] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:07:11.983 [2024-07-14 10:15:56.661221] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2224225 ] 00:07:11.983 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.983 [2024-07-14 10:15:56.728745] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.983 [2024-07-14 10:15:56.768445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:11.983 10:15:56 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.983 10:15:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:13.363 10:15:57 accel.accel_decomp_mthread 
-- accel/accel.sh@20 -- # val= 00:07:13.363 10:15:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:13.363 10:15:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:13.363 10:15:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:13.363 10:15:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:13.363 10:15:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:13.363 10:15:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:13.363 10:15:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:13.363 10:15:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:13.363 10:15:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:13.363 10:15:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:13.363 10:15:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:13.363 10:15:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:13.363 10:15:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:13.363 10:15:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:13.363 10:15:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:13.363 10:15:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:13.363 10:15:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:13.363 10:15:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:13.363 10:15:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:13.363 10:15:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:13.363 10:15:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:13.363 10:15:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:13.363 10:15:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:13.363 10:15:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:13.363 10:15:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:13.363 10:15:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:13.363 10:15:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:13.364 10:15:57 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:13.364 10:15:57 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:13.364 10:15:57 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:13.364 00:07:13.364 real 0m1.311s 00:07:13.364 user 0m1.206s 00:07:13.364 sys 0m0.119s 00:07:13.364 10:15:57 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:13.364 10:15:57 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:13.364 ************************************ 00:07:13.364 END TEST accel_decomp_mthread 00:07:13.364 ************************************ 00:07:13.364 10:15:57 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:13.364 10:15:57 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:13.364 10:15:57 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:13.364 10:15:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.364 10:15:57 accel -- 
common/autotest_common.sh@10 -- # set +x 00:07:13.364 ************************************ 00:07:13.364 START TEST accel_decomp_full_mthread 00:07:13.364 ************************************ 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:13.364 [2024-07-14 10:15:58.037151] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:07:13.364 [2024-07-14 10:15:58.037204] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2224472 ] 00:07:13.364 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.364 [2024-07-14 10:15:58.104596] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.364 [2024-07-14 10:15:58.144189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:13.364 10:15:58 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:13.364 10:15:58 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:13.364 10:15:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.746 10:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:14.746 10:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.746 10:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.746 10:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.746 10:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:14.746 10:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.746 10:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.746 10:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.746 10:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:14.746 10:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.746 10:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.746 10:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.746 10:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:14.746 10:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.746 10:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.746 10:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.746 10:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:14.746 10:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.746 10:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.746 10:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.746 10:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:14.746 10:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.746 10:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.746 10:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.746 10:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:14.746 10:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.746 10:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.746 10:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.746 10:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:14.746 10:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:14.746 10:15:59 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:14.746 00:07:14.746 real 0m1.337s 00:07:14.746 user 0m1.224s 00:07:14.746 sys 0m0.126s 00:07:14.746 10:15:59 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:14.746 10:15:59 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:14.746 ************************************ 00:07:14.746 END 
TEST accel_decomp_full_mthread 00:07:14.746 ************************************ 00:07:14.746 10:15:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:14.746 10:15:59 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:14.746 10:15:59 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:14.746 10:15:59 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:14.746 10:15:59 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:14.746 10:15:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.746 10:15:59 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:14.746 10:15:59 accel -- common/autotest_common.sh@10 -- # set +x 00:07:14.746 10:15:59 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:14.746 10:15:59 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.746 10:15:59 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.746 10:15:59 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:14.746 10:15:59 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:14.746 10:15:59 accel -- accel/accel.sh@41 -- # jq -r . 00:07:14.746 ************************************ 00:07:14.746 START TEST accel_dif_functional_tests 00:07:14.746 ************************************ 00:07:14.746 10:15:59 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:14.746 [2024-07-14 10:15:59.460232] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:07:14.746 [2024-07-14 10:15:59.460270] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2224724 ] 00:07:14.746 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.746 [2024-07-14 10:15:59.527494] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:14.746 [2024-07-14 10:15:59.568902] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:14.746 [2024-07-14 10:15:59.569006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.746 [2024-07-14 10:15:59.569007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:14.746 00:07:14.746 00:07:14.746 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.746 http://cunit.sourceforge.net/ 00:07:14.746 00:07:14.746 00:07:14.746 Suite: accel_dif 00:07:14.746 Test: verify: DIF generated, GUARD check ...passed 00:07:14.746 Test: verify: DIF generated, APPTAG check ...passed 00:07:14.746 Test: verify: DIF generated, REFTAG check ...passed 00:07:14.746 Test: verify: DIF not generated, GUARD check ...[2024-07-14 10:15:59.632703] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:14.746 passed 00:07:14.746 Test: verify: DIF not generated, APPTAG check ...[2024-07-14 10:15:59.632751] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:14.746 passed 00:07:14.746 Test: verify: DIF not generated, REFTAG check ...[2024-07-14 10:15:59.632785] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:14.746 passed 00:07:14.746 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:14.746 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-14 
10:15:59.632829] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:14.746 passed 00:07:14.746 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:14.746 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:14.746 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:14.746 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-14 10:15:59.632924] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:14.747 passed 00:07:14.747 Test: verify copy: DIF generated, GUARD check ...passed 00:07:14.747 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:14.747 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:14.747 Test: verify copy: DIF not generated, GUARD check ...[2024-07-14 10:15:59.633028] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:14.747 passed 00:07:14.747 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-14 10:15:59.633048] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:14.747 passed 00:07:14.747 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-14 10:15:59.633066] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:14.747 passed 00:07:14.747 Test: generate copy: DIF generated, GUARD check ...passed 00:07:14.747 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:14.747 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:14.747 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:14.747 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:14.747 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:14.747 Test: generate copy: iovecs-len validate ...[2024-07-14 10:15:59.633231] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:14.747 passed 00:07:14.747 Test: generate copy: buffer alignment validate ...passed 00:07:14.747 00:07:14.747 Run Summary: Type Total Ran Passed Failed Inactive 00:07:14.747 suites 1 1 n/a 0 0 00:07:14.747 tests 26 26 26 0 0 00:07:14.747 asserts 115 115 115 0 n/a 00:07:14.747 00:07:14.747 Elapsed time = 0.002 seconds 00:07:15.006 00:07:15.006 real 0m0.377s 00:07:15.006 user 0m0.576s 00:07:15.006 sys 0m0.145s 00:07:15.006 10:15:59 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:15.006 10:15:59 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:15.006 ************************************ 00:07:15.006 END TEST accel_dif_functional_tests 00:07:15.006 ************************************ 00:07:15.006 10:15:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:15.006 00:07:15.006 real 0m29.943s 00:07:15.006 user 0m33.406s 00:07:15.006 sys 0m4.457s 00:07:15.006 10:15:59 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:15.006 10:15:59 accel -- common/autotest_common.sh@10 -- # set +x 00:07:15.006 ************************************ 00:07:15.006 END TEST accel 00:07:15.006 ************************************ 00:07:15.006 10:15:59 -- common/autotest_common.sh@1142 -- # return 0 00:07:15.006 10:15:59 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:15.006 10:15:59 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:15.006 10:15:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.006 10:15:59 -- common/autotest_common.sh@10 -- # set +x 00:07:15.006 ************************************ 00:07:15.006 START TEST accel_rpc 00:07:15.006 ************************************ 00:07:15.006 10:15:59 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:15.006 * Looking for test storage... 00:07:15.006 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:15.006 10:15:59 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:15.394 10:15:59 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=2224991 00:07:15.394 10:15:59 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 2224991 00:07:15.394 10:15:59 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:15.394 10:15:59 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 2224991 ']' 00:07:15.394 10:15:59 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.394 10:15:59 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:15.394 10:15:59 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.394 10:15:59 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:15.394 10:15:59 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.394 [2024-07-14 10:16:00.039768] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:07:15.394 [2024-07-14 10:16:00.039816] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2224991 ] 00:07:15.394 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.394 [2024-07-14 10:16:00.108431] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.394 [2024-07-14 10:16:00.149623] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.394 10:16:00 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:15.394 10:16:00 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:15.394 10:16:00 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:15.394 10:16:00 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:15.394 10:16:00 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:15.394 10:16:00 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:15.394 10:16:00 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:15.394 10:16:00 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:15.394 10:16:00 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.394 10:16:00 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.394 ************************************ 00:07:15.394 START TEST accel_assign_opcode 00:07:15.394 ************************************ 00:07:15.394 10:16:00 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:07:15.394 10:16:00 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:15.394 10:16:00 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.394 10:16:00 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:15.394 [2024-07-14 10:16:00.218100] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:15.394 10:16:00 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.394 10:16:00 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:15.394 10:16:00 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.394 10:16:00 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:15.394 [2024-07-14 10:16:00.226099] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:15.394 10:16:00 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.394 10:16:00 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:15.394 10:16:00 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.394 10:16:00 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:15.654 10:16:00 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.654 10:16:00 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:15.654 10:16:00 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:15.654 10:16:00 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 
00:07:15.654 10:16:00 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:15.654 10:16:00 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:15.654 10:16:00 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.654 software 00:07:15.654 00:07:15.654 real 0m0.226s 00:07:15.654 user 0m0.048s 00:07:15.654 sys 0m0.010s 00:07:15.654 10:16:00 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:15.654 10:16:00 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:15.654 ************************************ 00:07:15.654 END TEST accel_assign_opcode 00:07:15.654 ************************************ 00:07:15.654 10:16:00 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:15.654 10:16:00 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 2224991 00:07:15.654 10:16:00 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 2224991 ']' 00:07:15.654 10:16:00 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 2224991 00:07:15.654 10:16:00 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:07:15.654 10:16:00 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:15.654 10:16:00 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2224991 00:07:15.654 10:16:00 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:15.654 10:16:00 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:15.654 10:16:00 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2224991' 00:07:15.654 killing process with pid 2224991 00:07:15.654 10:16:00 accel_rpc -- common/autotest_common.sh@967 -- # kill 2224991 00:07:15.654 10:16:00 accel_rpc -- common/autotest_common.sh@972 -- # wait 2224991 00:07:15.913 00:07:15.913 real 0m0.916s 00:07:15.913 user 0m0.873s 00:07:15.913 sys 0m0.390s 00:07:15.913 10:16:00 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:15.913 10:16:00 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.913 ************************************ 00:07:15.913 END TEST accel_rpc 00:07:15.913 ************************************ 00:07:15.913 10:16:00 -- common/autotest_common.sh@1142 -- # return 0 00:07:15.913 10:16:00 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:15.913 10:16:00 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:15.913 10:16:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.913 10:16:00 -- common/autotest_common.sh@10 -- # set +x 00:07:15.913 ************************************ 00:07:15.913 START TEST app_cmdline 00:07:15.913 ************************************ 00:07:15.913 10:16:00 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:16.173 * Looking for test storage... 
00:07:16.173 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:16.173 10:16:00 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:16.173 10:16:00 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2225186 00:07:16.173 10:16:00 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2225186 00:07:16.173 10:16:00 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:16.173 10:16:00 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 2225186 ']' 00:07:16.173 10:16:00 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.173 10:16:00 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:16.173 10:16:00 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:16.173 10:16:00 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:16.173 10:16:00 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:16.173 [2024-07-14 10:16:01.028080] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:07:16.173 [2024-07-14 10:16:01.028134] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2225186 ] 00:07:16.173 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.173 [2024-07-14 10:16:01.097416] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.173 [2024-07-14 10:16:01.137130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.110 10:16:01 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:17.110 10:16:01 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:07:17.110 10:16:01 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:17.110 { 00:07:17.110 "version": "SPDK v24.09-pre git sha1 719d03c6a", 00:07:17.110 "fields": { 00:07:17.110 "major": 24, 00:07:17.110 "minor": 9, 00:07:17.110 "patch": 0, 00:07:17.110 "suffix": "-pre", 00:07:17.110 "commit": "719d03c6a" 00:07:17.110 } 00:07:17.110 } 00:07:17.110 10:16:02 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:17.110 10:16:02 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:17.110 10:16:02 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:17.110 10:16:02 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:17.110 10:16:02 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:17.110 10:16:02 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:17.110 10:16:02 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:17.110 10:16:02 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.110 10:16:02 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:17.110 10:16:02 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.110 10:16:02 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:17.110 10:16:02 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:17.110 10:16:02 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:17.110 10:16:02 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:17.110 10:16:02 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:17.110 10:16:02 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:17.110 10:16:02 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:17.110 10:16:02 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:17.110 10:16:02 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:17.110 10:16:02 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:17.110 10:16:02 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:17.110 10:16:02 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:17.110 10:16:02 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:17.110 10:16:02 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:17.369 request: 00:07:17.369 { 00:07:17.369 "method": "env_dpdk_get_mem_stats", 00:07:17.369 "req_id": 1 00:07:17.369 } 00:07:17.369 Got JSON-RPC error response 00:07:17.369 response: 00:07:17.369 { 00:07:17.369 "code": -32601, 00:07:17.369 "message": "Method not found" 00:07:17.369 } 00:07:17.369 10:16:02 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:17.369 10:16:02 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:17.369 10:16:02 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:17.369 10:16:02 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:17.369 10:16:02 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2225186 00:07:17.369 10:16:02 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 2225186 ']' 00:07:17.369 10:16:02 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 2225186 00:07:17.369 10:16:02 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:07:17.369 10:16:02 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:17.369 10:16:02 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2225186 00:07:17.369 10:16:02 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:17.369 10:16:02 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:17.369 10:16:02 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2225186' 00:07:17.369 killing process with pid 2225186 00:07:17.369 10:16:02 app_cmdline -- common/autotest_common.sh@967 -- # kill 2225186 00:07:17.369 10:16:02 app_cmdline -- common/autotest_common.sh@972 -- # wait 2225186 00:07:17.627 00:07:17.627 real 0m1.699s 00:07:17.627 user 0m2.027s 00:07:17.627 sys 0m0.456s 00:07:17.627 10:16:02 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 
00:07:17.627 10:16:02 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:17.627 ************************************ 00:07:17.627 END TEST app_cmdline 00:07:17.627 ************************************ 00:07:17.886 10:16:02 -- common/autotest_common.sh@1142 -- # return 0 00:07:17.886 10:16:02 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:17.886 10:16:02 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:17.886 10:16:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.886 10:16:02 -- common/autotest_common.sh@10 -- # set +x 00:07:17.886 ************************************ 00:07:17.886 START TEST version 00:07:17.886 ************************************ 00:07:17.886 10:16:02 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:17.886 * Looking for test storage... 00:07:17.886 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:17.886 10:16:02 version -- app/version.sh@17 -- # get_header_version major 00:07:17.886 10:16:02 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:17.886 10:16:02 version -- app/version.sh@14 -- # cut -f2 00:07:17.886 10:16:02 version -- app/version.sh@14 -- # tr -d '"' 00:07:17.886 10:16:02 version -- app/version.sh@17 -- # major=24 00:07:17.886 10:16:02 version -- app/version.sh@18 -- # get_header_version minor 00:07:17.886 10:16:02 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:17.886 10:16:02 version -- app/version.sh@14 -- # cut -f2 00:07:17.886 10:16:02 version -- app/version.sh@14 -- # tr -d '"' 00:07:17.886 10:16:02 version -- app/version.sh@18 -- # minor=9 00:07:17.886 10:16:02 version -- app/version.sh@19 -- # get_header_version patch 00:07:17.886 10:16:02 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:17.886 10:16:02 version -- app/version.sh@14 -- # cut -f2 00:07:17.886 10:16:02 version -- app/version.sh@14 -- # tr -d '"' 00:07:17.886 10:16:02 version -- app/version.sh@19 -- # patch=0 00:07:17.886 10:16:02 version -- app/version.sh@20 -- # get_header_version suffix 00:07:17.886 10:16:02 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:17.886 10:16:02 version -- app/version.sh@14 -- # cut -f2 00:07:17.886 10:16:02 version -- app/version.sh@14 -- # tr -d '"' 00:07:17.886 10:16:02 version -- app/version.sh@20 -- # suffix=-pre 00:07:17.886 10:16:02 version -- app/version.sh@22 -- # version=24.9 00:07:17.886 10:16:02 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:17.886 10:16:02 version -- app/version.sh@28 -- # version=24.9rc0 00:07:17.886 10:16:02 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:17.886 10:16:02 version -- app/version.sh@30 -- # python3 -c 'import spdk; 
print(spdk.__version__)' 00:07:17.886 10:16:02 version -- app/version.sh@30 -- # py_version=24.9rc0 00:07:17.886 10:16:02 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:17.886 00:07:17.886 real 0m0.159s 00:07:17.886 user 0m0.077s 00:07:17.886 sys 0m0.119s 00:07:17.886 10:16:02 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:17.886 10:16:02 version -- common/autotest_common.sh@10 -- # set +x 00:07:17.886 ************************************ 00:07:17.886 END TEST version 00:07:17.886 ************************************ 00:07:17.886 10:16:02 -- common/autotest_common.sh@1142 -- # return 0 00:07:17.886 10:16:02 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:17.886 10:16:02 -- spdk/autotest.sh@198 -- # uname -s 00:07:17.886 10:16:02 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:17.886 10:16:02 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:17.886 10:16:02 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:17.886 10:16:02 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:17.886 10:16:02 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:17.886 10:16:02 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:17.886 10:16:02 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:17.886 10:16:02 -- common/autotest_common.sh@10 -- # set +x 00:07:18.146 10:16:02 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:18.146 10:16:02 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:18.146 10:16:02 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:18.146 10:16:02 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:18.146 10:16:02 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:07:18.146 10:16:02 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:07:18.146 10:16:02 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:18.146 10:16:02 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:18.146 10:16:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.146 10:16:02 -- common/autotest_common.sh@10 -- # set +x 00:07:18.146 ************************************ 00:07:18.146 START TEST nvmf_tcp 00:07:18.146 ************************************ 00:07:18.146 10:16:02 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:18.146 * Looking for test storage... 00:07:18.146 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:18.146 10:16:03 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:18.146 10:16:03 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:18.146 10:16:03 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:18.146 10:16:03 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:07:18.146 10:16:03 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:18.146 10:16:03 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:18.146 10:16:03 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:18.146 10:16:03 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:18.146 10:16:03 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:18.146 10:16:03 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:18.146 10:16:03 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:18.146 10:16:03 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:18.146 10:16:03 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:18.146 10:16:03 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:18.146 10:16:03 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:18.146 10:16:03 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:18.146 10:16:03 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:18.146 10:16:03 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:18.146 10:16:03 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:18.146 10:16:03 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:18.146 10:16:03 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:18.146 10:16:03 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:18.146 10:16:03 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:18.146 10:16:03 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:18.146 10:16:03 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.146 10:16:03 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.146 10:16:03 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.146 10:16:03 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:07:18.146 10:16:03 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.146 10:16:03 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:07:18.146 10:16:03 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:18.146 10:16:03 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:18.146 10:16:03 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:18.146 10:16:03 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:18.146 10:16:03 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:18.146 10:16:03 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:18.146 10:16:03 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:18.146 10:16:03 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:18.146 10:16:03 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:18.146 10:16:03 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:18.146 10:16:03 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:18.146 10:16:03 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:18.146 10:16:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:18.146 10:16:03 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:18.146 10:16:03 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:18.146 10:16:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:18.146 10:16:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.146 10:16:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:18.146 ************************************ 00:07:18.146 START TEST nvmf_example 00:07:18.146 ************************************ 00:07:18.146 10:16:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:18.407 * Looking for test storage... 
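
The version test that finished just above boils down to a short shell check. A minimal sketch, assuming it is run from the SPDK repository root; the grep/cut/tr pipeline, the field names, and the 24.9rc0 result are taken from the trace, while the small wrapper function is only illustrative:

# Minimal sketch of the app/version.sh check traced above (run from the SPDK repo root).
get_header_version() {
    # Pull one SPDK_VERSION_* field out of the public version header.
    grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" include/spdk/version.h | cut -f2 | tr -d '"'
}

major=$(get_header_version MAJOR)    # 24 in this run
minor=$(get_header_version MINOR)    # 9
patch=$(get_header_version PATCH)    # 0
suffix=$(get_header_version SUFFIX)  # -pre

version="$major.$minor"
(( patch != 0 )) && version+=".$patch"
[[ $suffix == -pre ]] && version+=rc0   # yields 24.9rc0, as in the trace

# The python package must report the same string for the test to pass.
py_version=$(PYTHONPATH=python python3 -c 'import spdk; print(spdk.__version__)')
[[ $py_version == "$version" ]] && echo "version OK: $version"
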
00:07:18.407 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:18.407 10:16:03 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:18.407 10:16:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:07:18.407 10:16:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:18.407 10:16:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:18.407 10:16:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:18.407 10:16:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:18.407 10:16:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:18.407 10:16:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:18.407 10:16:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:18.407 10:16:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:18.407 10:16:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:18.407 10:16:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:18.407 10:16:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:18.407 10:16:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:18.407 10:16:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:18.407 10:16:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:18.407 10:16:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:18.407 10:16:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:18.407 10:16:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:18.407 10:16:03 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:18.407 10:16:03 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:18.407 10:16:03 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:18.407 10:16:03 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.407 10:16:03 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.407 10:16:03 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.407 10:16:03 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:07:18.407 10:16:03 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.407 10:16:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:07:18.407 10:16:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:18.407 10:16:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:18.407 10:16:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:18.407 10:16:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:18.407 10:16:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:18.407 10:16:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:18.407 10:16:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:18.407 10:16:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:18.407 10:16:03 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:18.407 10:16:03 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:18.407 10:16:03 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:18.407 10:16:03 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:18.407 10:16:03 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:18.407 10:16:03 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:18.407 10:16:03 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:18.407 10:16:03 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:18.407 10:16:03 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:07:18.407 10:16:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:18.407 10:16:03 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:18.407 10:16:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:18.407 10:16:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:18.407 10:16:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:18.407 10:16:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:18.407 10:16:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:18.407 10:16:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:18.407 10:16:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:18.407 10:16:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:18.407 10:16:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:18.407 10:16:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:18.407 10:16:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:07:18.407 10:16:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:24.980 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:24.980 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:24.980 Found net devices under 
0000:86:00.0: cvl_0_0 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:24.980 Found net devices under 0000:86:00.1: cvl_0_1 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:24.980 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:24.980 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:07:24.980 00:07:24.980 --- 10.0.0.2 ping statistics --- 00:07:24.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:24.980 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:07:24.980 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:24.980 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:24.980 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:07:24.980 00:07:24.980 --- 10.0.0.1 ping statistics --- 00:07:24.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:24.980 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:07:24.981 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:24.981 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:07:24.981 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:24.981 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:24.981 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:24.981 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:24.981 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:24.981 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:24.981 10:16:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:24.981 10:16:09 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:24.981 10:16:09 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:24.981 10:16:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:24.981 10:16:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:24.981 10:16:09 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:24.981 10:16:09 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:24.981 10:16:09 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2228696 00:07:24.981 10:16:09 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:24.981 10:16:09 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:24.981 10:16:09 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2228696 00:07:24.981 10:16:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 2228696 ']' 00:07:24.981 10:16:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.981 10:16:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:24.981 10:16:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:24.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
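
Everything nvmftestinit did above to reach this point can be replayed by hand. A condensed sketch (run as root); the interface names cvl_0_0/cvl_0_1, the 10.0.0.x addresses, and each command are lifted from the trace, and the sysfs lookup is how the harness maps a PCI function to its netdev:

# Condensed sketch of the TCP test topology built above: one port of the NIC becomes
# the target inside a network namespace, the other stays on the host as initiator.
ls /sys/bus/pci/devices/0000:86:00.0/net             # -> cvl_0_0 (PCI function -> netdev name)

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target-side port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address, host side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP through on the default port
ping -c 1 10.0.0.2                                    # host -> target reachability
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> host
modprobe nvme-tcp                                     # kernel transport module, loaded last above
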
00:07:24.981 10:16:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:24.981 10:16:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:24.981 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.981 10:16:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:24.981 10:16:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:07:24.981 10:16:09 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:24.981 10:16:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:24.981 10:16:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:24.981 10:16:09 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:24.981 10:16:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.981 10:16:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:24.981 10:16:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.981 10:16:09 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:24.981 10:16:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.981 10:16:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:25.241 10:16:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.241 10:16:09 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:25.241 10:16:09 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:25.241 10:16:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.241 10:16:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:25.241 10:16:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.241 10:16:09 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:25.241 10:16:09 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:25.241 10:16:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.241 10:16:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:25.241 10:16:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.241 10:16:09 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:25.241 10:16:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.241 10:16:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:25.241 10:16:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.241 10:16:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:25.241 10:16:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:25.241 EAL: No free 2048 kB hugepages reported on node 1 
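
Once the example target is listening on /var/tmp/spdk.sock, the configuration applied above is a plain sequence of RPCs followed by a perf run from the host-side port. A sketch expressed with scripts/rpc.py (the test issues the same calls through rpc_cmd); the method names, flags, NQNs, and the perf command line are copied from the trace, so details may need adjusting on other SPDK versions:

# Sketch of the target configuration performed above, driven over /var/tmp/spdk.sock.
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192      # TCP transport with the options the test passes
./scripts/rpc.py bdev_malloc_create 64 512                    # 64 MiB RAM bdev, 512 B blocks -> Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# 10-second mixed random I/O from the initiator side, exactly as launched above
# (-q queue depth, -o I/O size in bytes, -M read percentage of the mix).
./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
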
00:07:37.450 Initializing NVMe Controllers 00:07:37.450 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:37.450 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:37.450 Initialization complete. Launching workers. 00:07:37.450 ======================================================== 00:07:37.450 Latency(us) 00:07:37.450 Device Information : IOPS MiB/s Average min max 00:07:37.450 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18183.90 71.03 3519.91 684.06 20156.83 00:07:37.450 ======================================================== 00:07:37.450 Total : 18183.90 71.03 3519.91 684.06 20156.83 00:07:37.450 00:07:37.450 10:16:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:37.450 10:16:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:37.450 10:16:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:37.450 10:16:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:37.450 10:16:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:37.450 10:16:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:37.450 10:16:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:37.450 10:16:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:37.450 rmmod nvme_tcp 00:07:37.450 rmmod nvme_fabrics 00:07:37.450 rmmod nvme_keyring 00:07:37.450 10:16:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:37.450 10:16:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:37.450 10:16:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:37.450 10:16:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 2228696 ']' 00:07:37.450 10:16:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 2228696 00:07:37.450 10:16:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 2228696 ']' 00:07:37.450 10:16:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 2228696 00:07:37.450 10:16:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:07:37.450 10:16:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:37.450 10:16:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2228696 00:07:37.450 10:16:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:07:37.450 10:16:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:07:37.450 10:16:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2228696' 00:07:37.450 killing process with pid 2228696 00:07:37.450 10:16:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 2228696 00:07:37.450 10:16:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 2228696 00:07:37.450 nvmf threads initialize successfully 00:07:37.450 bdev subsystem init successfully 00:07:37.450 created a nvmf target service 00:07:37.450 create targets's poll groups done 00:07:37.450 all subsystems of target started 00:07:37.450 nvmf target is running 00:07:37.450 all subsystems of target stopped 00:07:37.450 destroy targets's poll groups done 00:07:37.450 destroyed the nvmf target service 00:07:37.450 bdev subsystem finish successfully 00:07:37.450 nvmf threads destroy successfully 00:07:37.450 10:16:20 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:37.450 10:16:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:37.450 10:16:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:37.450 10:16:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:37.450 10:16:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:37.450 10:16:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:37.450 10:16:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:37.450 10:16:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:37.709 10:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:37.709 10:16:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:37.709 10:16:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:37.709 10:16:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:37.709 00:07:37.709 real 0m19.590s 00:07:37.709 user 0m46.135s 00:07:37.709 sys 0m5.826s 00:07:37.709 10:16:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:37.709 10:16:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:37.709 ************************************ 00:07:37.709 END TEST nvmf_example 00:07:37.709 ************************************ 00:07:37.972 10:16:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:37.972 10:16:22 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:37.972 10:16:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:37.972 10:16:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.972 10:16:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:37.972 ************************************ 00:07:37.972 START TEST nvmf_filesystem 00:07:37.972 ************************************ 00:07:37.972 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:37.972 * Looking for test storage... 
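
For reference before the filesystem test gets going, the nvmf_example teardown traced just above (nvmftestfini) reduces to a handful of commands. A rough sketch; the module unload, process kill, and address flush mirror the trace, while the ip netns delete line is only an assumption about what _remove_spdk_ns does, since its body is not shown:

# Rough sketch of the nvmftestfini sequence traced above.
modprobe -v -r nvme-tcp            # unloads nvme_tcp; nvme_fabrics/nvme_keyring drop out once idle
kill "$nvmfpid"                    # stop the example target (pid 2228696 in this run); killprocess waits for it
ip netns delete cvl_0_0_ns_spdk    # assumed equivalent of _remove_spdk_ns (not shown in the trace)
ip -4 addr flush cvl_0_1           # drop the initiator-side test address
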
00:07:37.972 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:37.972 10:16:22 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:37.972 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:37.972 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:37.972 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:37.972 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:37.972 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:37.972 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:37.972 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:37.972 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:37.972 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:37.972 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:37.972 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:37.972 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:37.972 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:37.972 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:37.972 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:37.972 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:37.972 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:37.972 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:37.972 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:37.972 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:37.972 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:37.972 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:37.972 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:37.972 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:37.972 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:37.972 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:37.972 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:37.972 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:37.972 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:37.972 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:37.972 10:16:22 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:37.972 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:37.972 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:37.972 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:37.972 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:37.972 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:37.972 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:37.972 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:37.972 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:37.972 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:37.972 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:37.972 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:37.972 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:37.972 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:37.972 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:37.973 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:37.973 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:37.973 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:37.973 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:37.973 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:37.973 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:37.973 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:37.973 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:37.973 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:37.973 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:37.973 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:37.973 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:37.973 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:37.973 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:37.973 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:37.973 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:37.973 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:37.973 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:37.973 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:37.973 10:16:22 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:37.973 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:37.973 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:37.973 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:37.973 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:37.973 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:37.973 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:37.973 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:37.973 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:37.973 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:37.973 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:37.973 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:37.973 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:37.973 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:37.973 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:37.973 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:37.973 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:37.973 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:37.973 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:37.973 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:37.973 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:07:37.973 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:37.973 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:37.973 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:37.973 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:37.973 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:37.973 10:16:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:37.973 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:37.973 10:16:22 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:37.973 10:16:22 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:37.973 10:16:22 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:37.973 10:16:22 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:37.973 10:16:22 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # 
_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:37.973 10:16:22 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:37.973 10:16:22 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:37.973 10:16:22 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:37.973 10:16:22 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:37.973 10:16:22 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:37.973 10:16:22 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:37.973 10:16:22 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:37.973 10:16:22 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:37.973 10:16:22 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:37.973 10:16:22 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:37.973 #define SPDK_CONFIG_H 00:07:37.973 #define SPDK_CONFIG_APPS 1 00:07:37.973 #define SPDK_CONFIG_ARCH native 00:07:37.973 #undef SPDK_CONFIG_ASAN 00:07:37.973 #undef SPDK_CONFIG_AVAHI 00:07:37.973 #undef SPDK_CONFIG_CET 00:07:37.973 #define SPDK_CONFIG_COVERAGE 1 00:07:37.973 #define SPDK_CONFIG_CROSS_PREFIX 00:07:37.973 #undef SPDK_CONFIG_CRYPTO 00:07:37.973 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:37.973 #undef SPDK_CONFIG_CUSTOMOCF 00:07:37.973 #undef SPDK_CONFIG_DAOS 00:07:37.973 #define SPDK_CONFIG_DAOS_DIR 00:07:37.973 #define SPDK_CONFIG_DEBUG 1 00:07:37.973 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:37.973 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:37.973 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:37.973 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:37.973 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:37.973 #undef SPDK_CONFIG_DPDK_UADK 00:07:37.973 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:37.973 #define SPDK_CONFIG_EXAMPLES 1 00:07:37.973 #undef SPDK_CONFIG_FC 00:07:37.973 #define SPDK_CONFIG_FC_PATH 00:07:37.973 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:37.973 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:37.973 #undef SPDK_CONFIG_FUSE 00:07:37.973 #undef SPDK_CONFIG_FUZZER 00:07:37.973 #define SPDK_CONFIG_FUZZER_LIB 00:07:37.973 #undef SPDK_CONFIG_GOLANG 00:07:37.973 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:37.973 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:37.973 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:37.973 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:37.973 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:37.973 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:37.973 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:37.973 #define SPDK_CONFIG_IDXD 1 00:07:37.973 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:37.973 #undef SPDK_CONFIG_IPSEC_MB 00:07:37.973 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:37.973 #define SPDK_CONFIG_ISAL 1 00:07:37.973 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:37.973 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:37.973 #define 
SPDK_CONFIG_LIBDIR 00:07:37.973 #undef SPDK_CONFIG_LTO 00:07:37.973 #define SPDK_CONFIG_MAX_LCORES 128 00:07:37.973 #define SPDK_CONFIG_NVME_CUSE 1 00:07:37.973 #undef SPDK_CONFIG_OCF 00:07:37.973 #define SPDK_CONFIG_OCF_PATH 00:07:37.973 #define SPDK_CONFIG_OPENSSL_PATH 00:07:37.973 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:37.973 #define SPDK_CONFIG_PGO_DIR 00:07:37.973 #undef SPDK_CONFIG_PGO_USE 00:07:37.973 #define SPDK_CONFIG_PREFIX /usr/local 00:07:37.973 #undef SPDK_CONFIG_RAID5F 00:07:37.973 #undef SPDK_CONFIG_RBD 00:07:37.973 #define SPDK_CONFIG_RDMA 1 00:07:37.973 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:37.973 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:37.973 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:37.973 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:37.973 #define SPDK_CONFIG_SHARED 1 00:07:37.973 #undef SPDK_CONFIG_SMA 00:07:37.973 #define SPDK_CONFIG_TESTS 1 00:07:37.973 #undef SPDK_CONFIG_TSAN 00:07:37.973 #define SPDK_CONFIG_UBLK 1 00:07:37.973 #define SPDK_CONFIG_UBSAN 1 00:07:37.973 #undef SPDK_CONFIG_UNIT_TESTS 00:07:37.973 #undef SPDK_CONFIG_URING 00:07:37.973 #define SPDK_CONFIG_URING_PATH 00:07:37.973 #undef SPDK_CONFIG_URING_ZNS 00:07:37.973 #undef SPDK_CONFIG_USDT 00:07:37.973 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:37.973 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:37.973 #define SPDK_CONFIG_VFIO_USER 1 00:07:37.973 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:37.973 #define SPDK_CONFIG_VHOST 1 00:07:37.973 #define SPDK_CONFIG_VIRTIO 1 00:07:37.973 #undef SPDK_CONFIG_VTUNE 00:07:37.973 #define SPDK_CONFIG_VTUNE_DIR 00:07:37.973 #define SPDK_CONFIG_WERROR 1 00:07:37.973 #define SPDK_CONFIG_WPDK_DIR 00:07:37.973 #undef SPDK_CONFIG_XNVME 00:07:37.973 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:37.973 10:16:22 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:37.973 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:37.973 10:16:22 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:37.973 10:16:22 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:37.973 10:16:22 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:37.973 10:16:22 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.973 10:16:22 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 
00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:37.974 
10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export 
SPDK_TEST_LVOL 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : v23.11 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:37.974 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:07:37.975 
10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export 
PYTHONDONTWRITEBYTECODE=1 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 
00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j96 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 2231109 ]] 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 2231109 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.FMW0dr 00:07:37.975 
10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:07:37.975 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.FMW0dr/tests/target /tmp/spdk.FMW0dr 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=950202368 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4334227456 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=187939758080 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=195974299648 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=8034541568 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=97977229312 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=97987149824 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 
-- # uses["$mount"]=9920512 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=39185485824 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=39194861568 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9375744 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=97986650112 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=97987149824 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=499712 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=19597422592 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=19597426688 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:07:38.236 * Looking for test storage... 
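Note on the storage probe traced above: set_test_storage walks `df -T` (header filtered with `grep -v Filesystem`) into per-mount associative arrays before choosing where to place test files. A minimal standalone sketch of that read loop, assuming the same column order as `df -T`; the avails/sizes values in the trace are byte counts, so the real helper presumably scales df's 1K-block numbers, which this sketch leaves unscaled for brevity:

    # sketch only: mirrors the parsing loop traced above, not the suite's exact helper
    declare -A mounts fss sizes avails uses
    while read -r source fs size use avail _ mount; do
        mounts["$mount"]=$source
        fss["$mount"]=$fs
        sizes["$mount"]=$size      # 1K blocks as printed by df -T
        uses["$mount"]=$use
        avails["$mount"]=$avail
    done < <(df -T | grep -v Filesystem)
    # a candidate target_dir is usable when avails[<its mount>] covers requested_size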
00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=187939758080 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=10249134080 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:38.236 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:38.236 10:16:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:38.236 10:16:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:38.236 10:16:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:38.236 10:16:23 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:38.236 10:16:23 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:38.236 10:16:23 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:38.236 10:16:23 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.236 10:16:23 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.236 10:16:23 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.236 10:16:23 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:38.237 10:16:23 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.237 10:16:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:38.237 10:16:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:38.237 10:16:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:38.237 10:16:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:38.237 10:16:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:38.237 10:16:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:38.237 10:16:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:38.237 10:16:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:38.237 10:16:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:38.237 10:16:23 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:38.237 10:16:23 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:38.237 10:16:23 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:38.237 10:16:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:38.237 10:16:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:38.237 10:16:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:38.237 10:16:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 
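Note on the host identity set up in nvmf/common.sh above: the host NQN is generated once with `nvme gen-hostnqn` and its UUID doubles as the host ID on every connect. A hedged sketch of that pattern, reusing the target address, port, and subsystem NQN that appear later in this trace; the UUID extraction is an illustrative assumption, not the suite's exact code:

    NVME_HOSTNQN=$(nvme gen-hostnqn)      # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}       # keep just the UUID part (assumed form)
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"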
00:07:38.237 10:16:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:38.237 10:16:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:38.237 10:16:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:38.237 10:16:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:38.237 10:16:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:38.237 10:16:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:38.237 10:16:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:38.237 10:16:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 
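Note on the NIC discovery traced above: gather_supported_nvmf_pci_devs groups known device IDs (Intel E810 0x1592/0x159b, X722 0x37d2, several Mellanox parts) and filters the host's PCI bus against them. A rough stand-in for that lookup using plain lspci instead of the suite's cached pci_bus_cache arrays (illustrative only):

    # list Intel E810 ports by PCI address; 8086 is the Intel vendor ID
    for dev in 1592 159b; do
        lspci -D -d 8086:"$dev" | awk '{print $1}'
    done
    # each hit maps to a kernel netdev via /sys/bus/pci/devices/<addr>/net/*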
00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:44.808 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:44.808 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:44.808 Found net devices under 0000:86:00.0: cvl_0_0 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 
-- # [[ tcp == tcp ]] 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:44.808 Found net devices under 0000:86:00.1: cvl_0_1 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:44.808 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:44.808 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:07:44.808 00:07:44.808 --- 10.0.0.2 ping statistics --- 00:07:44.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:44.808 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:07:44.808 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:44.808 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:44.808 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:07:44.809 00:07:44.809 --- 10.0.0.1 ping statistics --- 00:07:44.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:44.809 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:07:44.809 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:44.809 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:07:44.809 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:44.809 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:44.809 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:44.809 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:44.809 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:44.809 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:44.809 10:16:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:44.809 10:16:28 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:44.809 10:16:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:44.809 10:16:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.809 10:16:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:44.809 ************************************ 00:07:44.809 START TEST nvmf_filesystem_no_in_capsule 00:07:44.809 ************************************ 00:07:44.809 10:16:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:07:44.809 10:16:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:07:44.809 10:16:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:44.809 10:16:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:44.809 10:16:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:44.809 10:16:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:44.809 10:16:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2234241 00:07:44.809 10:16:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2234241 00:07:44.809 10:16:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:44.809 10:16:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 
2234241 ']' 00:07:44.809 10:16:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:44.809 10:16:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:44.809 10:16:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:44.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:44.809 10:16:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:44.809 10:16:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:44.809 [2024-07-14 10:16:28.960865] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:07:44.809 [2024-07-14 10:16:28.960906] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:44.809 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.809 [2024-07-14 10:16:29.034920] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:44.809 [2024-07-14 10:16:29.075232] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:44.809 [2024-07-14 10:16:29.075290] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:44.809 [2024-07-14 10:16:29.075297] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:44.809 [2024-07-14 10:16:29.075303] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:44.809 [2024-07-14 10:16:29.075309] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
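Note on the nvmf_tcp_init and nvmfappstart steps traced above: one port of the NIC is moved into a private network namespace, both ends are addressed, nvmf_tgt is started inside that namespace, and the test blocks until the target's RPC socket answers. A condensed, hedged restatement of those steps; the real helpers add error handling, use the full workspace paths shown in the log, and implement the wait differently than the rpc.py poll sketched here:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays in the default ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    # sketch of waitforlisten: poll the UNIX-domain RPC socket until the app is up
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done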
00:07:44.809 [2024-07-14 10:16:29.075368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:44.809 [2024-07-14 10:16:29.075476] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:44.809 [2024-07-14 10:16:29.075586] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.809 [2024-07-14 10:16:29.075588] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:44.809 10:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:44.809 10:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:44.809 10:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:44.809 10:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:44.809 10:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:45.068 10:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:45.068 10:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:45.068 10:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:45.068 10:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.068 10:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:45.068 [2024-07-14 10:16:29.810267] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:45.068 10:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.068 10:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:45.068 10:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.068 10:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:45.068 Malloc1 00:07:45.068 10:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.068 10:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:45.068 10:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.068 10:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:45.068 10:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.068 10:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:45.068 10:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.068 10:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:07:45.068 10:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.068 10:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:45.068 10:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.068 10:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:45.068 [2024-07-14 10:16:29.957676] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:45.068 10:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.068 10:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:45.068 10:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:45.068 10:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:45.068 10:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:45.068 10:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:45.068 10:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:45.068 10:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.068 10:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:45.068 10:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.068 10:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:45.068 { 00:07:45.068 "name": "Malloc1", 00:07:45.068 "aliases": [ 00:07:45.068 "d96d9dcf-66a7-44c3-8dd4-aea59c1bdfca" 00:07:45.068 ], 00:07:45.068 "product_name": "Malloc disk", 00:07:45.068 "block_size": 512, 00:07:45.068 "num_blocks": 1048576, 00:07:45.068 "uuid": "d96d9dcf-66a7-44c3-8dd4-aea59c1bdfca", 00:07:45.068 "assigned_rate_limits": { 00:07:45.068 "rw_ios_per_sec": 0, 00:07:45.068 "rw_mbytes_per_sec": 0, 00:07:45.068 "r_mbytes_per_sec": 0, 00:07:45.068 "w_mbytes_per_sec": 0 00:07:45.068 }, 00:07:45.068 "claimed": true, 00:07:45.068 "claim_type": "exclusive_write", 00:07:45.068 "zoned": false, 00:07:45.068 "supported_io_types": { 00:07:45.068 "read": true, 00:07:45.068 "write": true, 00:07:45.068 "unmap": true, 00:07:45.068 "flush": true, 00:07:45.068 "reset": true, 00:07:45.068 "nvme_admin": false, 00:07:45.068 "nvme_io": false, 00:07:45.068 "nvme_io_md": false, 00:07:45.068 "write_zeroes": true, 00:07:45.068 "zcopy": true, 00:07:45.068 "get_zone_info": false, 00:07:45.068 "zone_management": false, 00:07:45.068 "zone_append": false, 00:07:45.068 "compare": false, 00:07:45.068 "compare_and_write": false, 00:07:45.068 "abort": true, 00:07:45.068 "seek_hole": false, 00:07:45.068 "seek_data": false, 00:07:45.068 "copy": true, 00:07:45.068 "nvme_iov_md": false 00:07:45.068 }, 00:07:45.068 "memory_domains": [ 00:07:45.068 { 
00:07:45.068 "dma_device_id": "system", 00:07:45.068 "dma_device_type": 1 00:07:45.068 }, 00:07:45.068 { 00:07:45.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:45.068 "dma_device_type": 2 00:07:45.068 } 00:07:45.068 ], 00:07:45.068 "driver_specific": {} 00:07:45.068 } 00:07:45.068 ]' 00:07:45.068 10:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:45.068 10:16:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:45.068 10:16:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:45.327 10:16:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:45.327 10:16:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:45.327 10:16:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:45.327 10:16:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:45.327 10:16:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:46.263 10:16:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:46.263 10:16:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:46.263 10:16:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:46.263 10:16:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:46.263 10:16:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:48.796 10:16:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:48.796 10:16:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:48.796 10:16:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:48.796 10:16:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:48.796 10:16:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:48.796 10:16:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:07:48.796 10:16:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:48.797 10:16:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:48.797 10:16:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:48.797 10:16:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # 
sec_size_to_bytes nvme0n1 00:07:48.797 10:16:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:48.797 10:16:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:48.797 10:16:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:48.797 10:16:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:48.797 10:16:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:48.797 10:16:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:48.797 10:16:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:48.797 10:16:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:49.364 10:16:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:50.301 10:16:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:50.301 10:16:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:50.301 10:16:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:50.301 10:16:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:50.301 10:16:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:50.301 ************************************ 00:07:50.301 START TEST filesystem_ext4 00:07:50.301 ************************************ 00:07:50.301 10:16:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:50.301 10:16:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:50.301 10:16:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:50.301 10:16:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:50.301 10:16:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:50.301 10:16:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:50.301 10:16:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:50.301 10:16:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:50.301 10:16:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:50.301 10:16:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:50.301 10:16:35 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:50.301 mke2fs 1.46.5 (30-Dec-2021) 00:07:50.301 Discarding device blocks: 0/522240 done 00:07:50.301 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:50.301 Filesystem UUID: 72edfc01-61e9-469a-b7a9-a36e6ce9184f 00:07:50.301 Superblock backups stored on blocks: 00:07:50.301 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:50.301 00:07:50.301 Allocating group tables: 0/64 done 00:07:50.301 Writing inode tables: 0/64 done 00:07:51.237 Creating journal (8192 blocks): done 00:07:52.322 Writing superblocks and filesystem accounting information: 0/6450/64 done 00:07:52.322 00:07:52.322 10:16:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:52.322 10:16:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:52.890 10:16:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:53.150 10:16:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:53.150 10:16:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:53.150 10:16:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:53.150 10:16:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:53.150 10:16:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:53.150 10:16:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2234241 00:07:53.150 10:16:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:53.150 10:16:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:53.150 10:16:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:53.150 10:16:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:53.150 00:07:53.150 real 0m2.806s 00:07:53.150 user 0m0.020s 00:07:53.150 sys 0m0.073s 00:07:53.150 10:16:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:53.150 10:16:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:53.150 ************************************ 00:07:53.150 END TEST filesystem_ext4 00:07:53.150 ************************************ 00:07:53.150 10:16:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:53.150 10:16:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:53.150 10:16:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:53.150 10:16:37 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:53.150 10:16:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:53.150 ************************************ 00:07:53.150 START TEST filesystem_btrfs 00:07:53.150 ************************************ 00:07:53.150 10:16:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:53.150 10:16:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:53.150 10:16:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:53.150 10:16:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:53.150 10:16:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:53.150 10:16:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:53.150 10:16:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:53.150 10:16:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:53.150 10:16:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:53.150 10:16:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:53.150 10:16:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:53.410 btrfs-progs v6.6.2 00:07:53.410 See https://btrfs.readthedocs.io for more information. 00:07:53.410 00:07:53.410 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:53.410 NOTE: several default settings have changed in version 5.15, please make sure 00:07:53.410 this does not affect your deployments: 00:07:53.410 - DUP for metadata (-m dup) 00:07:53.410 - enabled no-holes (-O no-holes) 00:07:53.410 - enabled free-space-tree (-R free-space-tree) 00:07:53.410 00:07:53.410 Label: (null) 00:07:53.410 UUID: b99cf92d-eee1-40bc-b0df-a160f3aab1d0 00:07:53.410 Node size: 16384 00:07:53.410 Sector size: 4096 00:07:53.410 Filesystem size: 510.00MiB 00:07:53.410 Block group profiles: 00:07:53.410 Data: single 8.00MiB 00:07:53.410 Metadata: DUP 32.00MiB 00:07:53.410 System: DUP 8.00MiB 00:07:53.410 SSD detected: yes 00:07:53.410 Zoned device: no 00:07:53.410 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:53.410 Runtime features: free-space-tree 00:07:53.410 Checksum: crc32c 00:07:53.410 Number of devices: 1 00:07:53.410 Devices: 00:07:53.410 ID SIZE PATH 00:07:53.410 1 510.00MiB /dev/nvme0n1p1 00:07:53.410 00:07:53.410 10:16:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:53.410 10:16:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:53.978 10:16:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:53.978 10:16:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:53.978 10:16:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:53.978 10:16:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:53.978 10:16:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:53.978 10:16:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:53.978 10:16:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2234241 00:07:53.978 10:16:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:53.978 10:16:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:53.978 10:16:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:53.978 10:16:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:53.978 00:07:53.978 real 0m0.883s 00:07:53.978 user 0m0.042s 00:07:53.978 sys 0m0.107s 00:07:53.978 10:16:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:53.978 10:16:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:53.978 ************************************ 00:07:53.978 END TEST filesystem_btrfs 00:07:53.978 ************************************ 00:07:53.978 10:16:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:53.978 10:16:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:53.978 10:16:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:53.978 10:16:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:53.978 10:16:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:54.237 ************************************ 00:07:54.237 START TEST filesystem_xfs 00:07:54.237 ************************************ 00:07:54.237 10:16:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:07:54.237 10:16:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:54.237 10:16:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:54.237 10:16:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:54.237 10:16:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:54.237 10:16:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:54.237 10:16:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:54.237 10:16:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:07:54.237 10:16:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:54.237 10:16:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:54.237 10:16:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:54.237 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:54.237 = sectsz=512 attr=2, projid32bit=1 00:07:54.237 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:54.237 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:54.237 data = bsize=4096 blocks=130560, imaxpct=25 00:07:54.237 = sunit=0 swidth=0 blks 00:07:54.237 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:54.237 log =internal log bsize=4096 blocks=16384, version=2 00:07:54.237 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:54.237 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:55.175 Discarding blocks...Done. 
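The mkfs invocations in the traces above (mkfs.ext4 -F, mkfs.btrfs -f, mkfs.xfs -f) all go through the same make_filesystem helper, which only varies the force flag by filesystem type. A rough sketch of that dispatch, with an illustrative retry loop, looks like this:

    # Sketch of the force-flag dispatch visible in the trace; the retry count is illustrative.
    make_filesystem() {
        local fstype=$1 dev_name=$2
        local force i=0
        if [ "$fstype" = ext4 ]; then
            force=-F          # mke2fs prompts without -F
        else
            force=-f          # mkfs.btrfs and mkfs.xfs overwrite signatures with -f
        fi
        until "mkfs.$fstype" "$force" "$dev_name"; do
            (( ++i > 3 )) && return 1
            sleep 1
        done
        return 0
    }

It is invoked once per sub-test, e.g. make_filesystem ext4 /dev/nvme0n1p1, as the traces show.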
00:07:55.175 10:16:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:55.175 10:16:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:57.084 10:16:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:57.084 10:16:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:57.084 10:16:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:57.084 10:16:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:57.084 10:16:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:57.084 10:16:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:57.084 10:16:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2234241 00:07:57.084 10:16:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:57.084 10:16:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:57.084 10:16:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:57.084 10:16:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:57.084 00:07:57.084 real 0m3.055s 00:07:57.084 user 0m0.024s 00:07:57.084 sys 0m0.071s 00:07:57.084 10:16:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:57.084 10:16:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:57.084 ************************************ 00:07:57.084 END TEST filesystem_xfs 00:07:57.084 ************************************ 00:07:57.344 10:16:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:57.344 10:16:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:57.344 10:16:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:57.344 10:16:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:57.344 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:57.344 10:16:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:57.344 10:16:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:57.344 10:16:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:57.344 10:16:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:57.344 10:16:42 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:57.344 10:16:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:57.344 10:16:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:57.344 10:16:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:57.344 10:16:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.344 10:16:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:57.344 10:16:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.344 10:16:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:57.344 10:16:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2234241 00:07:57.344 10:16:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 2234241 ']' 00:07:57.344 10:16:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 2234241 00:07:57.344 10:16:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:57.344 10:16:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:57.344 10:16:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2234241 00:07:57.344 10:16:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:57.344 10:16:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:57.344 10:16:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2234241' 00:07:57.344 killing process with pid 2234241 00:07:57.344 10:16:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 2234241 00:07:57.344 10:16:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 2234241 00:07:57.914 10:16:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:57.914 00:07:57.914 real 0m13.713s 00:07:57.914 user 0m53.997s 00:07:57.914 sys 0m1.261s 00:07:57.914 10:16:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:57.914 10:16:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:57.914 ************************************ 00:07:57.914 END TEST nvmf_filesystem_no_in_capsule 00:07:57.914 ************************************ 00:07:57.914 10:16:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:57.914 10:16:42 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:57.914 10:16:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 
-le 1 ']' 00:07:57.914 10:16:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:57.914 10:16:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:57.914 ************************************ 00:07:57.914 START TEST nvmf_filesystem_in_capsule 00:07:57.914 ************************************ 00:07:57.914 10:16:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:07:57.914 10:16:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:57.914 10:16:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:57.914 10:16:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:57.914 10:16:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:57.914 10:16:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:57.914 10:16:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2236671 00:07:57.914 10:16:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2236671 00:07:57.914 10:16:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:57.914 10:16:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 2236671 ']' 00:07:57.914 10:16:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.914 10:16:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:57.914 10:16:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:57.914 10:16:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:57.914 10:16:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:57.914 [2024-07-14 10:16:42.747814] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:07:57.914 [2024-07-14 10:16:42.747860] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:57.914 EAL: No free 2048 kB hugepages reported on node 1 00:07:57.914 [2024-07-14 10:16:42.821088] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:57.914 [2024-07-14 10:16:42.861888] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:57.914 [2024-07-14 10:16:42.861926] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:57.914 [2024-07-14 10:16:42.861934] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:57.914 [2024-07-14 10:16:42.861940] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:57.914 [2024-07-14 10:16:42.861947] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:57.914 [2024-07-14 10:16:42.862267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:57.914 [2024-07-14 10:16:42.862301] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:57.914 [2024-07-14 10:16:42.862405] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.914 [2024-07-14 10:16:42.862406] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:58.174 10:16:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:58.174 10:16:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:58.174 10:16:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:58.174 10:16:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:58.174 10:16:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:58.174 10:16:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:58.174 10:16:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:58.174 10:16:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:58.174 10:16:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.174 10:16:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:58.174 [2024-07-14 10:16:43.011330] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:58.174 10:16:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.174 10:16:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:58.174 10:16:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.174 10:16:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:58.174 Malloc1 00:07:58.174 10:16:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.174 10:16:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:58.174 10:16:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.174 10:16:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:58.174 10:16:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.174 10:16:43 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:58.174 10:16:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.174 10:16:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:58.174 10:16:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.174 10:16:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:58.174 10:16:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.174 10:16:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:58.433 [2024-07-14 10:16:43.157722] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:58.433 10:16:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.433 10:16:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:58.433 10:16:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:58.433 10:16:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:58.433 10:16:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:58.433 10:16:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:58.433 10:16:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:58.433 10:16:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.433 10:16:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:58.433 10:16:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.433 10:16:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:58.433 { 00:07:58.433 "name": "Malloc1", 00:07:58.433 "aliases": [ 00:07:58.433 "5f045ab9-67b1-4e08-bb86-d8dbdf19d742" 00:07:58.433 ], 00:07:58.433 "product_name": "Malloc disk", 00:07:58.433 "block_size": 512, 00:07:58.433 "num_blocks": 1048576, 00:07:58.433 "uuid": "5f045ab9-67b1-4e08-bb86-d8dbdf19d742", 00:07:58.433 "assigned_rate_limits": { 00:07:58.433 "rw_ios_per_sec": 0, 00:07:58.433 "rw_mbytes_per_sec": 0, 00:07:58.433 "r_mbytes_per_sec": 0, 00:07:58.433 "w_mbytes_per_sec": 0 00:07:58.433 }, 00:07:58.433 "claimed": true, 00:07:58.433 "claim_type": "exclusive_write", 00:07:58.433 "zoned": false, 00:07:58.433 "supported_io_types": { 00:07:58.433 "read": true, 00:07:58.433 "write": true, 00:07:58.433 "unmap": true, 00:07:58.433 "flush": true, 00:07:58.433 "reset": true, 00:07:58.433 "nvme_admin": false, 00:07:58.433 "nvme_io": false, 00:07:58.433 "nvme_io_md": false, 00:07:58.433 "write_zeroes": true, 00:07:58.433 "zcopy": true, 00:07:58.433 "get_zone_info": false, 00:07:58.433 "zone_management": false, 00:07:58.433 
"zone_append": false, 00:07:58.433 "compare": false, 00:07:58.433 "compare_and_write": false, 00:07:58.433 "abort": true, 00:07:58.433 "seek_hole": false, 00:07:58.433 "seek_data": false, 00:07:58.433 "copy": true, 00:07:58.433 "nvme_iov_md": false 00:07:58.433 }, 00:07:58.433 "memory_domains": [ 00:07:58.433 { 00:07:58.433 "dma_device_id": "system", 00:07:58.433 "dma_device_type": 1 00:07:58.433 }, 00:07:58.433 { 00:07:58.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:58.433 "dma_device_type": 2 00:07:58.433 } 00:07:58.433 ], 00:07:58.433 "driver_specific": {} 00:07:58.433 } 00:07:58.433 ]' 00:07:58.433 10:16:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:58.433 10:16:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:58.433 10:16:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:58.433 10:16:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:58.433 10:16:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:58.433 10:16:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:58.433 10:16:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:58.433 10:16:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:59.814 10:16:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:59.815 10:16:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:59.815 10:16:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:59.815 10:16:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:59.815 10:16:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:08:01.785 10:16:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:01.785 10:16:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:01.785 10:16:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:01.785 10:16:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:01.785 10:16:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:01.785 10:16:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:08:01.785 10:16:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:01.785 10:16:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 
00:08:01.785 10:16:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:01.785 10:16:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:01.785 10:16:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:01.785 10:16:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:01.785 10:16:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:01.785 10:16:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:01.785 10:16:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:01.785 10:16:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:01.785 10:16:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:01.785 10:16:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:02.354 10:16:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:03.292 10:16:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:03.292 10:16:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:03.292 10:16:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:03.292 10:16:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:03.292 10:16:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:03.292 ************************************ 00:08:03.292 START TEST filesystem_in_capsule_ext4 00:08:03.292 ************************************ 00:08:03.292 10:16:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:03.292 10:16:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:03.292 10:16:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:03.292 10:16:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:03.292 10:16:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:08:03.292 10:16:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:03.292 10:16:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:08:03.292 10:16:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:08:03.292 10:16:48 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:08:03.292 10:16:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:08:03.292 10:16:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:03.292 mke2fs 1.46.5 (30-Dec-2021) 00:08:03.292 Discarding device blocks: 0/522240 done 00:08:03.292 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:03.292 Filesystem UUID: 39accfc9-00b9-43b3-8a3b-e9ee3778ebf0 00:08:03.292 Superblock backups stored on blocks: 00:08:03.292 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:03.292 00:08:03.292 Allocating group tables: 0/64 done 00:08:03.292 Writing inode tables: 0/64 done 00:08:03.551 Creating journal (8192 blocks): done 00:08:04.379 Writing superblocks and filesystem accounting information: 0/6410/64 done 00:08:04.379 00:08:04.379 10:16:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:08:04.379 10:16:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:04.639 10:16:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:04.639 10:16:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:08:04.639 10:16:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:04.639 10:16:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:08:04.639 10:16:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:04.639 10:16:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:04.639 10:16:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2236671 00:08:04.639 10:16:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:04.639 10:16:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:04.639 10:16:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:04.639 10:16:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:04.639 00:08:04.639 real 0m1.503s 00:08:04.639 user 0m0.026s 00:08:04.639 sys 0m0.063s 00:08:04.639 10:16:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:04.639 10:16:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:04.639 ************************************ 00:08:04.639 END TEST filesystem_in_capsule_ext4 00:08:04.639 ************************************ 
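After formatting, every filesystem_* sub-test exercises the same body: mount the partition, create and remove a file with syncs in between, unmount, then confirm the target process is still alive and the namespace and partition are still listed. In outline (target/filesystem.sh steps 23-43 above), with $nvmfpid standing for the target's PID:

    # Sketch of the common per-filesystem test body seen in the trace.
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device

    kill -0 "$nvmfpid"                        # the nvmf target must still be running
    lsblk -l -o NAME | grep -q -w nvme0n1     # namespace still visible to the host
    lsblk -l -o NAME | grep -q -w nvme0n1p1   # partition still visible to the host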
00:08:04.898 10:16:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:04.898 10:16:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:04.898 10:16:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:04.898 10:16:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:04.898 10:16:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:04.898 ************************************ 00:08:04.898 START TEST filesystem_in_capsule_btrfs 00:08:04.898 ************************************ 00:08:04.898 10:16:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:04.898 10:16:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:04.898 10:16:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:04.899 10:16:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:04.899 10:16:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:08:04.899 10:16:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:04.899 10:16:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:08:04.899 10:16:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:08:04.899 10:16:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:08:04.899 10:16:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:08:04.899 10:16:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:05.158 btrfs-progs v6.6.2 00:08:05.158 See https://btrfs.readthedocs.io for more information. 00:08:05.158 00:08:05.158 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:05.158 NOTE: several default settings have changed in version 5.15, please make sure 00:08:05.158 this does not affect your deployments: 00:08:05.158 - DUP for metadata (-m dup) 00:08:05.158 - enabled no-holes (-O no-holes) 00:08:05.158 - enabled free-space-tree (-R free-space-tree) 00:08:05.158 00:08:05.158 Label: (null) 00:08:05.158 UUID: 90d2cef0-79c3-40de-8a9b-516e21c21c0c 00:08:05.158 Node size: 16384 00:08:05.158 Sector size: 4096 00:08:05.158 Filesystem size: 510.00MiB 00:08:05.158 Block group profiles: 00:08:05.158 Data: single 8.00MiB 00:08:05.158 Metadata: DUP 32.00MiB 00:08:05.158 System: DUP 8.00MiB 00:08:05.158 SSD detected: yes 00:08:05.158 Zoned device: no 00:08:05.158 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:05.158 Runtime features: free-space-tree 00:08:05.158 Checksum: crc32c 00:08:05.158 Number of devices: 1 00:08:05.158 Devices: 00:08:05.158 ID SIZE PATH 00:08:05.158 1 510.00MiB /dev/nvme0n1p1 00:08:05.158 00:08:05.158 10:16:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:08:05.158 10:16:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:05.417 10:16:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:05.417 10:16:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:08:05.417 10:16:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:05.417 10:16:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:08:05.417 10:16:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:05.417 10:16:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:05.417 10:16:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2236671 00:08:05.417 10:16:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:05.417 10:16:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:05.417 10:16:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:05.417 10:16:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:05.417 00:08:05.417 real 0m0.721s 00:08:05.417 user 0m0.034s 00:08:05.417 sys 0m0.119s 00:08:05.417 10:16:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:05.417 10:16:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:05.417 ************************************ 00:08:05.417 END TEST filesystem_in_capsule_btrfs 00:08:05.418 ************************************ 00:08:05.677 10:16:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@1142 -- # return 0 00:08:05.677 10:16:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:05.677 10:16:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:05.677 10:16:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:05.677 10:16:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:05.677 ************************************ 00:08:05.677 START TEST filesystem_in_capsule_xfs 00:08:05.677 ************************************ 00:08:05.677 10:16:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:08:05.677 10:16:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:05.677 10:16:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:05.677 10:16:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:05.677 10:16:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:08:05.677 10:16:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:05.677 10:16:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:08:05.677 10:16:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:08:05.677 10:16:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:08:05.677 10:16:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:08:05.677 10:16:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:05.677 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:05.677 = sectsz=512 attr=2, projid32bit=1 00:08:05.677 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:05.677 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:05.677 data = bsize=4096 blocks=130560, imaxpct=25 00:08:05.677 = sunit=0 swidth=0 blks 00:08:05.677 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:05.677 log =internal log bsize=4096 blocks=16384, version=2 00:08:05.677 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:05.677 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:06.615 Discarding blocks...Done. 
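The trace above is the filesystem_in_capsule_xfs case: make_filesystem puts XFS on the exported namespace, then the test mounts it, writes and removes a file, and unmounts while checking that the target process stays alive. Condensed into a standalone sketch using the device, mount point, and PID that appear in the trace (illustrative only, not the harness script itself):

    mkfs.xfs -f /dev/nvme0n1p1       # filesystem on the namespace's partition, as in the trace
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa            # small write/remove cycle over NVMe/TCP
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 2236671                  # target (nvmf_tgt) must still be running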
00:08:06.615 10:16:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:08:06.615 10:16:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:09.150 10:16:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:09.150 10:16:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:08:09.150 10:16:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:09.150 10:16:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:08:09.150 10:16:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:08:09.150 10:16:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:09.150 10:16:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2236671 00:08:09.150 10:16:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:09.150 10:16:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:09.150 10:16:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:09.150 10:16:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:09.410 00:08:09.410 real 0m3.673s 00:08:09.410 user 0m0.030s 00:08:09.410 sys 0m0.067s 00:08:09.410 10:16:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:09.410 10:16:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:09.410 ************************************ 00:08:09.410 END TEST filesystem_in_capsule_xfs 00:08:09.410 ************************************ 00:08:09.410 10:16:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:09.410 10:16:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:09.410 10:16:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:09.410 10:16:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:09.410 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:09.410 10:16:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:09.410 10:16:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:08:09.410 10:16:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:09.410 10:16:54 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:09.410 10:16:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:09.410 10:16:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:09.669 10:16:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:08:09.669 10:16:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:09.669 10:16:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.669 10:16:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:09.669 10:16:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.669 10:16:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:09.669 10:16:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2236671 00:08:09.669 10:16:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 2236671 ']' 00:08:09.669 10:16:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 2236671 00:08:09.669 10:16:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:08:09.669 10:16:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:09.669 10:16:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2236671 00:08:09.669 10:16:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:09.669 10:16:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:09.669 10:16:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2236671' 00:08:09.669 killing process with pid 2236671 00:08:09.669 10:16:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 2236671 00:08:09.669 10:16:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 2236671 00:08:09.927 10:16:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:09.927 00:08:09.927 real 0m12.097s 00:08:09.927 user 0m47.474s 00:08:09.927 sys 0m1.165s 00:08:09.927 10:16:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:09.927 10:16:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:09.927 ************************************ 00:08:09.927 END TEST nvmf_filesystem_in_capsule 00:08:09.927 ************************************ 00:08:09.927 10:16:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:08:09.927 10:16:54 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:08:09.927 10:16:54 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:08:09.927 10:16:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:08:09.927 10:16:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:09.927 10:16:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:08:09.927 10:16:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:09.927 10:16:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:09.928 rmmod nvme_tcp 00:08:09.928 rmmod nvme_fabrics 00:08:09.928 rmmod nvme_keyring 00:08:09.928 10:16:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:09.928 10:16:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:08:09.928 10:16:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:08:09.928 10:16:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:09.928 10:16:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:09.928 10:16:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:09.928 10:16:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:09.928 10:16:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:09.928 10:16:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:09.928 10:16:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:09.928 10:16:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:09.928 10:16:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:12.459 10:16:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:12.459 00:08:12.459 real 0m34.208s 00:08:12.459 user 1m43.312s 00:08:12.459 sys 0m6.986s 00:08:12.459 10:16:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:12.459 10:16:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:12.459 ************************************ 00:08:12.459 END TEST nvmf_filesystem 00:08:12.459 ************************************ 00:08:12.459 10:16:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:12.459 10:16:56 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:12.459 10:16:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:12.459 10:16:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:12.459 10:16:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:12.459 ************************************ 00:08:12.459 START TEST nvmf_target_discovery 00:08:12.459 ************************************ 00:08:12.459 10:16:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:12.459 * Looking for test storage... 
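nvmftestfini above unwinds the initiator and the test network: the kernel NVMe/TCP modules are removed (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring messages), the target namespace is torn down, and the initiator-side address is flushed. A rough equivalent of what the trace shows, with the namespace removal being an assumption about what _remove_spdk_ns does rather than its literal contents:

    modprobe -v -r nvme-tcp           # drops nvme_fabrics and nvme_keyring with it, per the rmmod lines
    modprobe -v -r nvme-fabrics
    ip netns delete cvl_0_0_ns_spdk   # assumed effect of _remove_spdk_ns for this run's namespace
    ip -4 addr flush cvl_0_1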
00:08:12.459 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:12.459 10:16:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:12.459 10:16:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:08:12.459 10:16:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:12.459 10:16:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:12.459 10:16:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:12.459 10:16:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:12.459 10:16:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:12.459 10:16:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:12.459 10:16:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:12.459 10:16:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:12.459 10:16:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:12.459 10:16:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:12.459 10:16:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:12.459 10:16:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:12.459 10:16:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:12.459 10:16:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:12.459 10:16:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:12.459 10:16:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:12.459 10:16:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:12.459 10:16:57 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:12.459 10:16:57 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:12.459 10:16:57 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:12.459 10:16:57 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.459 10:16:57 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.459 10:16:57 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.459 10:16:57 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:08:12.459 10:16:57 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.459 10:16:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:08:12.459 10:16:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:12.459 10:16:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:12.459 10:16:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:12.459 10:16:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:12.459 10:16:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:12.459 10:16:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:12.459 10:16:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:12.459 10:16:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:12.459 10:16:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:12.459 10:16:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:12.459 10:16:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:12.459 10:16:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:08:12.459 10:16:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:08:12.459 10:16:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:12.459 10:16:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:12.459 10:16:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:08:12.459 10:16:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:12.459 10:16:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:12.459 10:16:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:12.459 10:16:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:12.459 10:16:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:12.459 10:16:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:12.459 10:16:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:12.459 10:16:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:08:12.459 10:16:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.733 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:17.733 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:08:17.733 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:17.733 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:17.733 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:17.733 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:17.733 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:17.733 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:08:17.733 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:17.733 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:08:17.733 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:08:17.733 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:08:17.733 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:08:17.733 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:08:17.733 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:08:17.733 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:17.734 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:17.734 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:17.734 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:17.734 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:17.734 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:17.734 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:17.734 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:17.734 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:17.734 10:17:02 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:17.734 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:17.734 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:17.734 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:17.734 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:17.734 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:17.734 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:17.734 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:17.734 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:17.734 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:17.734 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:17.734 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:17.734 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:17.734 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:17.734 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:17.734 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:17.734 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:17.734 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:17.734 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:17.734 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:17.734 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:17.734 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:17.734 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:17.734 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:17.734 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:17.734 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:17.734 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:17.734 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:17.734 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:17.734 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:17.734 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:17.734 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:17.734 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:17.734 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:17.734 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:17.734 Found net devices under 0000:86:00.0: cvl_0_0 00:08:17.734 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:17.734 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:17.734 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:17.734 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:17.734 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:17.734 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:17.734 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:17.734 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:17.734 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:17.734 Found net devices under 0000:86:00.1: cvl_0_1 00:08:17.734 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:17.734 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:17.734 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:08:17.734 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:17.734 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:17.734 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:17.734 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:17.734 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:17.734 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:17.734 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:17.734 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:17.734 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:17.734 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:17.734 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:17.734 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:17.734 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:17.734 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:17.734 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:17.734 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:17.993 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:17.993 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:17.993 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:08:17.993 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:17.993 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:17.993 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:17.993 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:17.993 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:17.993 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:08:17.993 00:08:17.993 --- 10.0.0.2 ping statistics --- 00:08:17.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:17.993 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:08:17.993 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:17.993 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:17.993 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:08:17.993 00:08:17.993 --- 10.0.0.1 ping statistics --- 00:08:17.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:17.993 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:08:17.993 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:17.993 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:08:17.993 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:17.993 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:17.993 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:17.993 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:17.993 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:17.993 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:17.993 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:17.993 10:17:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:17.993 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:17.993 10:17:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:17.993 10:17:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.252 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=2242473 00:08:18.252 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 2242473 00:08:18.252 10:17:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:18.252 10:17:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 2242473 ']' 00:08:18.252 10:17:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:18.252 10:17:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:18.252 10:17:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:08:18.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:18.252 10:17:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:18.252 10:17:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.252 [2024-07-14 10:17:03.028164] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:08:18.252 [2024-07-14 10:17:03.028209] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:18.252 EAL: No free 2048 kB hugepages reported on node 1 00:08:18.252 [2024-07-14 10:17:03.099846] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:18.252 [2024-07-14 10:17:03.141561] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:18.252 [2024-07-14 10:17:03.141598] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:18.252 [2024-07-14 10:17:03.141606] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:18.252 [2024-07-14 10:17:03.141612] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:18.252 [2024-07-14 10:17:03.141617] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:18.252 [2024-07-14 10:17:03.141678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:18.252 [2024-07-14 10:17:03.141789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:18.252 [2024-07-14 10:17:03.141895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.252 [2024-07-14 10:17:03.141896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:18.510 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:18.510 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:08:18.510 10:17:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:18.510 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:18.510 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.510 10:17:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:18.510 10:17:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:18.510 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.510 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.510 [2024-07-14 10:17:03.283187] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:18.510 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.510 10:17:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:08:18.510 10:17:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:18.510 10:17:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 
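Before the discovery test can run, nvmf_tcp_init (traced above) splits the two ice ports between the host and a dedicated target namespace: the initiator keeps cvl_0_1 as 10.0.0.1 and the target gets cvl_0_0 as 10.0.0.2 inside cvl_0_0_ns_spdk, with an iptables rule opening port 4420 and a ping in each direction to prove the path. Collected from the commands in the trace:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                                   # host -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> host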
00:08:18.510 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.510 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.510 Null1 00:08:18.510 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.510 10:17:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:18.510 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.510 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.510 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.510 10:17:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:18.510 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.510 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.510 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.510 10:17:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:18.510 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.510 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.511 [2024-07-14 10:17:03.328744] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:18.511 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.511 10:17:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:18.511 10:17:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:18.511 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.511 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.511 Null2 00:08:18.511 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.511 10:17:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:18.511 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.511 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.511 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.511 10:17:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:18.511 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.511 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.511 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.511 10:17:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:18.511 10:17:03 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.511 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.511 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.511 10:17:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:18.511 10:17:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:18.511 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.511 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.511 Null3 00:08:18.511 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.511 10:17:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:18.511 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.511 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.511 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.511 10:17:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:18.511 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.511 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.511 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.511 10:17:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:18.511 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.511 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.511 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.511 10:17:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:18.511 10:17:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:18.511 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.511 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.511 Null4 00:08:18.511 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.511 10:17:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:18.511 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.511 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.511 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.511 10:17:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:18.511 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.511 10:17:03 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.511 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.511 10:17:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:18.511 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.511 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.511 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.511 10:17:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:18.511 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.511 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.511 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.511 10:17:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:18.511 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.511 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.511 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.511 10:17:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:08:18.805 00:08:18.805 Discovery Log Number of Records 6, Generation counter 6 00:08:18.805 =====Discovery Log Entry 0====== 00:08:18.805 trtype: tcp 00:08:18.805 adrfam: ipv4 00:08:18.805 subtype: current discovery subsystem 00:08:18.805 treq: not required 00:08:18.805 portid: 0 00:08:18.805 trsvcid: 4420 00:08:18.805 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:18.805 traddr: 10.0.0.2 00:08:18.805 eflags: explicit discovery connections, duplicate discovery information 00:08:18.805 sectype: none 00:08:18.805 =====Discovery Log Entry 1====== 00:08:18.805 trtype: tcp 00:08:18.805 adrfam: ipv4 00:08:18.805 subtype: nvme subsystem 00:08:18.805 treq: not required 00:08:18.805 portid: 0 00:08:18.805 trsvcid: 4420 00:08:18.805 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:18.805 traddr: 10.0.0.2 00:08:18.805 eflags: none 00:08:18.805 sectype: none 00:08:18.805 =====Discovery Log Entry 2====== 00:08:18.805 trtype: tcp 00:08:18.805 adrfam: ipv4 00:08:18.805 subtype: nvme subsystem 00:08:18.805 treq: not required 00:08:18.805 portid: 0 00:08:18.805 trsvcid: 4420 00:08:18.805 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:18.805 traddr: 10.0.0.2 00:08:18.805 eflags: none 00:08:18.805 sectype: none 00:08:18.805 =====Discovery Log Entry 3====== 00:08:18.805 trtype: tcp 00:08:18.805 adrfam: ipv4 00:08:18.805 subtype: nvme subsystem 00:08:18.805 treq: not required 00:08:18.805 portid: 0 00:08:18.805 trsvcid: 4420 00:08:18.805 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:18.805 traddr: 10.0.0.2 00:08:18.805 eflags: none 00:08:18.805 sectype: none 00:08:18.805 =====Discovery Log Entry 4====== 00:08:18.805 trtype: tcp 00:08:18.805 adrfam: ipv4 00:08:18.805 subtype: nvme subsystem 00:08:18.805 treq: not required 
00:08:18.805 portid: 0 00:08:18.805 trsvcid: 4420 00:08:18.805 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:18.805 traddr: 10.0.0.2 00:08:18.805 eflags: none 00:08:18.805 sectype: none 00:08:18.805 =====Discovery Log Entry 5====== 00:08:18.805 trtype: tcp 00:08:18.805 adrfam: ipv4 00:08:18.805 subtype: discovery subsystem referral 00:08:18.805 treq: not required 00:08:18.805 portid: 0 00:08:18.805 trsvcid: 4430 00:08:18.805 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:18.805 traddr: 10.0.0.2 00:08:18.805 eflags: none 00:08:18.805 sectype: none 00:08:18.805 10:17:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:18.805 Perform nvmf subsystem discovery via RPC 00:08:18.805 10:17:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:18.805 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.805 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.805 [ 00:08:18.805 { 00:08:18.805 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:18.805 "subtype": "Discovery", 00:08:18.805 "listen_addresses": [ 00:08:18.805 { 00:08:18.805 "trtype": "TCP", 00:08:18.805 "adrfam": "IPv4", 00:08:18.805 "traddr": "10.0.0.2", 00:08:18.805 "trsvcid": "4420" 00:08:18.805 } 00:08:18.805 ], 00:08:18.805 "allow_any_host": true, 00:08:18.805 "hosts": [] 00:08:18.805 }, 00:08:18.805 { 00:08:18.805 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:18.805 "subtype": "NVMe", 00:08:18.805 "listen_addresses": [ 00:08:18.805 { 00:08:18.805 "trtype": "TCP", 00:08:18.805 "adrfam": "IPv4", 00:08:18.805 "traddr": "10.0.0.2", 00:08:18.805 "trsvcid": "4420" 00:08:18.805 } 00:08:18.805 ], 00:08:18.805 "allow_any_host": true, 00:08:18.805 "hosts": [], 00:08:18.805 "serial_number": "SPDK00000000000001", 00:08:18.805 "model_number": "SPDK bdev Controller", 00:08:18.805 "max_namespaces": 32, 00:08:18.805 "min_cntlid": 1, 00:08:18.805 "max_cntlid": 65519, 00:08:18.805 "namespaces": [ 00:08:18.805 { 00:08:18.805 "nsid": 1, 00:08:18.805 "bdev_name": "Null1", 00:08:18.805 "name": "Null1", 00:08:18.805 "nguid": "B1D5FF7D4B78464B887387629120821B", 00:08:18.805 "uuid": "b1d5ff7d-4b78-464b-8873-87629120821b" 00:08:18.805 } 00:08:18.805 ] 00:08:18.805 }, 00:08:18.805 { 00:08:18.805 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:18.805 "subtype": "NVMe", 00:08:18.805 "listen_addresses": [ 00:08:18.805 { 00:08:18.805 "trtype": "TCP", 00:08:18.805 "adrfam": "IPv4", 00:08:18.805 "traddr": "10.0.0.2", 00:08:18.805 "trsvcid": "4420" 00:08:18.805 } 00:08:18.805 ], 00:08:18.805 "allow_any_host": true, 00:08:18.805 "hosts": [], 00:08:18.805 "serial_number": "SPDK00000000000002", 00:08:18.805 "model_number": "SPDK bdev Controller", 00:08:18.805 "max_namespaces": 32, 00:08:18.805 "min_cntlid": 1, 00:08:18.805 "max_cntlid": 65519, 00:08:18.805 "namespaces": [ 00:08:18.805 { 00:08:18.805 "nsid": 1, 00:08:18.805 "bdev_name": "Null2", 00:08:18.805 "name": "Null2", 00:08:18.805 "nguid": "C74165850E484747AE7492CFF2D39B5A", 00:08:18.805 "uuid": "c7416585-0e48-4747-ae74-92cff2d39b5a" 00:08:18.805 } 00:08:18.805 ] 00:08:18.805 }, 00:08:18.805 { 00:08:18.805 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:18.805 "subtype": "NVMe", 00:08:18.805 "listen_addresses": [ 00:08:18.805 { 00:08:18.805 "trtype": "TCP", 00:08:18.805 "adrfam": "IPv4", 00:08:18.805 "traddr": "10.0.0.2", 00:08:18.805 "trsvcid": "4420" 00:08:18.805 } 00:08:18.805 ], 00:08:18.805 "allow_any_host": true, 
00:08:18.805 "hosts": [], 00:08:18.805 "serial_number": "SPDK00000000000003", 00:08:18.805 "model_number": "SPDK bdev Controller", 00:08:18.805 "max_namespaces": 32, 00:08:18.805 "min_cntlid": 1, 00:08:18.805 "max_cntlid": 65519, 00:08:18.805 "namespaces": [ 00:08:18.805 { 00:08:18.805 "nsid": 1, 00:08:18.805 "bdev_name": "Null3", 00:08:18.805 "name": "Null3", 00:08:18.805 "nguid": "28915E7F290440198EBB75B188262BDA", 00:08:18.805 "uuid": "28915e7f-2904-4019-8ebb-75b188262bda" 00:08:18.805 } 00:08:18.805 ] 00:08:18.805 }, 00:08:18.805 { 00:08:18.805 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:18.805 "subtype": "NVMe", 00:08:18.805 "listen_addresses": [ 00:08:18.805 { 00:08:18.805 "trtype": "TCP", 00:08:18.805 "adrfam": "IPv4", 00:08:18.805 "traddr": "10.0.0.2", 00:08:18.805 "trsvcid": "4420" 00:08:18.805 } 00:08:18.805 ], 00:08:18.805 "allow_any_host": true, 00:08:18.805 "hosts": [], 00:08:18.805 "serial_number": "SPDK00000000000004", 00:08:18.805 "model_number": "SPDK bdev Controller", 00:08:18.805 "max_namespaces": 32, 00:08:18.805 "min_cntlid": 1, 00:08:18.805 "max_cntlid": 65519, 00:08:18.805 "namespaces": [ 00:08:18.805 { 00:08:18.805 "nsid": 1, 00:08:18.805 "bdev_name": "Null4", 00:08:18.805 "name": "Null4", 00:08:18.805 "nguid": "C87090A37CBD4A8AB022EB2DCCD38DF5", 00:08:18.805 "uuid": "c87090a3-7cbd-4a8a-b022-eb2dccd38df5" 00:08:18.805 } 00:08:18.805 ] 00:08:18.805 } 00:08:18.805 ] 00:08:18.805 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.805 10:17:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:08:18.805 10:17:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:18.805 10:17:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:18.805 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.805 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.805 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.805 10:17:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:18.805 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.805 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.805 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.805 10:17:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:18.805 10:17:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:18.805 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.805 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.806 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.806 10:17:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:18.806 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.806 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.806 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:08:18.806 10:17:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:18.806 10:17:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:18.806 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.806 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.806 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.806 10:17:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:18.806 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.806 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.806 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.806 10:17:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:18.806 10:17:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:18.806 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.806 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.806 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.806 10:17:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:18.806 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.806 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.806 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.806 10:17:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:18.806 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.806 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.806 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.806 10:17:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:18.806 10:17:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:18.806 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.806 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.806 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.806 10:17:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:08:18.806 10:17:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:18.806 10:17:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:18.806 10:17:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:08:18.806 10:17:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:18.806 10:17:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:08:18.806 10:17:03 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:18.806 10:17:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:08:18.806 10:17:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:18.806 10:17:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:18.806 rmmod nvme_tcp 00:08:18.806 rmmod nvme_fabrics 00:08:18.806 rmmod nvme_keyring 00:08:18.806 10:17:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:18.806 10:17:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:08:18.806 10:17:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:08:18.806 10:17:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 2242473 ']' 00:08:18.806 10:17:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 2242473 00:08:18.806 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 2242473 ']' 00:08:18.806 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 2242473 00:08:18.806 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:08:18.806 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:18.806 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2242473 00:08:19.085 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:19.085 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:19.085 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2242473' 00:08:19.085 killing process with pid 2242473 00:08:19.085 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 2242473 00:08:19.085 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 2242473 00:08:19.085 10:17:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:19.085 10:17:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:19.085 10:17:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:19.085 10:17:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:19.085 10:17:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:19.085 10:17:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:19.085 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:19.085 10:17:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:21.621 10:17:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:21.621 00:08:21.621 real 0m9.042s 00:08:21.621 user 0m5.062s 00:08:21.621 sys 0m4.721s 00:08:21.622 10:17:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:21.622 10:17:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:21.622 ************************************ 00:08:21.622 END TEST nvmf_target_discovery 00:08:21.622 ************************************ 00:08:21.622 10:17:06 nvmf_tcp -- common/autotest_common.sh@1142 
-- # return 0 00:08:21.622 10:17:06 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:21.622 10:17:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:21.622 10:17:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:21.622 10:17:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:21.622 ************************************ 00:08:21.622 START TEST nvmf_referrals 00:08:21.622 ************************************ 00:08:21.622 10:17:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:21.622 * Looking for test storage... 00:08:21.622 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:21.622 10:17:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:21.622 10:17:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:08:21.622 10:17:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:21.622 10:17:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:21.622 10:17:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:21.622 10:17:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:21.622 10:17:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:21.622 10:17:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:21.622 10:17:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:21.622 10:17:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:21.622 10:17:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:21.622 10:17:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:21.622 10:17:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:21.622 10:17:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:21.622 10:17:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:21.622 10:17:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:21.622 10:17:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:21.622 10:17:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:21.622 10:17:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:21.622 10:17:06 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:21.622 10:17:06 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:21.622 10:17:06 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:21.622 10:17:06 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.622 10:17:06 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.622 10:17:06 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.622 10:17:06 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:08:21.622 10:17:06 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.622 10:17:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:08:21.622 10:17:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:21.622 10:17:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:21.622 10:17:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:21.622 10:17:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:21.622 10:17:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:21.622 10:17:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:21.622 10:17:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:21.622 10:17:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:21.622 10:17:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:21.622 10:17:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:21.622 10:17:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
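referrals.sh points NVMF_REFERRAL_IP_1..3 at 127.0.0.2-127.0.0.4 and then drives the discovery-referral RPCs against them. A condensed sketch of the add/verify/remove cycle the remainder of this trace performs, assuming rpc.py on the default socket and a discovery listener already up on 10.0.0.2:8009 (the --hostnqn/--hostid flags used in the trace are left out here for brevity):

# Register the three referral targets on the discovery service.
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
  ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
done

# The referrals should be reported identically by the RPC ...
./scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort

# ... and by an initiator-side discovery against the listener on port 8009.
nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
  | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

# Remove them again so the later nvmf_discovery_get_referrals check sees an empty list.
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
  ./scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
done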
00:08:21.622 10:17:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:21.622 10:17:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:21.622 10:17:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:21.622 10:17:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:08:21.622 10:17:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:21.622 10:17:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:21.622 10:17:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:21.622 10:17:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:21.622 10:17:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:21.622 10:17:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:21.622 10:17:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:21.622 10:17:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:21.622 10:17:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:21.622 10:17:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:21.622 10:17:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:08:21.622 10:17:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:26.899 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:26.899 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:08:26.899 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:26.899 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:26.899 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:26.899 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:26.899 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:26.899 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:08:26.899 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:26.899 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:08:26.899 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:08:26.899 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:08:26.899 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:08:26.899 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:08:26.899 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:08:26.899 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:26.899 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:26.899 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:26.899 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:26.899 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:26.899 10:17:11 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:26.899 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:26.899 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:26.899 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:26.899 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:26.899 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:26.899 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:26.899 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:26.899 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:26.899 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:26.899 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:26.899 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:26.899 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:26.899 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:26.899 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:26.899 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:26.899 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:26.899 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:26.899 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:26.899 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:26.899 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:26.899 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:26.899 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:26.899 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:26.900 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:26.900 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:26.900 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:26.900 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:26.900 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:26.900 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:26.900 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:26.900 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:26.900 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:26.900 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:26.900 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:26.900 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:26.900 10:17:11 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:26.900 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:26.900 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:26.900 Found net devices under 0000:86:00.0: cvl_0_0 00:08:26.900 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:26.900 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:26.900 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:26.900 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:26.900 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:26.900 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:26.900 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:26.900 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:26.900 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:26.900 Found net devices under 0000:86:00.1: cvl_0_1 00:08:26.900 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:26.900 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:26.900 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:08:26.900 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:26.900 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:26.900 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:26.900 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:26.900 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:26.900 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:26.900 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:26.900 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:26.900 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:26.900 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:26.900 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:26.900 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:26.900 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:26.900 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:26.900 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:26.900 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:27.160 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:27.160 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:27.160 10:17:11 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:27.160 10:17:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:27.160 10:17:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:27.160 10:17:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:27.160 10:17:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:27.160 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:27.160 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:08:27.160 00:08:27.160 --- 10.0.0.2 ping statistics --- 00:08:27.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:27.160 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:08:27.160 10:17:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:27.160 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:27.160 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:08:27.160 00:08:27.160 --- 10.0.0.1 ping statistics --- 00:08:27.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:27.160 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:08:27.160 10:17:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:27.160 10:17:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:08:27.160 10:17:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:27.160 10:17:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:27.160 10:17:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:27.160 10:17:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:27.160 10:17:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:27.160 10:17:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:27.160 10:17:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:27.160 10:17:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:27.160 10:17:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:27.160 10:17:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:27.160 10:17:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:27.160 10:17:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=2246038 00:08:27.160 10:17:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 2246038 00:08:27.160 10:17:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:27.160 10:17:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 2246038 ']' 00:08:27.160 10:17:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.160 10:17:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:27.160 10:17:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
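By this point nvmf_tcp_init has split the two ice ports: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), port 4420 is opened, a ping runs in each direction as a sanity check, and nvmf_tgt is launched inside the namespace. A condensed sketch of that plumbing, reusing the interface names, addresses, and nvmf_tgt arguments from the trace (run as root; the nvmf_tgt path is assumed relative to the SPDK build tree):

# Target-side namespace gets one port; the initiator side stays in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Allow NVMe/TCP traffic in and verify reachability both ways.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

# Start the target inside the namespace, as nvmfappstart does in the records above.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &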
00:08:27.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:27.160 10:17:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:27.160 10:17:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:27.420 [2024-07-14 10:17:12.175264] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:08:27.420 [2024-07-14 10:17:12.175315] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:27.420 EAL: No free 2048 kB hugepages reported on node 1 00:08:27.420 [2024-07-14 10:17:12.251013] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:27.420 [2024-07-14 10:17:12.291628] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:27.420 [2024-07-14 10:17:12.291670] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:27.420 [2024-07-14 10:17:12.291677] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:27.420 [2024-07-14 10:17:12.291683] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:27.420 [2024-07-14 10:17:12.291688] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:27.420 [2024-07-14 10:17:12.291748] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:27.420 [2024-07-14 10:17:12.291790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:27.420 [2024-07-14 10:17:12.291873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.420 [2024-07-14 10:17:12.291874] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:28.355 10:17:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:28.355 10:17:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:08:28.355 10:17:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:28.355 10:17:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:28.355 10:17:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:28.355 10:17:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:28.355 10:17:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:28.355 10:17:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.355 10:17:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:28.355 [2024-07-14 10:17:13.021334] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:28.355 10:17:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.355 10:17:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:28.355 10:17:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.355 10:17:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:28.355 [2024-07-14 10:17:13.034709] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 8009 *** 00:08:28.355 10:17:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.355 10:17:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:28.355 10:17:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.355 10:17:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:28.355 10:17:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.355 10:17:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:28.355 10:17:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.355 10:17:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:28.355 10:17:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.355 10:17:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:28.355 10:17:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.355 10:17:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:28.355 10:17:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.355 10:17:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:28.355 10:17:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:08:28.355 10:17:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.355 10:17:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:28.355 10:17:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.355 10:17:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:28.355 10:17:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:28.355 10:17:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:28.355 10:17:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:28.355 10:17:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:28.355 10:17:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.356 10:17:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:28.356 10:17:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:28.356 10:17:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.356 10:17:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:28.356 10:17:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:28.356 10:17:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:28.356 10:17:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:28.356 10:17:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:28.356 10:17:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:28.356 10:17:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:28.356 10:17:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:28.614 10:17:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:28.614 10:17:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:28.614 10:17:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:28.614 10:17:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.614 10:17:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:28.614 10:17:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.614 10:17:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:28.614 10:17:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.614 10:17:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:28.614 10:17:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.614 10:17:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:28.614 10:17:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.614 10:17:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:28.614 10:17:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.614 10:17:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:28.614 10:17:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:08:28.614 10:17:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.614 10:17:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:28.614 10:17:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.614 10:17:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:28.614 10:17:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:28.614 10:17:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:28.614 10:17:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:28.614 10:17:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:28.614 10:17:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:28.614 10:17:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:28.614 10:17:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:28.614 10:17:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:28.614 10:17:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:08:28.614 10:17:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.614 10:17:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:28.614 10:17:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.614 10:17:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:28.614 10:17:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.614 10:17:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:28.614 10:17:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.614 10:17:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:28.615 10:17:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:28.615 10:17:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:28.615 10:17:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:28.615 10:17:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.615 10:17:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:28.615 10:17:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:28.615 10:17:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.615 10:17:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:28.615 10:17:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:28.615 10:17:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:28.615 10:17:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:28.615 10:17:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:28.615 10:17:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:28.615 10:17:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:28.615 10:17:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:28.874 10:17:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:28.874 10:17:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:28.874 10:17:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:28.874 10:17:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:28.874 10:17:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:28.874 10:17:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:28.874 10:17:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:28.874 10:17:13 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:28.874 10:17:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:28.874 10:17:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:28.874 10:17:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:28.874 10:17:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:28.874 10:17:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:29.133 10:17:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:29.134 10:17:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:29.134 10:17:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.134 10:17:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:29.134 10:17:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.134 10:17:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:29.134 10:17:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:29.134 10:17:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:29.134 10:17:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:29.134 10:17:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.134 10:17:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:29.134 10:17:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:29.134 10:17:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.134 10:17:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:29.134 10:17:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:29.134 10:17:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:29.134 10:17:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:29.134 10:17:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:29.134 10:17:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:29.134 10:17:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:29.134 10:17:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:29.393 10:17:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:29.393 10:17:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:29.393 10:17:14 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:29.393 10:17:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:29.393 10:17:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:29.393 10:17:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:29.393 10:17:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:29.393 10:17:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:29.393 10:17:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:29.393 10:17:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:29.393 10:17:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:29.393 10:17:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:29.393 10:17:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:29.652 10:17:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:29.652 10:17:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:29.652 10:17:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.652 10:17:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:29.652 10:17:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.652 10:17:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:29.652 10:17:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:08:29.652 10:17:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.652 10:17:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:29.652 10:17:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.652 10:17:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:29.652 10:17:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:29.652 10:17:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:29.652 10:17:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:29.652 10:17:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:29.652 10:17:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:29.652 10:17:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:29.912 
10:17:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:29.912 10:17:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:29.912 10:17:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:29.912 10:17:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:08:29.912 10:17:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:29.912 10:17:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:08:29.912 10:17:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:29.912 10:17:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:08:29.912 10:17:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:29.912 10:17:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:29.912 rmmod nvme_tcp 00:08:29.912 rmmod nvme_fabrics 00:08:29.912 rmmod nvme_keyring 00:08:29.912 10:17:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:29.912 10:17:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:08:29.912 10:17:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:08:29.912 10:17:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 2246038 ']' 00:08:29.912 10:17:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 2246038 00:08:29.912 10:17:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 2246038 ']' 00:08:29.912 10:17:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 2246038 00:08:29.912 10:17:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:08:29.912 10:17:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:29.912 10:17:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2246038 00:08:29.912 10:17:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:29.912 10:17:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:29.912 10:17:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2246038' 00:08:29.912 killing process with pid 2246038 00:08:29.912 10:17:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 2246038 00:08:29.912 10:17:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 2246038 00:08:30.171 10:17:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:30.171 10:17:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:30.171 10:17:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:30.171 10:17:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:30.171 10:17:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:30.171 10:17:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:30.171 10:17:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:30.171 10:17:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:32.078 10:17:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:32.078 00:08:32.078 real 0m10.869s 00:08:32.078 user 0m12.966s 00:08:32.078 sys 0m5.123s 00:08:32.078 10:17:17 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:32.078 10:17:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:32.078 ************************************ 00:08:32.078 END TEST nvmf_referrals 00:08:32.078 ************************************ 00:08:32.078 10:17:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:32.078 10:17:17 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:32.078 10:17:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:32.078 10:17:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:32.078 10:17:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:32.337 ************************************ 00:08:32.337 START TEST nvmf_connect_disconnect 00:08:32.337 ************************************ 00:08:32.337 10:17:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:32.337 * Looking for test storage... 00:08:32.337 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:32.337 10:17:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:32.337 10:17:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:08:32.337 10:17:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:32.337 10:17:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:32.337 10:17:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:32.337 10:17:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:32.337 10:17:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:32.337 10:17:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:32.337 10:17:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:32.337 10:17:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:32.337 10:17:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:32.337 10:17:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:32.337 10:17:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:32.337 10:17:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:32.337 10:17:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:32.338 10:17:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:32.338 10:17:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:32.338 10:17:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:32.338 10:17:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:32.338 10:17:17 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:32.338 10:17:17 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:32.338 10:17:17 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:32.338 10:17:17 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.338 10:17:17 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.338 10:17:17 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.338 10:17:17 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:08:32.338 10:17:17 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.338 10:17:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:08:32.338 10:17:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:32.338 10:17:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:32.338 10:17:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:32.338 10:17:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:32.338 10:17:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:32.338 10:17:17 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:32.338 10:17:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:32.338 10:17:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:32.338 10:17:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:32.338 10:17:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:32.338 10:17:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:32.338 10:17:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:32.338 10:17:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:32.338 10:17:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:32.338 10:17:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:32.338 10:17:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:32.338 10:17:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:32.338 10:17:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:32.338 10:17:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:32.338 10:17:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:32.338 10:17:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:32.338 10:17:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:08:32.338 10:17:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:38.909 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:38.909 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:08:38.909 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:38.909 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:38.909 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:38.909 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:38.909 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:38.909 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:38.909 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:38.909 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:38.909 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:38.909 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:38.909 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:38.909 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:38.909 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:38.909 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:38.909 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:38.909 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:38.909 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:38.909 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:38.909 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:38.909 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:38.909 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:38.909 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:38.909 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:38.910 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:38.910 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:38.910 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:38.910 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:38.910 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:38.910 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:38.910 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:38.910 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:38.910 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:38.910 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:38.910 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:38.910 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:38.910 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:38.910 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:38.910 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:38.910 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:38.910 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:38.910 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:38.910 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:38.910 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:38.910 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:38.910 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:38.910 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:38.910 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:38.910 10:17:22 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:38.910 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:38.910 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:38.910 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:38.910 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:38.910 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:38.910 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:38.910 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:38.910 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:38.910 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:38.910 Found net devices under 0000:86:00.0: cvl_0_0 00:08:38.910 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:38.910 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:38.910 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:38.910 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:38.910 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:38.910 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:38.910 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:38.910 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:38.910 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:38.910 Found net devices under 0000:86:00.1: cvl_0_1 00:08:38.910 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:38.910 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:38.910 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:38.910 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:38.910 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:38.910 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:38.910 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:38.910 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:38.910 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:38.910 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:38.910 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:38.910 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:38.910 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:38.910 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:38.910 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:38.910 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:38.910 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:38.910 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:38.910 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:38.910 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:38.910 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:38.910 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:38.910 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:38.910 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:38.910 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:38.910 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:38.910 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:38.910 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:08:38.910 00:08:38.910 --- 10.0.0.2 ping statistics --- 00:08:38.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.910 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:08:38.910 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:38.910 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:38.910 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:08:38.910 00:08:38.910 --- 10.0.0.1 ping statistics --- 00:08:38.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.910 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:08:38.910 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:38.910 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:08:38.910 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:38.910 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:38.910 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:38.910 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:38.910 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:38.910 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:38.910 10:17:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:38.910 10:17:23 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:38.910 10:17:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:38.910 10:17:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:38.910 10:17:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:38.910 10:17:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=2250116 00:08:38.910 10:17:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 2250116 00:08:38.910 10:17:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:38.910 10:17:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 2250116 ']' 00:08:38.910 10:17:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.910 10:17:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:38.910 10:17:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:38.910 10:17:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:38.910 10:17:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:38.910 [2024-07-14 10:17:23.079326] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:08:38.910 [2024-07-14 10:17:23.079374] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:38.910 EAL: No free 2048 kB hugepages reported on node 1 00:08:38.910 [2024-07-14 10:17:23.148083] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:38.910 [2024-07-14 10:17:23.189924] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:38.910 [2024-07-14 10:17:23.189962] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:38.910 [2024-07-14 10:17:23.189969] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:38.910 [2024-07-14 10:17:23.189975] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:38.910 [2024-07-14 10:17:23.189980] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:38.910 [2024-07-14 10:17:23.190089] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:38.910 [2024-07-14 10:17:23.190197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:38.910 [2024-07-14 10:17:23.190305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.911 [2024-07-14 10:17:23.190305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:38.911 10:17:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:38.911 10:17:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:08:38.911 10:17:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:38.911 10:17:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:38.911 10:17:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:39.169 10:17:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:39.169 10:17:23 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:39.169 10:17:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.169 10:17:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:39.169 [2024-07-14 10:17:23.932168] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:39.169 10:17:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.169 10:17:23 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:39.169 10:17:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.169 10:17:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:39.169 10:17:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.169 10:17:23 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:39.169 10:17:23 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:39.169 10:17:23 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.169 10:17:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:39.169 10:17:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.169 10:17:23 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:39.169 10:17:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.169 10:17:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:39.169 10:17:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.170 10:17:23 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:39.170 10:17:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.170 10:17:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:39.170 [2024-07-14 10:17:23.983736] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:39.170 10:17:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.170 10:17:23 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:39.170 10:17:23 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:39.170 10:17:23 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:39.170 10:17:23 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:41.703 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:44.235 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:46.142 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:48.717 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:50.621 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:53.182 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:55.717 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:57.621 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:00.154 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:02.689 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:04.595 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:07.130 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:09.664 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:11.570 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:14.146 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:16.052 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:18.585 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:20.488 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:23.020 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:25.551 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:27.459 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:29.998 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:32.533 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:34.439 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:36.974 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:39.508 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:41.447 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:43.982 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:46.515 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:48.422 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:50.955 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:53.491 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.396 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:57.924 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:59.824 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:02.358 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:04.892 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.874 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:09.405 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:11.940 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:13.846 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:16.381 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:18.917 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:20.822 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:23.355 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:25.889 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:27.804 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:30.336 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:32.264 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:34.821 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:36.720 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:39.254 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:41.790 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:43.697 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:46.234 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:48.770 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:50.675 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:53.210 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:55.113 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:57.651 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:00.255 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:02.159 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:04.693 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:06.597 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:09.129 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:11.660 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:13.563 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:16.094 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.045 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.581 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:23.116 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:25.067 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:11:27.603 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.506 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:32.042 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:34.593 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:36.499 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:39.038 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:41.573 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:43.478 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:46.014 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:48.550 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:50.485 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.020 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.921 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.451 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.983 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:01.888 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:04.423 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:06.327 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:08.860 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:10.767 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:13.302 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.834 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:17.766 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:20.296 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:22.199 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:24.729 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:26.631 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:29.164 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:29.164 10:21:13 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:29.165 10:21:13 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:29.165 10:21:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:29.165 10:21:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:12:29.165 10:21:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:29.165 10:21:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:12:29.165 10:21:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:29.165 10:21:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:29.165 rmmod nvme_tcp 00:12:29.165 rmmod nvme_fabrics 00:12:29.165 rmmod nvme_keyring 00:12:29.165 10:21:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:29.165 10:21:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:12:29.165 10:21:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:12:29.165 10:21:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 2250116 ']' 00:12:29.165 10:21:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 2250116 00:12:29.165 10:21:13 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 
2250116 ']' 00:12:29.165 10:21:13 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 2250116 00:12:29.165 10:21:13 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:12:29.165 10:21:13 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:29.165 10:21:13 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2250116 00:12:29.165 10:21:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:29.165 10:21:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:29.165 10:21:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2250116' 00:12:29.165 killing process with pid 2250116 00:12:29.165 10:21:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 2250116 00:12:29.165 10:21:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 2250116 00:12:29.423 10:21:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:29.423 10:21:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:29.423 10:21:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:29.423 10:21:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:29.423 10:21:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:29.423 10:21:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:29.423 10:21:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:29.423 10:21:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:31.323 10:21:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:31.323 00:12:31.323 real 3m59.224s 00:12:31.323 user 15m17.412s 00:12:31.323 sys 0m20.156s 00:12:31.323 10:21:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:31.323 10:21:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:31.323 ************************************ 00:12:31.323 END TEST nvmf_connect_disconnect 00:12:31.323 ************************************ 00:12:31.581 10:21:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:31.581 10:21:16 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:31.581 10:21:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:31.581 10:21:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:31.581 10:21:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:31.581 ************************************ 00:12:31.581 START TEST nvmf_multitarget 00:12:31.581 ************************************ 00:12:31.581 10:21:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:31.581 * Looking for test storage... 
00:12:31.581 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:31.581 10:21:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:31.581 10:21:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:31.581 10:21:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:31.581 10:21:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:31.581 10:21:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:31.581 10:21:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:31.581 10:21:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:31.581 10:21:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:31.581 10:21:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:31.581 10:21:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:31.581 10:21:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:31.581 10:21:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:31.581 10:21:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:31.581 10:21:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:31.581 10:21:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:31.581 10:21:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:31.581 10:21:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:31.581 10:21:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:31.581 10:21:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:31.581 10:21:16 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:31.581 10:21:16 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:31.581 10:21:16 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:31.582 10:21:16 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.582 10:21:16 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.582 10:21:16 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.582 10:21:16 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:31.582 10:21:16 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.582 10:21:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:12:31.582 10:21:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:31.582 10:21:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:31.582 10:21:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:31.582 10:21:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:31.582 10:21:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:31.582 10:21:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:31.582 10:21:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:31.582 10:21:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:31.582 10:21:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:31.582 10:21:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:31.582 10:21:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:31.582 10:21:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:31.582 10:21:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:31.582 10:21:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:31.582 10:21:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:31.582 10:21:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:12:31.582 10:21:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:31.582 10:21:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:31.582 10:21:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:31.582 10:21:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:31.582 10:21:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:12:31.582 10:21:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:38.149 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:38.149 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:12:38.149 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:38.149 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:38.149 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:38.150 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:38.150 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:38.150 Found net devices under 0000:86:00.0: cvl_0_0 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:38.150 Found net devices under 0000:86:00.1: cvl_0_1 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:38.150 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:38.150 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:12:38.150 00:12:38.150 --- 10.0.0.2 ping statistics --- 00:12:38.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:38.150 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:38.150 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:38.150 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:12:38.150 00:12:38.150 --- 10.0.0.1 ping statistics --- 00:12:38.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:38.150 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=2294092 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 2294092 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 2294092 ']' 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:38.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:38.150 10:21:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:38.150 [2024-07-14 10:21:22.356618] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:12:38.151 [2024-07-14 10:21:22.356664] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:38.151 EAL: No free 2048 kB hugepages reported on node 1 00:12:38.151 [2024-07-14 10:21:22.429460] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:38.151 [2024-07-14 10:21:22.470677] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:38.151 [2024-07-14 10:21:22.470717] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:38.151 [2024-07-14 10:21:22.470724] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:38.151 [2024-07-14 10:21:22.470730] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:38.151 [2024-07-14 10:21:22.470734] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:38.151 [2024-07-14 10:21:22.470797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:38.151 [2024-07-14 10:21:22.470830] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:38.151 [2024-07-14 10:21:22.470916] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.151 [2024-07-14 10:21:22.470917] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:38.151 10:21:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:38.151 10:21:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:12:38.151 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:38.151 10:21:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:38.151 10:21:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:38.151 10:21:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:38.151 10:21:22 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:38.151 10:21:22 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:38.151 10:21:22 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:38.151 10:21:22 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:38.151 10:21:22 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:38.151 "nvmf_tgt_1" 00:12:38.151 10:21:22 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:38.151 "nvmf_tgt_2" 00:12:38.151 10:21:22 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:38.151 10:21:22 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:38.151 10:21:23 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 
'!=' 3 ']' 00:12:38.151 10:21:23 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:38.151 true 00:12:38.151 10:21:23 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:38.409 true 00:12:38.409 10:21:23 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:38.409 10:21:23 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:38.409 10:21:23 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:38.409 10:21:23 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:38.409 10:21:23 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:38.409 10:21:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:38.409 10:21:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:12:38.409 10:21:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:38.409 10:21:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:12:38.409 10:21:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:38.409 10:21:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:38.409 rmmod nvme_tcp 00:12:38.409 rmmod nvme_fabrics 00:12:38.409 rmmod nvme_keyring 00:12:38.409 10:21:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:38.409 10:21:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:12:38.409 10:21:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:12:38.409 10:21:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 2294092 ']' 00:12:38.409 10:21:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 2294092 00:12:38.409 10:21:23 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 2294092 ']' 00:12:38.409 10:21:23 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 2294092 00:12:38.409 10:21:23 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:12:38.409 10:21:23 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:38.409 10:21:23 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2294092 00:12:38.667 10:21:23 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:38.667 10:21:23 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:38.667 10:21:23 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2294092' 00:12:38.667 killing process with pid 2294092 00:12:38.667 10:21:23 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 2294092 00:12:38.667 10:21:23 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 2294092 00:12:38.667 10:21:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:38.667 10:21:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:38.667 10:21:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:38.667 10:21:23 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:38.667 10:21:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:38.667 10:21:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:38.667 10:21:23 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:38.667 10:21:23 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:41.205 10:21:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:41.205 00:12:41.205 real 0m9.286s 00:12:41.205 user 0m6.662s 00:12:41.205 sys 0m4.792s 00:12:41.205 10:21:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:41.205 10:21:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:41.205 ************************************ 00:12:41.205 END TEST nvmf_multitarget 00:12:41.205 ************************************ 00:12:41.205 10:21:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:41.205 10:21:25 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:41.205 10:21:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:41.205 10:21:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:41.205 10:21:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:41.205 ************************************ 00:12:41.205 START TEST nvmf_rpc 00:12:41.205 ************************************ 00:12:41.205 10:21:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:41.205 * Looking for test storage... 
00:12:41.205 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:41.205 10:21:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:41.205 10:21:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:41.205 10:21:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:41.205 10:21:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:41.205 10:21:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:41.205 10:21:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:41.205 10:21:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:41.205 10:21:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:41.205 10:21:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:41.205 10:21:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:41.205 10:21:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:41.205 10:21:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:41.205 10:21:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:41.205 10:21:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:41.206 10:21:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:41.206 10:21:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:41.206 10:21:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:41.206 10:21:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:41.206 10:21:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:41.206 10:21:25 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:41.206 10:21:25 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:41.206 10:21:25 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:41.206 10:21:25 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.206 10:21:25 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.206 10:21:25 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.206 10:21:25 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:41.206 10:21:25 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.206 10:21:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:12:41.206 10:21:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:41.206 10:21:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:41.206 10:21:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:41.206 10:21:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:41.206 10:21:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:41.206 10:21:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:41.206 10:21:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:41.206 10:21:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:41.206 10:21:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:41.206 10:21:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:41.206 10:21:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:41.206 10:21:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:41.206 10:21:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:41.206 10:21:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:41.206 10:21:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:41.206 10:21:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:41.206 10:21:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:41.206 10:21:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:41.206 10:21:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:41.206 10:21:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:41.206 10:21:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:12:41.206 10:21:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.525 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:46.525 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:12:46.525 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
00:12:46.525 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:46.525 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:46.525 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:46.525 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:46.526 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:46.526 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:46.526 Found net devices under 0000:86:00.0: cvl_0_0 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:46.526 Found net devices under 0000:86:00.1: cvl_0_1 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:46.526 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:46.784 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:46.784 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:46.784 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:46.784 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:46.784 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:46.784 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms 00:12:46.784 00:12:46.784 --- 10.0.0.2 ping statistics --- 00:12:46.784 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:46.784 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:12:46.784 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:46.784 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:46.784 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:12:46.784 00:12:46.784 --- 10.0.0.1 ping statistics --- 00:12:46.784 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:46.784 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:12:46.784 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:46.784 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:12:46.784 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:46.784 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:46.784 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:46.784 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:46.784 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:46.784 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:46.784 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:46.784 10:21:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:46.784 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:46.784 10:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:46.784 10:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.784 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=2297747 00:12:46.784 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 2297747 00:12:46.784 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:46.784 10:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 2297747 ']' 00:12:46.784 10:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:46.784 10:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:46.784 10:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:46.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:46.784 10:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:46.784 10:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.784 [2024-07-14 10:21:31.721584] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:12:46.784 [2024-07-14 10:21:31.721624] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:46.784 EAL: No free 2048 kB hugepages reported on node 1 00:12:47.042 [2024-07-14 10:21:31.780516] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:47.042 [2024-07-14 10:21:31.822422] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:47.042 [2024-07-14 10:21:31.822461] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:47.042 [2024-07-14 10:21:31.822468] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:47.042 [2024-07-14 10:21:31.822474] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:47.042 [2024-07-14 10:21:31.822479] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:47.042 [2024-07-14 10:21:31.822532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:47.042 [2024-07-14 10:21:31.822639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:47.042 [2024-07-14 10:21:31.822747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:47.042 [2024-07-14 10:21:31.822749] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:47.042 10:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:47.042 10:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:12:47.042 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:47.042 10:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:47.042 10:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.042 10:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:47.042 10:21:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:47.042 10:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.042 10:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.042 10:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.042 10:21:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:47.042 "tick_rate": 2300000000, 00:12:47.042 "poll_groups": [ 00:12:47.042 { 00:12:47.042 "name": "nvmf_tgt_poll_group_000", 00:12:47.042 "admin_qpairs": 0, 00:12:47.042 "io_qpairs": 0, 00:12:47.042 "current_admin_qpairs": 0, 00:12:47.042 "current_io_qpairs": 0, 00:12:47.042 "pending_bdev_io": 0, 00:12:47.042 "completed_nvme_io": 0, 00:12:47.042 "transports": [] 00:12:47.042 }, 00:12:47.042 { 00:12:47.042 "name": "nvmf_tgt_poll_group_001", 00:12:47.042 "admin_qpairs": 0, 00:12:47.042 "io_qpairs": 0, 00:12:47.042 "current_admin_qpairs": 0, 00:12:47.042 "current_io_qpairs": 0, 00:12:47.042 "pending_bdev_io": 0, 00:12:47.042 "completed_nvme_io": 0, 00:12:47.042 "transports": [] 00:12:47.042 }, 00:12:47.042 { 00:12:47.042 "name": "nvmf_tgt_poll_group_002", 00:12:47.042 "admin_qpairs": 0, 00:12:47.042 "io_qpairs": 0, 00:12:47.042 "current_admin_qpairs": 0, 00:12:47.042 "current_io_qpairs": 0, 00:12:47.042 "pending_bdev_io": 0, 00:12:47.042 "completed_nvme_io": 0, 00:12:47.042 "transports": [] 00:12:47.042 }, 00:12:47.042 { 00:12:47.042 "name": "nvmf_tgt_poll_group_003", 00:12:47.042 "admin_qpairs": 0, 00:12:47.042 "io_qpairs": 0, 00:12:47.042 "current_admin_qpairs": 0, 00:12:47.042 "current_io_qpairs": 0, 00:12:47.043 "pending_bdev_io": 0, 00:12:47.043 "completed_nvme_io": 0, 00:12:47.043 "transports": [] 00:12:47.043 } 00:12:47.043 ] 00:12:47.043 }' 00:12:47.043 10:21:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:47.043 10:21:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:47.043 10:21:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:47.043 10:21:31 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:12:47.043 10:21:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:47.043 10:21:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:47.301 10:21:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:47.301 10:21:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:47.301 10:21:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.301 10:21:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.301 [2024-07-14 10:21:32.067530] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:47.301 10:21:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.301 10:21:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:47.301 10:21:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.301 10:21:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.301 10:21:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.301 10:21:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:47.301 "tick_rate": 2300000000, 00:12:47.301 "poll_groups": [ 00:12:47.301 { 00:12:47.301 "name": "nvmf_tgt_poll_group_000", 00:12:47.301 "admin_qpairs": 0, 00:12:47.301 "io_qpairs": 0, 00:12:47.301 "current_admin_qpairs": 0, 00:12:47.301 "current_io_qpairs": 0, 00:12:47.301 "pending_bdev_io": 0, 00:12:47.301 "completed_nvme_io": 0, 00:12:47.301 "transports": [ 00:12:47.301 { 00:12:47.301 "trtype": "TCP" 00:12:47.301 } 00:12:47.301 ] 00:12:47.301 }, 00:12:47.301 { 00:12:47.301 "name": "nvmf_tgt_poll_group_001", 00:12:47.301 "admin_qpairs": 0, 00:12:47.301 "io_qpairs": 0, 00:12:47.301 "current_admin_qpairs": 0, 00:12:47.301 "current_io_qpairs": 0, 00:12:47.301 "pending_bdev_io": 0, 00:12:47.301 "completed_nvme_io": 0, 00:12:47.301 "transports": [ 00:12:47.301 { 00:12:47.301 "trtype": "TCP" 00:12:47.301 } 00:12:47.301 ] 00:12:47.301 }, 00:12:47.301 { 00:12:47.301 "name": "nvmf_tgt_poll_group_002", 00:12:47.301 "admin_qpairs": 0, 00:12:47.301 "io_qpairs": 0, 00:12:47.301 "current_admin_qpairs": 0, 00:12:47.301 "current_io_qpairs": 0, 00:12:47.301 "pending_bdev_io": 0, 00:12:47.301 "completed_nvme_io": 0, 00:12:47.301 "transports": [ 00:12:47.301 { 00:12:47.301 "trtype": "TCP" 00:12:47.301 } 00:12:47.301 ] 00:12:47.301 }, 00:12:47.301 { 00:12:47.301 "name": "nvmf_tgt_poll_group_003", 00:12:47.301 "admin_qpairs": 0, 00:12:47.301 "io_qpairs": 0, 00:12:47.301 "current_admin_qpairs": 0, 00:12:47.301 "current_io_qpairs": 0, 00:12:47.301 "pending_bdev_io": 0, 00:12:47.301 "completed_nvme_io": 0, 00:12:47.301 "transports": [ 00:12:47.301 { 00:12:47.301 "trtype": "TCP" 00:12:47.301 } 00:12:47.301 ] 00:12:47.301 } 00:12:47.301 ] 00:12:47.301 }' 00:12:47.301 10:21:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:47.301 10:21:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:47.301 10:21:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:47.301 10:21:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:47.301 10:21:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:47.301 10:21:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:47.301 10:21:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
00:12:47.301 10:21:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:47.301 10:21:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:47.301 10:21:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:47.301 10:21:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:47.301 10:21:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:47.301 10:21:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:47.301 10:21:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:47.301 10:21:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.301 10:21:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.301 Malloc1 00:12:47.301 10:21:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.301 10:21:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:47.301 10:21:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.301 10:21:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.301 10:21:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.301 10:21:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:47.301 10:21:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.301 10:21:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.301 10:21:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.301 10:21:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:47.301 10:21:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.301 10:21:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.301 10:21:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.301 10:21:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:47.301 10:21:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.301 10:21:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.301 [2024-07-14 10:21:32.235186] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:47.301 10:21:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.301 10:21:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:47.301 10:21:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:47.301 10:21:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:47.301 10:21:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:12:47.301 10:21:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:47.301 10:21:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:12:47.301 10:21:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:47.301 10:21:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:12:47.301 10:21:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:47.301 10:21:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:12:47.301 10:21:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:12:47.301 10:21:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:47.301 [2024-07-14 10:21:32.263584] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:12:47.560 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:47.560 could not add new controller: failed to write to nvme-fabrics device 00:12:47.560 10:21:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:12:47.560 10:21:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:47.560 10:21:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:47.560 10:21:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:47.560 10:21:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:47.560 10:21:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.560 10:21:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.560 10:21:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.560 10:21:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:48.495 10:21:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:48.495 10:21:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:48.495 10:21:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:48.495 10:21:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:48.495 10:21:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:51.026 10:21:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:51.026 10:21:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:51.026 10:21:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:51.026 10:21:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:51.026 10:21:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:51.026 10:21:35 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:51.026 10:21:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:51.026 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.026 10:21:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:51.026 10:21:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:51.026 10:21:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:51.026 10:21:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:51.026 10:21:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:51.026 10:21:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:51.026 10:21:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:51.026 10:21:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:51.026 10:21:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.026 10:21:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.026 10:21:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.026 10:21:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:51.026 10:21:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:51.026 10:21:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:51.026 10:21:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:12:51.026 10:21:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:51.026 10:21:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:12:51.026 10:21:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:51.026 10:21:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:12:51.026 10:21:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:51.026 10:21:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:12:51.026 10:21:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:12:51.026 10:21:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:51.026 [2024-07-14 10:21:35.606992] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:12:51.026 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:51.026 could not add new controller: failed to write to nvme-fabrics device 00:12:51.026 10:21:35 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:12:51.026 10:21:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:51.026 10:21:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:51.026 10:21:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:51.026 10:21:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:51.026 10:21:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.026 10:21:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.026 10:21:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.026 10:21:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:51.961 10:21:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:51.961 10:21:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:51.961 10:21:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:51.961 10:21:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:51.961 10:21:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:53.865 10:21:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:53.865 10:21:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:53.865 10:21:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:53.865 10:21:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:53.865 10:21:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:53.865 10:21:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:53.865 10:21:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:54.125 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:54.125 10:21:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:54.125 10:21:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:54.125 10:21:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:54.125 10:21:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:54.125 10:21:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:54.125 10:21:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:54.125 10:21:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:54.125 10:21:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:54.125 10:21:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.125 10:21:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.125 10:21:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.125 10:21:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:54.125 10:21:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:54.125 10:21:38 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:54.125 10:21:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.125 10:21:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.125 10:21:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.125 10:21:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:54.125 10:21:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.125 10:21:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.125 [2024-07-14 10:21:38.943371] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:54.125 10:21:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.125 10:21:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:54.125 10:21:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.125 10:21:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.125 10:21:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.125 10:21:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:54.125 10:21:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.125 10:21:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.125 10:21:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.125 10:21:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:55.505 10:21:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:55.505 10:21:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:55.505 10:21:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:55.505 10:21:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:55.505 10:21:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:57.413 10:21:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:57.413 10:21:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:57.413 10:21:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:57.413 10:21:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:57.413 10:21:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:57.413 10:21:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:57.413 10:21:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:57.413 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:57.413 10:21:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:57.413 10:21:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:57.413 10:21:42 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:57.413 10:21:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:57.413 10:21:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:57.413 10:21:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:57.413 10:21:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:57.413 10:21:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:57.413 10:21:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.413 10:21:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.413 10:21:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.413 10:21:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:57.413 10:21:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.413 10:21:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.413 10:21:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.413 10:21:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:57.413 10:21:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:57.413 10:21:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.413 10:21:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.413 10:21:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.413 10:21:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:57.413 10:21:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.413 10:21:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.413 [2024-07-14 10:21:42.204647] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:57.413 10:21:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.413 10:21:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:57.413 10:21:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.413 10:21:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.413 10:21:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.413 10:21:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:57.413 10:21:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.413 10:21:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.413 10:21:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.413 10:21:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:58.804 10:21:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:58.804 10:21:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:12:58.804 10:21:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:58.804 10:21:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:58.804 10:21:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:00.706 10:21:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:00.706 10:21:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:00.706 10:21:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:00.706 10:21:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:00.706 10:21:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:00.706 10:21:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:00.706 10:21:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:00.706 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:00.706 10:21:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:00.706 10:21:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:00.706 10:21:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:00.706 10:21:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:00.706 10:21:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:00.706 10:21:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:00.706 10:21:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:00.706 10:21:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:00.706 10:21:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.706 10:21:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.706 10:21:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.706 10:21:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:00.706 10:21:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.706 10:21:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.706 10:21:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.706 10:21:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:00.706 10:21:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:00.706 10:21:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.706 10:21:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.706 10:21:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.706 10:21:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:00.706 10:21:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.706 10:21:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.706 [2024-07-14 10:21:45.518375] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:13:00.706 10:21:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.706 10:21:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:00.706 10:21:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.706 10:21:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.706 10:21:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.706 10:21:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:00.706 10:21:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.706 10:21:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.706 10:21:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.706 10:21:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:02.084 10:21:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:02.084 10:21:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:02.084 10:21:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:02.084 10:21:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:02.084 10:21:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:03.989 10:21:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:03.989 10:21:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:03.989 10:21:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:03.989 10:21:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:03.989 10:21:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:03.989 10:21:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:03.989 10:21:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:03.989 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:03.989 10:21:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:03.989 10:21:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:03.989 10:21:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:03.989 10:21:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:03.989 10:21:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:03.989 10:21:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:03.989 10:21:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:03.989 10:21:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:03.989 10:21:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.989 10:21:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.989 10:21:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:13:03.989 10:21:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:03.989 10:21:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.989 10:21:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.989 10:21:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.989 10:21:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:03.989 10:21:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:03.989 10:21:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.989 10:21:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.989 10:21:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.989 10:21:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:03.989 10:21:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.989 10:21:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.989 [2024-07-14 10:21:48.851061] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:03.989 10:21:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.989 10:21:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:03.989 10:21:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.989 10:21:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.989 10:21:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.989 10:21:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:03.989 10:21:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.989 10:21:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.989 10:21:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.989 10:21:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:05.367 10:21:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:05.367 10:21:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:05.367 10:21:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:05.367 10:21:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:05.367 10:21:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:07.270 10:21:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:07.270 10:21:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:07.270 10:21:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:07.270 10:21:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:07.270 10:21:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:07.270 
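The loop traced above (target/rpc.sh lines 81-94) runs one full subsystem lifecycle per iteration: create the subsystem, expose a TCP listener, attach the Malloc1 bdev as a namespace, allow any host, connect from the kernel initiator, then tear everything back down. A minimal stand-alone sketch of that cycle follows; the rpc.py path, the 10.0.0.2:4420 listener and the Malloc1 bdev are taken from the trace, while the loop count and the omitted hostnqn/hostid handling are simplifications, not the harness's exact code.

# Hedged sketch of the create/connect/disconnect/delete cycle seen in the trace.
# Assumes an SPDK nvmf_tgt is already running and the Malloc1 bdev already exists.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

for i in $(seq 1 5); do
    $RPC nvmf_create_subsystem "$NQN" -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_ns "$NQN" Malloc1 -n 5      # attach bdev as NSID 5
    $RPC nvmf_subsystem_allow_any_host "$NQN"

    nvme connect -t tcp -n "$NQN" -a 10.0.0.2 -s 4420   # host side (hostnqn/hostid omitted here)
    sleep 2                                             # crude wait for the block device
    nvme disconnect -n "$NQN"

    $RPC nvmf_subsystem_remove_ns "$NQN" 5
    $RPC nvmf_delete_subsystem "$NQN"
done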
10:21:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:07.270 10:21:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:07.270 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:07.270 10:21:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:07.270 10:21:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:07.270 10:21:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:07.270 10:21:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:07.270 10:21:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:07.270 10:21:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:07.270 10:21:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:07.270 10:21:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:07.270 10:21:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.270 10:21:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.270 10:21:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.270 10:21:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:07.270 10:21:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.270 10:21:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.270 10:21:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.270 10:21:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:07.270 10:21:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:07.270 10:21:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.270 10:21:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.270 10:21:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.270 10:21:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:07.270 10:21:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.270 10:21:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.270 [2024-07-14 10:21:52.237717] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:07.270 10:21:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.270 10:21:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:07.270 10:21:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.270 10:21:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.270 10:21:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.270 10:21:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:07.270 10:21:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.270 10:21:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.529 10:21:52 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.529 10:21:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:08.529 10:21:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:08.529 10:21:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:08.529 10:21:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:08.529 10:21:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:08.529 10:21:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:10.432 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:10.432 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:10.432 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:10.432 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:10.432 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:10.432 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:10.432 10:21:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:10.691 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- 
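Between connect and disconnect the harness polls for the namespace to appear as a block device, matching on the subsystem serial number with lsblk, and later polls again for it to disappear. The helpers below are an illustrative re-implementation of that wait logic; the 15-attempt, 2-second cadence mirrors the counters visible in the trace, but the function names and exact ordering are assumptions rather than the definitions in autotest_common.sh.

# Illustrative polling helpers, modelled on the lsblk/grep loops in the trace.
wait_for_serial() {
    local serial=$1 want=${2:-1} i=0 found
    while (( i++ <= 15 )); do
        found=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( found == want )) && return 0      # expected number of devices showed up
        sleep 2
    done
    return 1
}

wait_for_serial_gone() {
    local serial=$1 i=0
    while (( i++ <= 15 )); do
        lsblk -l -o NAME,SERIAL | grep -q -w "$serial" || return 0   # device is gone
        sleep 2
    done
    return 1
}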
common/autotest_common.sh@10 -- # set +x 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.691 [2024-07-14 10:21:55.531183] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.691 [2024-07-14 10:21:55.579292] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.691 [2024-07-14 10:21:55.631438] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.691 10:21:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:10.692 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.692 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.692 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.692 10:21:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:10.692 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.692 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
00:13:10.692 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.692 10:21:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:10.692 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.692 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.692 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.692 10:21:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:10.692 10:21:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:10.692 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.692 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.951 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.951 10:21:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:10.951 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.951 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.951 [2024-07-14 10:21:55.679588] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:10.951 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.951 10:21:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:10.951 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.951 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.951 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.951 10:21:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:10.951 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.951 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.951 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.951 10:21:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:10.951 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.951 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.951 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.951 10:21:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:10.951 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.951 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.951 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.951 10:21:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:10.951 10:21:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:10.951 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.951 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
00:13:10.951 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.951 10:21:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:10.951 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.951 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.951 [2024-07-14 10:21:55.727775] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:10.951 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.951 10:21:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:10.951 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.951 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.951 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.951 10:21:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:10.951 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.951 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.951 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.951 10:21:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:10.951 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.951 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.951 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.951 10:21:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:10.951 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.951 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.951 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.951 10:21:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:10.951 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.951 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.951 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.951 10:21:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:10.951 "tick_rate": 2300000000, 00:13:10.951 "poll_groups": [ 00:13:10.951 { 00:13:10.951 "name": "nvmf_tgt_poll_group_000", 00:13:10.951 "admin_qpairs": 2, 00:13:10.951 "io_qpairs": 168, 00:13:10.951 "current_admin_qpairs": 0, 00:13:10.951 "current_io_qpairs": 0, 00:13:10.951 "pending_bdev_io": 0, 00:13:10.951 "completed_nvme_io": 267, 00:13:10.951 "transports": [ 00:13:10.951 { 00:13:10.951 "trtype": "TCP" 00:13:10.951 } 00:13:10.951 ] 00:13:10.951 }, 00:13:10.951 { 00:13:10.951 "name": "nvmf_tgt_poll_group_001", 00:13:10.951 "admin_qpairs": 2, 00:13:10.951 "io_qpairs": 168, 00:13:10.951 "current_admin_qpairs": 0, 00:13:10.951 "current_io_qpairs": 0, 00:13:10.951 "pending_bdev_io": 0, 00:13:10.951 "completed_nvme_io": 267, 00:13:10.951 "transports": [ 00:13:10.951 { 00:13:10.951 "trtype": "TCP" 00:13:10.951 } 00:13:10.951 ] 00:13:10.951 }, 00:13:10.951 { 
00:13:10.951 "name": "nvmf_tgt_poll_group_002", 00:13:10.951 "admin_qpairs": 1, 00:13:10.951 "io_qpairs": 168, 00:13:10.951 "current_admin_qpairs": 0, 00:13:10.951 "current_io_qpairs": 0, 00:13:10.951 "pending_bdev_io": 0, 00:13:10.951 "completed_nvme_io": 219, 00:13:10.951 "transports": [ 00:13:10.951 { 00:13:10.951 "trtype": "TCP" 00:13:10.951 } 00:13:10.951 ] 00:13:10.951 }, 00:13:10.951 { 00:13:10.951 "name": "nvmf_tgt_poll_group_003", 00:13:10.951 "admin_qpairs": 2, 00:13:10.951 "io_qpairs": 168, 00:13:10.951 "current_admin_qpairs": 0, 00:13:10.951 "current_io_qpairs": 0, 00:13:10.951 "pending_bdev_io": 0, 00:13:10.951 "completed_nvme_io": 269, 00:13:10.951 "transports": [ 00:13:10.951 { 00:13:10.951 "trtype": "TCP" 00:13:10.951 } 00:13:10.951 ] 00:13:10.951 } 00:13:10.951 ] 00:13:10.951 }' 00:13:10.951 10:21:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:10.951 10:21:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:10.951 10:21:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:10.951 10:21:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:10.951 10:21:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:10.951 10:21:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:10.951 10:21:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:10.951 10:21:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:10.951 10:21:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:10.951 10:21:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:13:10.951 10:21:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:10.951 10:21:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:10.951 10:21:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:13:10.951 10:21:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:10.951 10:21:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:13:10.951 10:21:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:10.951 10:21:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:13:10.951 10:21:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:10.951 10:21:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:10.951 rmmod nvme_tcp 00:13:10.951 rmmod nvme_fabrics 00:13:10.951 rmmod nvme_keyring 00:13:10.951 10:21:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:11.211 10:21:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:13:11.211 10:21:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:13:11.211 10:21:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 2297747 ']' 00:13:11.211 10:21:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 2297747 00:13:11.211 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 2297747 ']' 00:13:11.211 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 2297747 00:13:11.211 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:13:11.211 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:11.211 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2297747 00:13:11.211 10:21:55 nvmf_tcp.nvmf_rpc -- 
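The per-poll-group statistics printed above are reduced by the jsum calls in the trace: a jq filter pulls one numeric field out of every poll group and awk sums the column, so the admin queue pairs 2+2+1+2 give the 7 checked against zero and the four groups of 168 I/O queue pairs give 672. A small sketch of the same aggregation, assuming the JSON has been captured into a shell variable as the trace does:

# Sum one numeric field across all poll groups in nvmf_get_stats output.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
stats=$($RPC nvmf_get_stats)

jsum() {
    local filter=$1
    echo "$stats" | jq "$filter" | awk '{s+=$1} END {print s}'
}

admin_qpairs=$(jsum '.poll_groups[].admin_qpairs')   # 2+2+1+2 = 7 in this run
io_qpairs=$(jsum '.poll_groups[].io_qpairs')         # 4*168   = 672 in this run
(( admin_qpairs > 0 )) && (( io_qpairs > 0 ))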
common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:11.211 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:11.211 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2297747' 00:13:11.211 killing process with pid 2297747 00:13:11.211 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 2297747 00:13:11.211 10:21:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 2297747 00:13:11.211 10:21:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:11.211 10:21:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:11.211 10:21:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:11.211 10:21:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:11.211 10:21:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:11.211 10:21:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:11.211 10:21:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:11.211 10:21:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:13.747 10:21:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:13.747 00:13:13.747 real 0m32.524s 00:13:13.747 user 1m38.658s 00:13:13.747 sys 0m6.100s 00:13:13.747 10:21:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:13.747 10:21:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.747 ************************************ 00:13:13.748 END TEST nvmf_rpc 00:13:13.748 ************************************ 00:13:13.748 10:21:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:13.748 10:21:58 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:13.748 10:21:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:13.748 10:21:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:13.748 10:21:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:13.748 ************************************ 00:13:13.748 START TEST nvmf_invalid 00:13:13.748 ************************************ 00:13:13.748 10:21:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:13.748 * Looking for test storage... 
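Before the nvmf_invalid test starts, the nvmf_rpc run above finished with nvmftestfini, which unwinds the fixture: the host-side NVMe/TCP kernel modules are removed, the nvmf_tgt process (pid 2297747 here) is killed and reaped, and the target-side addresses and namespace are cleaned up. The following is a rough equivalent of that cleanup; the pid and interface names come from this run, and the explicit netns delete is an assumption standing in for the harness's _remove_spdk_ns helper.

# Approximate teardown mirroring nvmftestfini in the trace (simplified).
sync
modprobe -v -r nvme-tcp          # also drops nvme_fabrics / nvme_keyring dependents
modprobe -v -r nvme-fabrics

kill "$nvmfpid" 2>/dev/null      # 2297747 in this run
wait "$nvmfpid" 2>/dev/null || true

ip netns delete cvl_0_0_ns_spdk 2>/dev/null   # assumed equivalent of _remove_spdk_ns
ip -4 addr flush cvl_0_1 2>/dev/null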
00:13:13.748 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:13.748 10:21:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:13.748 10:21:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:13.748 10:21:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:13.748 10:21:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:13.748 10:21:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:13.748 10:21:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:13.748 10:21:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:13.748 10:21:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:13.748 10:21:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:13.748 10:21:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:13.748 10:21:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:13.748 10:21:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:13.748 10:21:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:13.748 10:21:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:13.748 10:21:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:13.748 10:21:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:13.748 10:21:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:13.748 10:21:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:13.748 10:21:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:13.748 10:21:58 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:13.748 10:21:58 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:13.748 10:21:58 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:13.748 10:21:58 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.748 10:21:58 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.748 10:21:58 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.748 10:21:58 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:13.748 10:21:58 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.748 10:21:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:13:13.748 10:21:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:13.748 10:21:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:13.748 10:21:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:13.748 10:21:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:13.748 10:21:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:13.748 10:21:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:13.748 10:21:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:13.748 10:21:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:13.748 10:21:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:13.748 10:21:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:13.748 10:21:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:13.748 10:21:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:13.748 10:21:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:13.748 10:21:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:13.748 10:21:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:13.748 10:21:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:13.748 10:21:58 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:13:13.748 10:21:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:13.748 10:21:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:13.748 10:21:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:13.748 10:21:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:13.748 10:21:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:13.748 10:21:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:13.748 10:21:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:13.748 10:21:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:13:13.748 10:21:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:19.020 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:19.020 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:19.020 Found net devices under 0000:86:00.0: cvl_0_0 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- 
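nvmftestinit's device scan, traced here, whitelists Intel E810/X722 and Mellanox device IDs and then walks sysfs to map each matching PCI function to its kernel net device; the first E810 port maps to cvl_0_0 above, and the second port is handled the same way just below. The loop can be expressed roughly as follows; the 0000:86:00.x addresses come from this machine, the ID list is abbreviated, and the operstate check is an assumption about what the harness's "up" test does.

# Roughly how the trace maps NVMe-oF-capable NICs to net devices via sysfs.
declare -a net_devs=()
for pci in 0000:86:00.0 0000:86:00.1; do              # E810 ports (0x8086:0x159b) found above
    for dev in /sys/bus/pci/devices/$pci/net/*; do
        [[ -e $dev ]] || continue
        state=$(cat "$dev/operstate" 2>/dev/null)
        [[ $state == up ]] && net_devs+=("${dev##*/}")
    done
done
echo "usable interfaces: ${net_devs[*]}"              # cvl_0_0 cvl_0_1 in this run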
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:19.020 Found net devices under 0000:86:00.1: cvl_0_1 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:19.020 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:19.021 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:19.021 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:19.021 10:22:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:19.279 10:22:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:19.279 10:22:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:19.279 10:22:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:19.279 10:22:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:19.279 10:22:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:19.279 10:22:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:19.279 10:22:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:19.279 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:19.279 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.161 ms 00:13:19.279 00:13:19.279 --- 10.0.0.2 ping statistics --- 00:13:19.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:19.279 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:13:19.279 10:22:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:19.279 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:19.279 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:13:19.279 00:13:19.279 --- 10.0.0.1 ping statistics --- 00:13:19.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:19.279 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:13:19.279 10:22:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:19.280 10:22:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:13:19.280 10:22:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:19.280 10:22:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:19.280 10:22:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:19.280 10:22:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:19.280 10:22:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:19.280 10:22:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:19.280 10:22:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:19.280 10:22:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:19.539 10:22:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:19.539 10:22:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:19.539 10:22:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:19.539 10:22:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=2305344 00:13:19.539 10:22:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 2305344 00:13:19.539 10:22:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:19.539 10:22:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 2305344 ']' 00:13:19.539 10:22:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:19.539 10:22:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:19.539 10:22:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:19.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:19.539 10:22:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:19.539 10:22:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:19.539 [2024-07-14 10:22:04.318324] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
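Because both ports sit in the same host, the harness isolates the target side in a network namespace: cvl_0_0 is moved into cvl_0_0_ns_spdk with 10.0.0.2/24, cvl_0_1 stays in the root namespace with 10.0.0.1/24, port 4420 is opened in iptables, and a ping in each direction verifies the path before nvmf_tgt is started inside the namespace. Condensed from the commands in the trace:

# Target/initiator split over a network namespace, as set up by nvmf_tcp_init.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

ping -c 1 10.0.0.2                                # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator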
00:13:19.539 [2024-07-14 10:22:04.318366] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:19.539 EAL: No free 2048 kB hugepages reported on node 1 00:13:19.539 [2024-07-14 10:22:04.386064] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:19.539 [2024-07-14 10:22:04.427803] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:19.539 [2024-07-14 10:22:04.427838] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:19.539 [2024-07-14 10:22:04.427845] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:19.539 [2024-07-14 10:22:04.427851] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:19.539 [2024-07-14 10:22:04.427856] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:19.539 [2024-07-14 10:22:04.427909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:19.539 [2024-07-14 10:22:04.428017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:19.539 [2024-07-14 10:22:04.428126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:19.539 [2024-07-14 10:22:04.428127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:19.798 10:22:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:19.798 10:22:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:13:19.798 10:22:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:19.798 10:22:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:19.798 10:22:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:19.798 10:22:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:19.798 10:22:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:19.798 10:22:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode13752 00:13:19.798 [2024-07-14 10:22:04.724704] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:19.798 10:22:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:19.798 { 00:13:19.798 "nqn": "nqn.2016-06.io.spdk:cnode13752", 00:13:19.798 "tgt_name": "foobar", 00:13:19.798 "method": "nvmf_create_subsystem", 00:13:19.798 "req_id": 1 00:13:19.798 } 00:13:19.798 Got JSON-RPC error response 00:13:19.798 response: 00:13:19.798 { 00:13:19.798 "code": -32603, 00:13:19.798 "message": "Unable to find target foobar" 00:13:19.798 }' 00:13:19.798 10:22:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:19.798 { 00:13:19.798 "nqn": "nqn.2016-06.io.spdk:cnode13752", 00:13:19.798 "tgt_name": "foobar", 00:13:19.798 "method": "nvmf_create_subsystem", 00:13:19.798 "req_id": 1 00:13:19.798 } 00:13:19.798 Got JSON-RPC error response 00:13:19.798 response: 00:13:19.798 { 00:13:19.798 "code": -32603, 00:13:19.798 "message": "Unable to find target foobar" 
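The invalid-input checks that follow all share one pattern: issue an RPC that must fail, capture the JSON-RPC error response, and glob-match the expected message ("Unable to find target", "Invalid SN", "Invalid MN"). A compact version of the first check, with the target name and cnode number taken from the trace and the exit handling simplified:

# Expect nvmf_create_subsystem against a nonexistent target to fail cleanly.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
out=$($RPC nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode13752 2>&1) && exit 1
[[ $out == *"Unable to find target"* ]] || exit 1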
00:13:19.798 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:19.798 10:22:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:19.798 10:22:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode15528 00:13:20.057 [2024-07-14 10:22:04.917361] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15528: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:20.057 10:22:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:20.057 { 00:13:20.057 "nqn": "nqn.2016-06.io.spdk:cnode15528", 00:13:20.057 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:20.057 "method": "nvmf_create_subsystem", 00:13:20.057 "req_id": 1 00:13:20.057 } 00:13:20.057 Got JSON-RPC error response 00:13:20.057 response: 00:13:20.057 { 00:13:20.057 "code": -32602, 00:13:20.057 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:20.057 }' 00:13:20.057 10:22:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:20.057 { 00:13:20.057 "nqn": "nqn.2016-06.io.spdk:cnode15528", 00:13:20.057 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:20.057 "method": "nvmf_create_subsystem", 00:13:20.057 "req_id": 1 00:13:20.057 } 00:13:20.057 Got JSON-RPC error response 00:13:20.057 response: 00:13:20.057 { 00:13:20.057 "code": -32602, 00:13:20.057 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:20.057 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:20.057 10:22:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:20.057 10:22:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode28756 00:13:20.316 [2024-07-14 10:22:05.102006] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28756: invalid model number 'SPDK_Controller' 00:13:20.316 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:20.316 { 00:13:20.316 "nqn": "nqn.2016-06.io.spdk:cnode28756", 00:13:20.317 "model_number": "SPDK_Controller\u001f", 00:13:20.317 "method": "nvmf_create_subsystem", 00:13:20.317 "req_id": 1 00:13:20.317 } 00:13:20.317 Got JSON-RPC error response 00:13:20.317 response: 00:13:20.317 { 00:13:20.317 "code": -32602, 00:13:20.317 "message": "Invalid MN SPDK_Controller\u001f" 00:13:20.317 }' 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:20.317 { 00:13:20.317 "nqn": "nqn.2016-06.io.spdk:cnode28756", 00:13:20.317 "model_number": "SPDK_Controller\u001f", 00:13:20.317 "method": "nvmf_create_subsystem", 00:13:20.317 "req_id": 1 00:13:20.317 } 00:13:20.317 Got JSON-RPC error response 00:13:20.317 response: 00:13:20.317 { 00:13:20.317 "code": -32602, 00:13:20.317 "message": "Invalid MN SPDK_Controller\u001f" 00:13:20.317 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' 
'83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.317 
10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.317 
10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ 0 == \- ]] 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '0cDOKXP1F>= vJ;-3V{!T' 00:13:20.317 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '0cDOKXP1F>= vJ;-3V{!T' nqn.2016-06.io.spdk:cnode8625 00:13:20.578 [2024-07-14 10:22:05.415078] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8625: invalid serial number '0cDOKXP1F>= vJ;-3V{!T' 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:13:20.578 { 00:13:20.578 "nqn": "nqn.2016-06.io.spdk:cnode8625", 00:13:20.578 "serial_number": "0cDOKXP1F>= vJ;-3V{!T", 00:13:20.578 "method": "nvmf_create_subsystem", 00:13:20.578 "req_id": 1 00:13:20.578 } 00:13:20.578 Got JSON-RPC error response 00:13:20.578 response: 00:13:20.578 { 00:13:20.578 "code": -32602, 00:13:20.578 "message": "Invalid SN 0cDOKXP1F>= vJ;-3V{!T" 00:13:20.578 }' 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:13:20.578 { 00:13:20.578 "nqn": "nqn.2016-06.io.spdk:cnode8625", 00:13:20.578 "serial_number": "0cDOKXP1F>= vJ;-3V{!T", 00:13:20.578 "method": "nvmf_create_subsystem", 00:13:20.578 "req_id": 1 00:13:20.578 } 00:13:20.578 Got JSON-RPC error response 00:13:20.578 response: 00:13:20.578 { 00:13:20.578 "code": -32602, 00:13:20.578 "message": "Invalid SN 0cDOKXP1F>= vJ;-3V{!T" 00:13:20.578 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # 
string+=K 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.578 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 
00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 
00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:20.839 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ K == \- ]] 00:13:20.840 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'KlKD=db /ZX/n|@m`b_I6v=EB.oy.V}D2i;NBfI"`' 00:13:20.840 10:22:05 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'KlKD=db /ZX/n|@m`b_I6v=EB.oy.V}D2i;NBfI"`' nqn.2016-06.io.spdk:cnode885 00:13:21.099 [2024-07-14 10:22:05.860703] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode885: invalid model number 'KlKD=db /ZX/n|@m`b_I6v=EB.oy.V}D2i;NBfI"`' 00:13:21.099 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:13:21.099 { 00:13:21.099 "nqn": "nqn.2016-06.io.spdk:cnode885", 00:13:21.099 "model_number": "KlKD=db /ZX/n|@m`b_I6v=EB.oy.V}D2i;NBfI\"`", 00:13:21.099 "method": "nvmf_create_subsystem", 00:13:21.099 "req_id": 1 00:13:21.099 } 00:13:21.099 Got JSON-RPC error response 00:13:21.099 response: 00:13:21.099 { 00:13:21.099 "code": -32602, 00:13:21.099 "message": "Invalid MN KlKD=db /ZX/n|@m`b_I6v=EB.oy.V}D2i;NBfI\"`" 00:13:21.099 }' 00:13:21.099 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:13:21.099 { 00:13:21.099 "nqn": "nqn.2016-06.io.spdk:cnode885", 00:13:21.099 "model_number": "KlKD=db /ZX/n|@m`b_I6v=EB.oy.V}D2i;NBfI\"`", 00:13:21.099 "method": "nvmf_create_subsystem", 00:13:21.099 "req_id": 1 00:13:21.099 } 00:13:21.099 Got JSON-RPC error response 00:13:21.099 response: 00:13:21.099 { 00:13:21.099 "code": -32602, 00:13:21.099 "message": "Invalid MN KlKD=db /ZX/n|@m`b_I6v=EB.oy.V}D2i;NBfI\"`" 00:13:21.099 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:21.099 10:22:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:21.099 [2024-07-14 10:22:06.053384] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:21.099 10:22:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:21.359 10:22:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:21.359 10:22:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:13:21.359 10:22:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:13:21.359 10:22:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:13:21.359 10:22:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:21.619 [2024-07-14 10:22:06.422614] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:21.619 10:22:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:13:21.619 { 00:13:21.619 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:21.619 "listen_address": { 00:13:21.619 "trtype": "tcp", 00:13:21.619 "traddr": "", 00:13:21.619 "trsvcid": "4421" 00:13:21.619 }, 00:13:21.619 "method": "nvmf_subsystem_remove_listener", 00:13:21.619 "req_id": 1 00:13:21.619 } 00:13:21.619 Got JSON-RPC error response 00:13:21.619 response: 00:13:21.619 { 00:13:21.619 "code": -32602, 00:13:21.619 "message": "Invalid parameters" 00:13:21.619 }' 00:13:21.619 10:22:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:13:21.619 { 00:13:21.619 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:21.619 "listen_address": { 00:13:21.619 "trtype": "tcp", 00:13:21.619 "traddr": "", 00:13:21.619 "trsvcid": "4421" 00:13:21.619 }, 00:13:21.619 "method": "nvmf_subsystem_remove_listener", 00:13:21.619 "req_id": 1 00:13:21.619 } 
00:13:21.619 Got JSON-RPC error response 00:13:21.619 response: 00:13:21.619 { 00:13:21.619 "code": -32602, 00:13:21.619 "message": "Invalid parameters" 00:13:21.619 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:21.619 10:22:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27422 -i 0 00:13:21.619 [2024-07-14 10:22:06.595146] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27422: invalid cntlid range [0-65519] 00:13:21.879 10:22:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:13:21.879 { 00:13:21.879 "nqn": "nqn.2016-06.io.spdk:cnode27422", 00:13:21.879 "min_cntlid": 0, 00:13:21.879 "method": "nvmf_create_subsystem", 00:13:21.879 "req_id": 1 00:13:21.879 } 00:13:21.879 Got JSON-RPC error response 00:13:21.879 response: 00:13:21.879 { 00:13:21.879 "code": -32602, 00:13:21.879 "message": "Invalid cntlid range [0-65519]" 00:13:21.879 }' 00:13:21.879 10:22:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:13:21.879 { 00:13:21.879 "nqn": "nqn.2016-06.io.spdk:cnode27422", 00:13:21.879 "min_cntlid": 0, 00:13:21.879 "method": "nvmf_create_subsystem", 00:13:21.879 "req_id": 1 00:13:21.879 } 00:13:21.879 Got JSON-RPC error response 00:13:21.879 response: 00:13:21.879 { 00:13:21.879 "code": -32602, 00:13:21.879 "message": "Invalid cntlid range [0-65519]" 00:13:21.879 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:21.879 10:22:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7382 -i 65520 00:13:21.879 [2024-07-14 10:22:06.775755] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7382: invalid cntlid range [65520-65519] 00:13:21.879 10:22:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:13:21.879 { 00:13:21.879 "nqn": "nqn.2016-06.io.spdk:cnode7382", 00:13:21.879 "min_cntlid": 65520, 00:13:21.879 "method": "nvmf_create_subsystem", 00:13:21.879 "req_id": 1 00:13:21.879 } 00:13:21.879 Got JSON-RPC error response 00:13:21.879 response: 00:13:21.879 { 00:13:21.879 "code": -32602, 00:13:21.879 "message": "Invalid cntlid range [65520-65519]" 00:13:21.879 }' 00:13:21.879 10:22:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:13:21.879 { 00:13:21.879 "nqn": "nqn.2016-06.io.spdk:cnode7382", 00:13:21.879 "min_cntlid": 65520, 00:13:21.879 "method": "nvmf_create_subsystem", 00:13:21.879 "req_id": 1 00:13:21.879 } 00:13:21.879 Got JSON-RPC error response 00:13:21.879 response: 00:13:21.879 { 00:13:21.879 "code": -32602, 00:13:21.879 "message": "Invalid cntlid range [65520-65519]" 00:13:21.879 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:21.879 10:22:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10882 -I 0 00:13:22.138 [2024-07-14 10:22:06.960408] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10882: invalid cntlid range [1-0] 00:13:22.138 10:22:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:13:22.138 { 00:13:22.138 "nqn": "nqn.2016-06.io.spdk:cnode10882", 00:13:22.138 "max_cntlid": 0, 00:13:22.138 "method": "nvmf_create_subsystem", 00:13:22.138 "req_id": 1 00:13:22.138 } 00:13:22.138 Got JSON-RPC error 
response 00:13:22.138 response: 00:13:22.138 { 00:13:22.138 "code": -32602, 00:13:22.138 "message": "Invalid cntlid range [1-0]" 00:13:22.138 }' 00:13:22.138 10:22:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:13:22.138 { 00:13:22.138 "nqn": "nqn.2016-06.io.spdk:cnode10882", 00:13:22.138 "max_cntlid": 0, 00:13:22.138 "method": "nvmf_create_subsystem", 00:13:22.138 "req_id": 1 00:13:22.139 } 00:13:22.139 Got JSON-RPC error response 00:13:22.139 response: 00:13:22.139 { 00:13:22.139 "code": -32602, 00:13:22.139 "message": "Invalid cntlid range [1-0]" 00:13:22.139 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:22.139 10:22:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13361 -I 65520 00:13:22.398 [2024-07-14 10:22:07.136976] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13361: invalid cntlid range [1-65520] 00:13:22.398 10:22:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:13:22.398 { 00:13:22.398 "nqn": "nqn.2016-06.io.spdk:cnode13361", 00:13:22.398 "max_cntlid": 65520, 00:13:22.398 "method": "nvmf_create_subsystem", 00:13:22.398 "req_id": 1 00:13:22.398 } 00:13:22.398 Got JSON-RPC error response 00:13:22.398 response: 00:13:22.398 { 00:13:22.398 "code": -32602, 00:13:22.398 "message": "Invalid cntlid range [1-65520]" 00:13:22.398 }' 00:13:22.398 10:22:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:13:22.398 { 00:13:22.398 "nqn": "nqn.2016-06.io.spdk:cnode13361", 00:13:22.398 "max_cntlid": 65520, 00:13:22.398 "method": "nvmf_create_subsystem", 00:13:22.398 "req_id": 1 00:13:22.398 } 00:13:22.398 Got JSON-RPC error response 00:13:22.398 response: 00:13:22.398 { 00:13:22.398 "code": -32602, 00:13:22.398 "message": "Invalid cntlid range [1-65520]" 00:13:22.398 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:22.398 10:22:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode29826 -i 6 -I 5 00:13:22.398 [2024-07-14 10:22:07.325638] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29826: invalid cntlid range [6-5] 00:13:22.398 10:22:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:13:22.398 { 00:13:22.398 "nqn": "nqn.2016-06.io.spdk:cnode29826", 00:13:22.398 "min_cntlid": 6, 00:13:22.398 "max_cntlid": 5, 00:13:22.398 "method": "nvmf_create_subsystem", 00:13:22.398 "req_id": 1 00:13:22.398 } 00:13:22.398 Got JSON-RPC error response 00:13:22.398 response: 00:13:22.398 { 00:13:22.398 "code": -32602, 00:13:22.398 "message": "Invalid cntlid range [6-5]" 00:13:22.398 }' 00:13:22.398 10:22:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:13:22.398 { 00:13:22.398 "nqn": "nqn.2016-06.io.spdk:cnode29826", 00:13:22.398 "min_cntlid": 6, 00:13:22.398 "max_cntlid": 5, 00:13:22.398 "method": "nvmf_create_subsystem", 00:13:22.398 "req_id": 1 00:13:22.398 } 00:13:22.398 Got JSON-RPC error response 00:13:22.398 response: 00:13:22.398 { 00:13:22.398 "code": -32602, 00:13:22.398 "message": "Invalid cntlid range [6-5]" 00:13:22.398 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:22.398 10:22:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:22.658 
10:22:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:13:22.658 { 00:13:22.658 "name": "foobar", 00:13:22.658 "method": "nvmf_delete_target", 00:13:22.658 "req_id": 1 00:13:22.658 } 00:13:22.658 Got JSON-RPC error response 00:13:22.658 response: 00:13:22.658 { 00:13:22.658 "code": -32602, 00:13:22.658 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:22.658 }' 00:13:22.658 10:22:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:13:22.658 { 00:13:22.658 "name": "foobar", 00:13:22.658 "method": "nvmf_delete_target", 00:13:22.658 "req_id": 1 00:13:22.658 } 00:13:22.658 Got JSON-RPC error response 00:13:22.658 response: 00:13:22.658 { 00:13:22.658 "code": -32602, 00:13:22.658 "message": "The specified target doesn't exist, cannot delete it." 00:13:22.658 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:22.659 10:22:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:22.659 10:22:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:13:22.659 10:22:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:22.659 10:22:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:13:22.659 10:22:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:22.659 10:22:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:13:22.659 10:22:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:22.659 10:22:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:22.659 rmmod nvme_tcp 00:13:22.659 rmmod nvme_fabrics 00:13:22.659 rmmod nvme_keyring 00:13:22.659 10:22:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:22.659 10:22:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:13:22.659 10:22:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:13:22.659 10:22:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 2305344 ']' 00:13:22.659 10:22:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 2305344 00:13:22.659 10:22:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 2305344 ']' 00:13:22.659 10:22:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 2305344 00:13:22.659 10:22:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:13:22.659 10:22:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:22.659 10:22:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2305344 00:13:22.659 10:22:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:22.659 10:22:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:22.659 10:22:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2305344' 00:13:22.659 killing process with pid 2305344 00:13:22.659 10:22:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 2305344 00:13:22.659 10:22:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 2305344 00:13:22.918 10:22:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:22.918 10:22:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:22.918 10:22:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:22.918 10:22:07 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:22.918 10:22:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:22.918 10:22:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:22.918 10:22:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:22.918 10:22:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:25.456 10:22:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:25.456 00:13:25.456 real 0m11.511s 00:13:25.456 user 0m17.110s 00:13:25.456 sys 0m5.204s 00:13:25.456 10:22:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:25.456 10:22:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:25.456 ************************************ 00:13:25.456 END TEST nvmf_invalid 00:13:25.456 ************************************ 00:13:25.456 10:22:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:25.456 10:22:09 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:25.456 10:22:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:25.456 10:22:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:25.456 10:22:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:25.456 ************************************ 00:13:25.456 START TEST nvmf_abort 00:13:25.456 ************************************ 00:13:25.456 10:22:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:25.456 * Looking for test storage... 
00:13:25.456 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:25.456 10:22:09 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:25.456 10:22:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:13:25.456 10:22:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:25.456 10:22:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:25.456 10:22:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:25.456 10:22:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:25.456 10:22:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:25.456 10:22:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:25.456 10:22:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:25.456 10:22:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:25.456 10:22:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:25.456 10:22:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:25.456 10:22:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:25.456 10:22:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:25.456 10:22:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:25.456 10:22:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:25.456 10:22:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:25.456 10:22:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:25.456 10:22:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:25.456 10:22:10 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:25.456 10:22:10 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:25.456 10:22:10 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:25.456 10:22:10 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.456 10:22:10 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:13:25.457 10:22:10 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.457 10:22:10 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:13:25.457 10:22:10 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.457 10:22:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:13:25.457 10:22:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:25.457 10:22:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:25.457 10:22:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:25.457 10:22:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:25.457 10:22:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:25.457 10:22:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:25.457 10:22:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:25.457 10:22:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:25.457 10:22:10 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:25.457 10:22:10 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:13:25.457 10:22:10 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:13:25.457 10:22:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:25.457 10:22:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:25.457 10:22:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:25.457 10:22:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:25.457 10:22:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:25.457 10:22:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:25.457 10:22:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:25.457 10:22:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:25.457 10:22:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:25.457 10:22:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:25.457 10:22:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:13:25.457 10:22:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:30.731 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:30.731 10:22:15 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:30.731 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:30.731 Found net devices under 0000:86:00.0: cvl_0_0 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:30.731 Found net devices under 0000:86:00.1: cvl_0_1 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- 
# NVMF_INITIATOR_IP=10.0.0.1 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:30.731 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:30.990 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:30.990 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:30.990 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:30.990 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:30.990 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:13:30.990 00:13:30.990 --- 10.0.0.2 ping statistics --- 00:13:30.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:30.990 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:13:30.990 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:30.990 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:30.990 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.249 ms 00:13:30.990 00:13:30.990 --- 10.0.0.1 ping statistics --- 00:13:30.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:30.990 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:13:30.990 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:30.990 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:13:30.990 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:30.990 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:30.990 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:30.990 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:30.990 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:30.990 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:30.990 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:30.990 10:22:15 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:13:30.990 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:30.990 10:22:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:30.990 10:22:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:30.990 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=2309543 00:13:30.990 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 2309543 00:13:30.990 10:22:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:30.990 10:22:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 2309543 ']' 00:13:30.990 10:22:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:30.990 10:22:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:30.990 10:22:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:30.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:30.990 10:22:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:30.990 10:22:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:30.990 [2024-07-14 10:22:15.869412] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:13:30.990 [2024-07-14 10:22:15.869460] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:30.990 EAL: No free 2048 kB hugepages reported on node 1 00:13:30.990 [2024-07-14 10:22:15.942735] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:31.249 [2024-07-14 10:22:15.984009] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:31.249 [2024-07-14 10:22:15.984045] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
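At this point nvmftestinit has finished wiring the test topology and nvmf_tgt has been launched inside the target-side network namespace. For readers reproducing the setup by hand, the steps traced above condense to roughly the following (a simplified sketch assembled from this trace, not the literal nvmf/common.sh implementation; the interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses come from this run, and the nvmf_tgt path is shortened):

    # Target port goes into its own namespace; initiator port stays in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Allow NVMe/TCP traffic on port 4420 and verify reachability in both directions.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    modprobe nvme-tcp
    # Start the target inside the namespace with the same core mask the test uses
    # (the harness wraps this in nvmfappstart/waitforlisten).
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &

Keeping the target's interface in a separate namespace is what lets one host act as both target (10.0.0.2) and initiator (10.0.0.1) over the back-to-back NIC ports, which is why the target's EAL/DPDK startup notices that follow are printed from inside cvl_0_0_ns_spdk.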
00:13:31.249 [2024-07-14 10:22:15.984053] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:31.249 [2024-07-14 10:22:15.984060] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:31.249 [2024-07-14 10:22:15.984065] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:31.249 [2024-07-14 10:22:15.984124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:31.249 [2024-07-14 10:22:15.984154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:31.249 [2024-07-14 10:22:15.984155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:31.816 10:22:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:31.816 10:22:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:13:31.816 10:22:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:31.816 10:22:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:31.816 10:22:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:31.816 10:22:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:31.816 10:22:16 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:13:31.816 10:22:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.816 10:22:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:31.816 [2024-07-14 10:22:16.723901] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:31.816 10:22:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.816 10:22:16 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:13:31.816 10:22:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.816 10:22:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:31.816 Malloc0 00:13:31.816 10:22:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.816 10:22:16 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:31.816 10:22:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.816 10:22:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:31.816 Delay0 00:13:31.816 10:22:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.816 10:22:16 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:31.816 10:22:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.816 10:22:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:31.816 10:22:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.816 10:22:16 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:13:31.816 10:22:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.816 10:22:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:31.816 10:22:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.816 10:22:16 
nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:31.816 10:22:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.816 10:22:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:31.816 [2024-07-14 10:22:16.794426] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:32.075 10:22:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.075 10:22:16 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:32.075 10:22:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.075 10:22:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:32.075 10:22:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.075 10:22:16 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:13:32.075 EAL: No free 2048 kB hugepages reported on node 1 00:13:32.075 [2024-07-14 10:22:16.915529] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:34.051 Initializing NVMe Controllers 00:13:34.051 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:34.051 controller IO queue size 128 less than required 00:13:34.051 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:13:34.051 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:13:34.051 Initialization complete. Launching workers. 
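The abort example is now generating queued reads against nqn.2016-06.io.spdk:cnode0 and firing abort commands at them; the Delay0 bdev created above (bdev_delay_create layering 1,000,000 us average and p99 latencies on top of Malloc0) is what keeps I/O outstanding long enough for aborts to find something to cancel. Outside the harness the same target can be stood up and exercised with roughly these commands (a condensed sketch; rpc_cmd in the script is a thin wrapper over scripts/rpc.py, and paths are shortened):

    # Configure the target over the RPC socket.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
    scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0
    scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # Drive reads and aborts for 1 second at queue depth 128 on one core.
    build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128

In the summary printed below, the large "failed" read count is expected here: reads that were successfully aborted complete with an error status, which is why it tracks the number of successful aborts so closely.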
00:13:34.051 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 43095 00:13:34.051 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 43156, failed to submit 62 00:13:34.051 success 43099, unsuccess 57, failed 0 00:13:34.051 10:22:18 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:34.051 10:22:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:34.051 10:22:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:34.051 10:22:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:34.051 10:22:19 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:13:34.051 10:22:19 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:13:34.051 10:22:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:34.051 10:22:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:13:34.051 10:22:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:34.051 10:22:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:13:34.051 10:22:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:34.051 10:22:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:34.051 rmmod nvme_tcp 00:13:34.051 rmmod nvme_fabrics 00:13:34.310 rmmod nvme_keyring 00:13:34.310 10:22:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:34.310 10:22:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:13:34.310 10:22:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:13:34.310 10:22:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 2309543 ']' 00:13:34.310 10:22:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 2309543 00:13:34.310 10:22:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 2309543 ']' 00:13:34.310 10:22:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 2309543 00:13:34.310 10:22:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:13:34.310 10:22:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:34.310 10:22:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2309543 00:13:34.310 10:22:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:34.310 10:22:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:34.310 10:22:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2309543' 00:13:34.310 killing process with pid 2309543 00:13:34.310 10:22:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 2309543 00:13:34.310 10:22:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 2309543 00:13:34.569 10:22:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:34.569 10:22:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:34.569 10:22:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:34.569 10:22:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:34.569 10:22:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:34.569 10:22:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:34.569 10:22:19 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:34.569 10:22:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:36.471 10:22:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:36.471 00:13:36.471 real 0m11.470s 00:13:36.471 user 0m13.171s 00:13:36.471 sys 0m5.366s 00:13:36.471 10:22:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:36.471 10:22:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:36.471 ************************************ 00:13:36.471 END TEST nvmf_abort 00:13:36.471 ************************************ 00:13:36.471 10:22:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:36.471 10:22:21 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:36.471 10:22:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:36.471 10:22:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:36.471 10:22:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:36.471 ************************************ 00:13:36.471 START TEST nvmf_ns_hotplug_stress 00:13:36.471 ************************************ 00:13:36.471 10:22:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:36.729 * Looking for test storage... 00:13:36.729 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:36.729 10:22:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:36.729 10:22:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:13:36.729 10:22:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:36.729 10:22:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:36.729 10:22:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:36.729 10:22:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:36.729 10:22:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:36.729 10:22:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:36.729 10:22:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:36.729 10:22:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:36.729 10:22:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:36.729 10:22:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:36.729 10:22:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:36.729 10:22:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:36.729 10:22:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:36.729 10:22:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:36.729 10:22:21 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:36.729 10:22:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:36.729 10:22:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:36.729 10:22:21 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:36.729 10:22:21 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:36.729 10:22:21 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:36.729 10:22:21 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.729 10:22:21 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.729 10:22:21 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.729 10:22:21 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:13:36.729 10:22:21 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.729 10:22:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:13:36.729 10:22:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:36.729 10:22:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:36.729 10:22:21 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:36.729 10:22:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:36.729 10:22:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:36.729 10:22:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:36.729 10:22:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:36.729 10:22:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:36.729 10:22:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:36.729 10:22:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:13:36.729 10:22:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:36.729 10:22:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:36.729 10:22:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:36.729 10:22:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:36.729 10:22:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:36.730 10:22:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:36.730 10:22:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:36.730 10:22:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:36.730 10:22:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:36.730 10:22:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:36.730 10:22:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:13:36.730 10:22:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:43.301 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:43.301 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:13:43.301 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:43.301 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:43.301 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:43.301 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:43.301 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:43.301 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:13:43.301 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:43.301 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:13:43.301 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:13:43.301 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:13:43.301 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:13:43.301 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:13:43.301 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@298 -- # local -ga mlx 00:13:43.301 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:43.301 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:43.301 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:43.301 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:43.301 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:43.301 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:43.301 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:43.301 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:43.301 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:43.301 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:43.301 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:43.301 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:43.301 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:43.301 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:43.301 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:43.301 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:43.301 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:43.301 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:43.301 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:43.301 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:43.301 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:43.302 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:43.302 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:43.302 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:43.302 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:43.302 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:43.302 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:43.302 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:43.302 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:43.302 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:43.302 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:43.302 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:43.302 10:22:27 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:43.302 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:43.302 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:43.302 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:43.302 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:43.302 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:43.302 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:43.302 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:43.302 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:43.302 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:43.302 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:43.302 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:43.302 Found net devices under 0000:86:00.0: cvl_0_0 00:13:43.302 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:43.302 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:43.302 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:43.302 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:43.302 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:43.302 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:43.302 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:43.302 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:43.302 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:43.302 Found net devices under 0000:86:00.1: cvl_0_1 00:13:43.302 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:43.302 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:43.302 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:13:43.302 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:43.302 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:43.302 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:43.302 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:43.302 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:43.302 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:43.302 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:43.302 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:43.302 10:22:27 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:43.302 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:43.302 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:43.302 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:43.302 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:43.302 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:43.302 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:43.302 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:43.302 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:43.302 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:43.302 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:43.302 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:43.302 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:43.302 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:43.302 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:43.302 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:43.302 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:13:43.302 00:13:43.302 --- 10.0.0.2 ping statistics --- 00:13:43.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.302 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:13:43.302 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:43.302 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:43.302 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:13:43.302 00:13:43.302 --- 10.0.0.1 ping statistics --- 00:13:43.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.302 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:13:43.302 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:43.302 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:13:43.302 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:43.302 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:43.302 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:43.302 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:43.302 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:43.302 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:43.302 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:43.302 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:13:43.302 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:43.302 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:43.302 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:43.302 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=2313683 00:13:43.302 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 2313683 00:13:43.302 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:43.302 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 2313683 ']' 00:13:43.302 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:43.302 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:43.302 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:43.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:43.302 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:43.302 10:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:43.302 [2024-07-14 10:22:27.413611] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:13:43.302 [2024-07-14 10:22:27.413654] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:43.302 EAL: No free 2048 kB hugepages reported on node 1 00:13:43.302 [2024-07-14 10:22:27.484462] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:43.302 [2024-07-14 10:22:27.523547] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:43.302 [2024-07-14 10:22:27.523588] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:43.302 [2024-07-14 10:22:27.523595] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:43.302 [2024-07-14 10:22:27.523601] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:43.302 [2024-07-14 10:22:27.523606] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:43.302 [2024-07-14 10:22:27.523719] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:43.302 [2024-07-14 10:22:27.523839] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:43.302 [2024-07-14 10:22:27.523840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:43.302 10:22:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:43.302 10:22:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:13:43.302 10:22:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:43.302 10:22:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:43.302 10:22:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:43.302 10:22:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:43.302 10:22:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:13:43.302 10:22:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:43.561 [2024-07-14 10:22:28.407751] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:43.561 10:22:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:43.821 10:22:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:43.821 [2024-07-14 10:22:28.793161] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:44.080 10:22:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:44.080 10:22:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b 
Malloc0 00:13:44.339 Malloc0 00:13:44.339 10:22:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:44.598 Delay0 00:13:44.598 10:22:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:44.598 10:22:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:13:44.858 NULL1 00:13:44.858 10:22:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:45.117 10:22:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2314032 00:13:45.117 10:22:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:13:45.117 10:22:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2314032 00:13:45.117 10:22:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:45.117 EAL: No free 2048 kB hugepages reported on node 1 00:13:45.375 10:22:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:45.375 10:22:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:13:45.375 10:22:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:13:45.634 true 00:13:45.634 10:22:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2314032 00:13:45.634 10:22:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:45.893 10:22:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:46.152 10:22:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:13:46.152 10:22:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:13:46.152 true 00:13:46.152 10:22:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2314032 00:13:46.152 10:22:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:46.412 10:22:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:46.671 10:22:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:13:46.671 10:22:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:13:46.930 true 00:13:46.930 10:22:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2314032 00:13:46.930 10:22:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:46.930 10:22:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:47.189 10:22:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:13:47.189 10:22:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:13:47.449 true 00:13:47.449 10:22:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2314032 00:13:47.449 10:22:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:47.708 10:22:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:47.708 10:22:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:13:47.708 10:22:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:13:47.967 true 00:13:47.967 10:22:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2314032 00:13:47.967 10:22:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.226 10:22:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:48.484 10:22:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:13:48.484 10:22:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:13:48.484 true 00:13:48.484 10:22:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2314032 00:13:48.484 10:22:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.741 10:22:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:48.998 10:22:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 
00:13:48.998 10:22:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:13:48.998 true 00:13:49.256 10:22:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2314032 00:13:49.256 10:22:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:49.256 10:22:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:49.514 10:22:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:13:49.514 10:22:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:13:49.772 true 00:13:49.772 10:22:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2314032 00:13:49.772 10:22:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:49.772 10:22:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:50.030 10:22:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:13:50.030 10:22:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:13:50.288 true 00:13:50.288 10:22:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2314032 00:13:50.288 10:22:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:50.546 10:22:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:50.546 10:22:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:13:50.546 10:22:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:13:50.803 true 00:13:50.804 10:22:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2314032 00:13:50.804 10:22:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:51.061 10:22:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:51.318 10:22:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:13:51.318 10:22:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
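The repeating pattern in the trace around this point is the core of the hotplug stress: while spdk_nvme_perf (PERF_PID 2314032) keeps reading from the subsystem, each pass detaches namespace 1, re-attaches the Delay0 bdev, and nudges the NULL1 bdev's size up by one unit (1000, 1001, 1002, ...). Condensed from the trace, an approximation of ns_hotplug_stress.sh rather than the script verbatim (rpc.py path shortened):

    null_size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do        # loop for as long as the perf workload is alive
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        rpc.py nvmf_subsystem_add_ns    nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))
        rpc.py bdev_null_resize NULL1 "$null_size"   # prints 'true' in the trace on success
    done

The intent is to race namespace attach/detach and live bdev resize against in-flight I/O on the target for the duration of the 30-second perf run.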
bdev_null_resize NULL1 1011 00:13:51.318 true 00:13:51.318 10:22:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2314032 00:13:51.318 10:22:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:51.576 10:22:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:51.833 10:22:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:13:51.833 10:22:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:13:52.090 true 00:13:52.090 10:22:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2314032 00:13:52.090 10:22:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:52.090 10:22:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:52.347 10:22:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:13:52.347 10:22:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:13:52.605 true 00:13:52.605 10:22:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2314032 00:13:52.605 10:22:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:52.863 10:22:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:52.863 10:22:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:13:52.863 10:22:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:13:53.121 true 00:13:53.121 10:22:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2314032 00:13:53.121 10:22:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:53.380 10:22:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:53.639 10:22:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:13:53.639 10:22:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:13:53.639 true 00:13:53.639 10:22:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2314032 
00:13:53.639 10:22:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:53.932 10:22:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:54.197 10:22:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:13:54.197 10:22:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:13:54.197 true 00:13:54.455 10:22:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2314032 00:13:54.455 10:22:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:54.455 10:22:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:54.713 10:22:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:13:54.713 10:22:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:13:54.971 true 00:13:54.971 10:22:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2314032 00:13:54.971 10:22:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:54.971 10:22:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:55.230 10:22:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:13:55.230 10:22:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:13:55.488 true 00:13:55.488 10:22:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2314032 00:13:55.488 10:22:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:55.747 10:22:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:55.747 10:22:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:13:55.747 10:22:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:13:56.006 true 00:13:56.006 10:22:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2314032 00:13:56.006 10:22:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:56.265 10:22:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:56.523 10:22:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:13:56.523 10:22:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:13:56.523 true 00:13:56.781 10:22:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2314032 00:13:56.781 10:22:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:56.781 10:22:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:57.039 10:22:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:13:57.039 10:22:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:13:57.298 true 00:13:57.298 10:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2314032 00:13:57.298 10:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:57.298 10:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:57.557 10:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:13:57.557 10:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:13:57.815 true 00:13:57.815 10:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2314032 00:13:57.815 10:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:58.074 10:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:58.332 10:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:13:58.332 10:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:13:58.333 true 00:13:58.333 10:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2314032 00:13:58.333 10:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:58.590 10:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:58.849 10:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:13:58.849 10:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:13:58.849 true 00:13:58.849 10:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2314032 00:13:58.849 10:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:59.108 10:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:59.365 10:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:13:59.365 10:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:13:59.624 true 00:13:59.624 10:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2314032 00:13:59.624 10:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:59.882 10:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:59.882 10:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:13:59.882 10:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:14:00.141 true 00:14:00.141 10:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2314032 00:14:00.141 10:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:00.400 10:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:00.659 10:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:14:00.659 10:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:14:00.659 true 00:14:00.659 10:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2314032 00:14:00.659 10:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:00.918 10:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:01.177 10:22:46 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:14:01.177 10:22:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:14:01.437 true 00:14:01.437 10:22:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2314032 00:14:01.437 10:22:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:01.697 10:22:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:01.697 10:22:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:14:01.697 10:22:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:14:01.956 true 00:14:01.956 10:22:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2314032 00:14:01.956 10:22:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:02.215 10:22:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:02.215 10:22:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:14:02.215 10:22:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:14:02.473 true 00:14:02.473 10:22:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2314032 00:14:02.473 10:22:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:02.732 10:22:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:02.992 10:22:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:14:02.992 10:22:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:14:02.992 true 00:14:03.251 10:22:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2314032 00:14:03.251 10:22:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:03.251 10:22:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:03.510 10:22:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:14:03.510 10:22:48 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:14:03.770 true 00:14:03.770 10:22:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2314032 00:14:03.770 10:22:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:04.029 10:22:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:04.029 10:22:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:14:04.029 10:22:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:14:04.287 true 00:14:04.287 10:22:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2314032 00:14:04.287 10:22:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:04.546 10:22:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:04.804 10:22:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:14:04.804 10:22:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:14:04.804 true 00:14:04.804 10:22:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2314032 00:14:04.804 10:22:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:05.063 10:22:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:05.323 10:22:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:14:05.323 10:22:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:14:05.583 true 00:14:05.583 10:22:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2314032 00:14:05.583 10:22:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:05.842 10:22:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:05.842 10:22:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:14:05.842 10:22:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:14:06.100 true 00:14:06.100 
10:22:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2314032 00:14:06.100 10:22:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:06.359 10:22:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:06.617 10:22:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:14:06.617 10:22:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:14:06.617 true 00:14:06.617 10:22:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2314032 00:14:06.617 10:22:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:06.876 10:22:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:07.135 10:22:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:14:07.135 10:22:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:14:07.394 true 00:14:07.394 10:22:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2314032 00:14:07.394 10:22:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:07.394 10:22:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:07.654 10:22:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:14:07.654 10:22:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:14:07.912 true 00:14:07.912 10:22:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2314032 00:14:07.912 10:22:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:08.171 10:22:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:08.171 10:22:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:14:08.171 10:22:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:14:08.430 true 00:14:08.430 10:22:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2314032 00:14:08.430 10:22:53 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:08.688 10:22:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:08.946 10:22:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:14:08.946 10:22:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:14:08.946 true 00:14:08.946 10:22:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2314032 00:14:08.946 10:22:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:09.204 10:22:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:09.463 10:22:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:14:09.463 10:22:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:14:09.722 true 00:14:09.722 10:22:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2314032 00:14:09.722 10:22:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:10.051 10:22:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:10.051 10:22:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:14:10.051 10:22:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:14:10.310 true 00:14:10.310 10:22:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2314032 00:14:10.310 10:22:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:10.310 10:22:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:10.569 10:22:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:14:10.569 10:22:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:14:10.827 true 00:14:10.827 10:22:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2314032 00:14:10.827 10:22:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:14:11.086 10:22:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:11.346 10:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:14:11.346 10:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:14:11.346 true 00:14:11.346 10:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2314032 00:14:11.346 10:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:11.605 10:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:11.864 10:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:14:11.864 10:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:14:12.123 true 00:14:12.123 10:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2314032 00:14:12.123 10:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:12.123 10:22:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:12.382 10:22:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:14:12.382 10:22:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:14:12.641 true 00:14:12.641 10:22:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2314032 00:14:12.641 10:22:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:12.900 10:22:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:12.900 10:22:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:14:12.900 10:22:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:14:13.159 true 00:14:13.159 10:22:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2314032 00:14:13.159 10:22:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:13.417 10:22:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:13.676 10:22:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:14:13.676 10:22:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:14:13.676 true 00:14:13.676 10:22:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2314032 00:14:13.676 10:22:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:13.935 10:22:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:14.194 10:22:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:14:14.194 10:22:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:14:14.452 true 00:14:14.453 10:22:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2314032 00:14:14.453 10:22:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:14.712 10:22:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:14.712 10:22:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:14:14.712 10:22:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:14:14.970 true 00:14:14.970 10:22:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2314032 00:14:14.970 10:22:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:15.229 10:23:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:15.488 10:23:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:14:15.488 10:23:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:14:15.488 Initializing NVMe Controllers 00:14:15.488 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:15.488 Controller IO queue size 128, less than required. 00:14:15.488 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:15.488 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:15.488 Initialization complete. Launching workers. 
00:14:15.488 ========================================================
00:14:15.488 Latency(us)
00:14:15.488 Device Information : IOPS MiB/s Average min max
00:14:15.488 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 27242.40 13.30 4698.40 2552.15 9112.60
00:14:15.488 ========================================================
00:14:15.488 Total : 27242.40 13.30 4698.40 2552.15 9112.60
00:14:15.488 
00:14:15.488 true
00:14:15.488 10:23:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2314032
00:14:15.488 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2314032) - No such process
00:14:15.488 10:23:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2314032
00:14:15.488 10:23:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:15.746 10:23:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:14:16.005 10:23:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:14:16.005 10:23:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:14:16.005 10:23:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:14:16.005 10:23:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:14:16.005 10:23:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:14:16.005 null0
00:14:16.005 10:23:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:14:16.005 10:23:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:14:16.005 10:23:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:14:16.265 null1
00:14:16.265 10:23:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:14:16.265 10:23:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:14:16.265 10:23:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:14:16.524 null2
00:14:16.524 10:23:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:14:16.524 10:23:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:14:16.524 10:23:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:14:16.524 null3
00:14:16.782 10:23:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:14:16.782 10:23:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:14:16.782 10:23:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:14:16.782 null4
00:14:16.782 10:23:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:16.782 10:23:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:16.782 10:23:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:14:17.041 null5 00:14:17.041 10:23:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:17.041 10:23:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:17.041 10:23:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:14:17.300 null6 00:14:17.300 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:17.300 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:17.300 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:14:17.300 null7 00:14:17.300 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:17.300 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:17.300 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:14:17.300 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:17.300 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:17.300 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:17.300 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:14:17.300 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:17.300 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:14:17.300 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:17.300 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:17.300 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:17.300 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
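At this point the background I/O job has exited (the "No such process" and wait above), the remaining namespaces have been removed, and the test moves to its parallel phase: lines 58-60 of the script create eight null bdevs (null0 through null7), 100 MB each with a 4096-byte block size, one backing device per worker. A short sketch of that setup, reconstructed from the trace rather than quoted from the script; the add/remove workers that use these bdevs are traced below:

  # Sketch only: setup for the parallel add/remove phase, rebuilt from the xtrace.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nthreads=8
  pids=()
  for (( i = 0; i < nthreads; i++ )); do
      $rpc bdev_null_create "null$i" 100 4096    # arguments: bdev name, size in MB, block size
  done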
00:14:17.300 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:17.300 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:14:17.300 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:17.300 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:14:17.300 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:17.300 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:17.300 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:17.300 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:17.300 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:14:17.300 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:17.300 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:17.300 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:14:17.300 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:17.300 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:17.300 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:17.300 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:17.300 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:17.300 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:14:17.300 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:17.300 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:14:17.300 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:17.300 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:17.300 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:17.300 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:14:17.300 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:17.300 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:14:17.300 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:17.300 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:14:17.300 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:17.300 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:17.300 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:17.300 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:17.300 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:17.300 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:14:17.300 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:17.300 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:14:17.300 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:17.300 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:17.300 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:17.300 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:17.300 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:14:17.300 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:17.300 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:17.300 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:14:17.300 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:17.300 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:17.300 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
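The interleaved trace above and below comes from eight backgrounded invocations of the add_remove helper (lines 14-18 of the script), one per nsid/bdev pair, launched by lines 62-64 and collected by the wait on line 66 (PIDs 2319636 through 2319649 in this run). A hedged reconstruction of that helper and the launch loop, based only on what the trace shows and not on the script source:

  # Sketch only: rebuilt from the xtrace of ns_hotplug_stress.sh@14-@18 and @62-@66.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  add_remove() {
      local nsid=$1 bdev=$2                                                        # sh@14
      for (( i = 0; i < 10; i++ )); do                                             # sh@16: ten add/remove cycles per worker
          $rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev" # sh@17: attach bdev at a fixed NSID
          $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"         # sh@18: detach it again
      done
  }
  for (( i = 0; i < nthreads; i++ )); do                                           # sh@62
      add_remove $((i + 1)) "null$i" &                                             # sh@63: add_remove 1 null0 ... add_remove 8 null7
      pids+=($!)                                                                   # sh@64
  done
  wait "${pids[@]}"                                                                # sh@66: wait for all eight workers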
00:14:17.300 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:17.300 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:17.300 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:17.300 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:14:17.300 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2319636 2319637 2319639 2319641 2319643 2319645 2319647 2319649 00:14:17.300 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:14:17.300 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:17.300 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:17.300 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:17.559 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:17.559 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:17.559 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:17.559 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:17.559 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:17.559 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:17.559 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:17.559 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:17.818 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:17.818 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:17.818 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:17.818 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:17.818 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:17.818 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:17.818 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:17.818 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:17.818 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:17.818 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:17.818 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:17.818 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:17.818 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:17.818 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:17.818 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:17.818 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:17.818 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:17.818 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:17.818 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:17.818 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:17.818 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:17.818 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:17.818 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:17.818 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:17.818 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:17.818 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:17.818 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:17.818 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:18.078 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:18.078 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:18.078 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:18.078 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:18.078 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:18.078 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:18.078 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:18.078 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:18.078 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:18.078 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:18.078 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:18.078 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:18.078 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:18.078 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:18.078 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:18.078 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:18.078 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:18.078 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:18.078 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:18.078 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:18.078 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:18.078 10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:18.078 
10:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:18.078 10:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:18.078 10:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:18.078 10:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:18.078 10:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:18.078 10:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:18.341 10:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:18.341 10:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:18.341 10:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:18.341 10:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:18.341 10:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:18.341 10:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:18.341 10:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:18.341 10:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:18.601 10:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:18.601 10:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:18.601 10:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:18.601 10:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:18.601 10:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:18.601 10:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:18.601 10:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:18.601 10:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:18.601 10:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:18.601 10:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:18.601 10:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:18.601 10:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:18.601 10:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:18.601 10:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:18.601 10:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:18.601 10:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:18.601 10:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:18.601 10:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:18.601 10:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:18.601 10:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:18.601 10:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:18.601 10:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:18.601 10:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:18.601 10:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:18.601 10:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:18.601 10:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:18.601 10:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:18.601 10:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:18.601 10:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:18.601 10:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:18.601 10:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:18.601 10:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:18.860 10:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:18.860 10:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:18.860 10:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:18.860 10:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:18.860 10:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:18.860 10:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:18.860 10:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:18.860 10:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:18.860 10:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:18.860 10:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:18.860 10:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:18.860 10:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:18.860 10:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:18.860 10:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:18.860 10:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:18.860 10:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:18.860 10:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:18.860 10:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:18.860 10:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:18.860 10:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:18.860 10:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:18.860 10:23:03 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:18.860 10:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:18.860 10:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:19.119 10:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:19.119 10:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:19.119 10:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:19.119 10:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:19.119 10:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:19.119 10:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:19.119 10:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:19.119 10:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:19.119 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.119 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.119 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:19.378 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.378 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.378 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:19.378 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.378 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.378 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:19.378 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.378 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.378 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:19.378 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.379 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.379 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:19.379 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.379 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.379 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:19.379 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.379 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.379 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:19.379 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.379 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.379 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:19.379 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:19.379 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:19.379 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:19.379 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:19.379 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:19.379 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:19.379 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:19.379 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:19.637 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.637 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.637 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:19.638 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.638 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.638 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:19.638 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.638 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.638 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:19.638 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.638 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.638 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:19.638 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.638 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.638 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:19.638 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.638 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.638 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:19.638 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.638 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.638 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:19.638 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.638 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.638 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:19.896 
10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:19.896 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:19.896 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:19.896 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:19.896 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:19.896 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:19.896 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:19.896 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:19.896 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.896 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.896 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:19.896 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.896 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.896 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:19.896 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.896 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.896 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:19.896 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.896 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.896 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:19.896 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.896 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.896 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:19.896 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.896 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.896 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:20.155 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.155 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.155 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:20.155 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.155 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.155 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:20.155 10:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:20.155 10:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:20.155 10:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:20.155 10:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:20.155 10:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:20.155 10:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:20.155 10:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:20.155 10:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:20.414 10:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.414 10:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.414 10:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:20.414 10:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.414 10:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.414 10:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:20.414 10:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.414 10:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.414 10:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:20.414 10:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.414 10:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.414 10:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:20.414 10:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.414 10:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.414 10:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:20.414 10:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.414 10:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.414 10:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:20.414 10:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.414 10:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.414 10:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:20.414 10:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.414 10:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.414 10:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:20.414 10:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:20.672 10:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:20.672 
10:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:20.672 10:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:20.672 10:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:20.672 10:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:20.672 10:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:20.672 10:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:20.672 10:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.672 10:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.672 10:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:20.672 10:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.672 10:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.672 10:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:20.672 10:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.672 10:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.672 10:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:20.672 10:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.672 10:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.672 10:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:20.672 10:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.672 10:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.672 10:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:20.672 10:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.672 10:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.672 10:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:20.672 10:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.672 10:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.672 10:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:20.672 10:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.672 10:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.672 10:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:20.931 10:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:20.931 10:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:20.931 10:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:20.931 10:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:20.931 10:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:20.931 10:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:20.931 10:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:20.931 10:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:20.931 10:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.931 10:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.189 10:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.189 10:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.189 10:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.189 10:23:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.189 10:23:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
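The interleaved sh@16/sh@17/sh@18 entries above all come from the small hotplug loop in ns_hotplug_stress.sh. Reconstructed from this xtrace alone (not from the script itself), the stress pattern is roughly one add/remove loop per namespace, run as several concurrent background jobs, which is why the counters and RPC calls interleave; a sketch, with rpc.py shortened to a relative path:

  add_remove() {
      local nsid=$1 bdev=$2
      for ((i = 0; i < 10; ++i)); do                                                         # sh@16: the counter seen in the trace
          scripts/rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # sh@17
          scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # sh@18
      done
  }
  for n in {1..8}; do
      add_remove "$n" "null$((n - 1))" &   # null0..null7 back namespaces 1..8, as in the trace
  done
  wait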
00:14:21.189 10:23:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.189 10:23:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.189 10:23:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.189 10:23:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.189 10:23:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.189 10:23:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.189 10:23:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.189 10:23:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.189 10:23:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.189 10:23:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:21.189 10:23:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:14:21.189 10:23:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:21.189 10:23:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:14:21.189 10:23:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:21.189 10:23:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:14:21.189 10:23:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:21.189 10:23:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:21.189 rmmod nvme_tcp 00:14:21.189 rmmod nvme_fabrics 00:14:21.189 rmmod nvme_keyring 00:14:21.189 10:23:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:21.189 10:23:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:14:21.189 10:23:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:14:21.189 10:23:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 2313683 ']' 00:14:21.189 10:23:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 2313683 00:14:21.189 10:23:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 2313683 ']' 00:14:21.189 10:23:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 2313683 00:14:21.189 10:23:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:14:21.189 10:23:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:21.189 10:23:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2313683 00:14:21.189 10:23:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:21.189 10:23:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:21.189 10:23:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2313683' 00:14:21.189 killing process with pid 2313683 00:14:21.189 10:23:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 2313683 00:14:21.189 10:23:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 2313683 00:14:21.448 10:23:06 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:21.448 10:23:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:21.448 10:23:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:21.448 10:23:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:21.448 10:23:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:21.448 10:23:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:21.448 10:23:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:21.448 10:23:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:24.014 10:23:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:24.014 00:14:24.014 real 0m46.941s 00:14:24.014 user 3m17.856s 00:14:24.014 sys 0m16.770s 00:14:24.014 10:23:08 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:24.014 10:23:08 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:14:24.014 ************************************ 00:14:24.014 END TEST nvmf_ns_hotplug_stress 00:14:24.014 ************************************ 00:14:24.014 10:23:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:24.014 10:23:08 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:24.014 10:23:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:24.014 10:23:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:24.014 10:23:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:24.014 ************************************ 00:14:24.014 START TEST nvmf_connect_stress 00:14:24.014 ************************************ 00:14:24.014 10:23:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:24.014 * Looking for test storage... 
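The run_test wrapper recorded above appears to supply the START/END banners and the real/user/sys timing printed between tests; the invocation it wraps can also be launched directly from an SPDK checkout when reproducing this run outside the CI workspace (root privileges assumed):

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  sudo test/nvmf/target/connect_stress.sh --transport=tcp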
00:14:24.014 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:24.014 10:23:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:24.014 10:23:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:14:24.014 10:23:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:24.014 10:23:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:24.014 10:23:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:24.014 10:23:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:24.014 10:23:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:24.014 10:23:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:24.014 10:23:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:24.014 10:23:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:24.014 10:23:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:24.014 10:23:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:24.014 10:23:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:24.014 10:23:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:24.014 10:23:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:24.014 10:23:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:24.014 10:23:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:24.014 10:23:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:24.014 10:23:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:24.014 10:23:08 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:24.014 10:23:08 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:24.014 10:23:08 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:24.014 10:23:08 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.014 10:23:08 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.014 10:23:08 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.014 10:23:08 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:14:24.014 10:23:08 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.014 10:23:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:14:24.014 10:23:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:24.014 10:23:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:24.014 10:23:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:24.014 10:23:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:24.014 10:23:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:24.014 10:23:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:24.014 10:23:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:24.014 10:23:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:24.014 10:23:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:24.014 10:23:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:24.014 10:23:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:24.014 10:23:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:24.014 10:23:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:24.014 10:23:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:24.014 10:23:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:24.014 10:23:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:14:24.014 10:23:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:24.014 10:23:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:24.014 10:23:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:24.014 10:23:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:14:24.014 10:23:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:29.282 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:29.282 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:29.282 Found net devices under 0000:86:00.0: cvl_0_0 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:29.282 10:23:14 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:29.282 Found net devices under 0000:86:00.1: cvl_0_1 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:29.282 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:29.541 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:29.541 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:29.541 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:29.541 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:29.541 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:14:29.541 00:14:29.541 --- 10.0.0.2 ping statistics --- 00:14:29.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:29.541 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:14:29.541 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:29.541 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:29.541 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:14:29.541 00:14:29.541 --- 10.0.0.1 ping statistics --- 00:14:29.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:29.541 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:14:29.541 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:29.541 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:14:29.541 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:29.541 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:29.541 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:29.541 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:29.541 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:29.541 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:29.542 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:29.542 10:23:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:29.542 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:29.542 10:23:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:29.542 10:23:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:29.542 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=2323804 00:14:29.542 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 2323804 00:14:29.542 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:29.542 10:23:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 2323804 ']' 00:14:29.542 10:23:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:29.542 10:23:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:29.542 10:23:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:29.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:29.542 10:23:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:29.542 10:23:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:29.542 [2024-07-14 10:23:14.387322] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
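For anyone replaying the nvmf_tcp_init trace above by hand: stripped of the xtrace prefixes, the target-side plumbing for this connect_stress run reduces to the shell sequence below. This is a minimal sketch lifted from this run only; the cvl_0_0/cvl_0_1 interface names, the cvl_0_0_ns_spdk namespace, the 10.0.0.0/24 addresses, port 4420 and the -m 0xE core mask are specific to this rig and this test.

  # move the target-side port into its own network namespace and address both ends
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # admit NVMe/TCP traffic on the initiator side and verify reachability both ways
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # load the kernel NVMe/TCP initiator and start the SPDK target inside the namespace
  modprobe nvme-tcp
  ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &

The two single-packet pings are the readiness check: once the initiator interface (cvl_0_1, 10.0.0.1) and the namespaced target interface (cvl_0_0, 10.0.0.2) can reach each other, nvmf_tgt is launched and the test waits for it to listen on /var/tmp/spdk.sock before creating any subsystems.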
00:14:29.542 [2024-07-14 10:23:14.387368] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:29.542 EAL: No free 2048 kB hugepages reported on node 1 00:14:29.542 [2024-07-14 10:23:14.459745] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:29.542 [2024-07-14 10:23:14.500801] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:29.542 [2024-07-14 10:23:14.500839] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:29.542 [2024-07-14 10:23:14.500846] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:29.542 [2024-07-14 10:23:14.500853] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:29.542 [2024-07-14 10:23:14.500858] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:29.542 [2024-07-14 10:23:14.500982] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:29.542 [2024-07-14 10:23:14.501069] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:29.542 [2024-07-14 10:23:14.501070] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:29.801 10:23:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:29.801 10:23:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:14:29.801 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:29.801 10:23:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:29.801 10:23:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:29.801 10:23:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:29.801 10:23:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:29.801 10:23:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.801 10:23:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:29.801 [2024-07-14 10:23:14.630679] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:29.801 10:23:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.801 10:23:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:29.801 10:23:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.801 10:23:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:29.801 10:23:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.801 10:23:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:29.801 10:23:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.801 10:23:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:29.801 [2024-07-14 10:23:14.661357] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:29.801 10:23:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.801 10:23:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:29.801 10:23:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.801 10:23:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:29.801 NULL1 00:14:29.801 10:23:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.801 10:23:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2324017 00:14:29.801 10:23:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:29.801 10:23:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:29.801 10:23:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:29.801 10:23:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:14:29.801 10:23:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:29.801 10:23:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:29.801 10:23:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:29.801 10:23:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:29.801 10:23:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:29.801 10:23:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:29.801 10:23:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:29.801 10:23:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:29.801 10:23:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:29.801 10:23:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:29.801 10:23:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:29.801 10:23:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:29.801 10:23:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:29.801 10:23:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:29.801 EAL: No free 2048 kB hugepages reported on node 1 00:14:29.801 10:23:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:29.801 10:23:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:29.801 10:23:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:29.801 10:23:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:29.801 10:23:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:29.801 10:23:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:29.801 10:23:14 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:29.801 10:23:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:29.801 10:23:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:29.801 10:23:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:29.801 10:23:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:29.801 10:23:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:29.801 10:23:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:29.801 10:23:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:29.801 10:23:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:29.801 10:23:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:29.801 10:23:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:29.801 10:23:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:29.801 10:23:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:29.801 10:23:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:29.801 10:23:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:29.801 10:23:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:29.801 10:23:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:29.801 10:23:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:29.801 10:23:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:29.801 10:23:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:29.801 10:23:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2324017 00:14:29.801 10:23:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:29.801 10:23:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.801 10:23:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:30.369 10:23:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.369 10:23:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2324017 00:14:30.369 10:23:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:30.369 10:23:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.369 10:23:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:30.627 10:23:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.627 10:23:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2324017 00:14:30.627 10:23:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:30.627 10:23:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.627 10:23:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:30.885 10:23:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.885 10:23:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # 
kill -0 2324017 00:14:30.885 10:23:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:30.885 10:23:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.885 10:23:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:31.144 10:23:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.144 10:23:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2324017 00:14:31.144 10:23:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:31.144 10:23:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.144 10:23:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:31.403 10:23:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.403 10:23:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2324017 00:14:31.403 10:23:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:31.403 10:23:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.403 10:23:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:31.969 10:23:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.969 10:23:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2324017 00:14:31.969 10:23:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:31.969 10:23:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.969 10:23:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:32.227 10:23:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.227 10:23:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2324017 00:14:32.227 10:23:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:32.227 10:23:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.227 10:23:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:32.486 10:23:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.486 10:23:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2324017 00:14:32.486 10:23:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:32.486 10:23:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.486 10:23:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:32.744 10:23:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.744 10:23:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2324017 00:14:32.744 10:23:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:32.744 10:23:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.744 10:23:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:33.002 10:23:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.002 10:23:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2324017 00:14:33.002 10:23:17 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:33.003 10:23:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.003 10:23:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:33.569 10:23:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.569 10:23:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2324017 00:14:33.569 10:23:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:33.569 10:23:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.569 10:23:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:33.827 10:23:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.827 10:23:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2324017 00:14:33.827 10:23:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:33.827 10:23:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.828 10:23:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:34.086 10:23:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.086 10:23:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2324017 00:14:34.086 10:23:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:34.086 10:23:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.086 10:23:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:34.345 10:23:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.345 10:23:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2324017 00:14:34.345 10:23:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:34.345 10:23:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.345 10:23:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:34.604 10:23:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.604 10:23:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2324017 00:14:34.604 10:23:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:34.604 10:23:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.604 10:23:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:35.170 10:23:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:35.170 10:23:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2324017 00:14:35.170 10:23:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:35.170 10:23:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:35.170 10:23:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:35.428 10:23:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:35.428 10:23:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2324017 00:14:35.428 10:23:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 
-- # rpc_cmd 00:14:35.428 10:23:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:35.428 10:23:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:35.686 10:23:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:35.686 10:23:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2324017 00:14:35.686 10:23:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:35.686 10:23:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:35.686 10:23:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:35.944 10:23:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:35.944 10:23:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2324017 00:14:35.944 10:23:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:35.944 10:23:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:35.944 10:23:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:36.203 10:23:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.203 10:23:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2324017 00:14:36.203 10:23:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:36.203 10:23:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.203 10:23:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:36.769 10:23:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.769 10:23:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2324017 00:14:36.769 10:23:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:36.769 10:23:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.769 10:23:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:37.026 10:23:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.026 10:23:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2324017 00:14:37.026 10:23:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:37.026 10:23:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.026 10:23:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:37.284 10:23:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.284 10:23:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2324017 00:14:37.284 10:23:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:37.284 10:23:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.284 10:23:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:37.542 10:23:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.542 10:23:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2324017 00:14:37.542 10:23:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:37.542 10:23:22 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.542 10:23:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:38.109 10:23:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.109 10:23:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2324017 00:14:38.109 10:23:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:38.109 10:23:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.109 10:23:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:38.368 10:23:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.368 10:23:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2324017 00:14:38.368 10:23:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:38.368 10:23:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.368 10:23:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:38.625 10:23:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.625 10:23:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2324017 00:14:38.625 10:23:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:38.625 10:23:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.625 10:23:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:38.883 10:23:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.883 10:23:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2324017 00:14:38.883 10:23:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:38.883 10:23:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.883 10:23:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:39.141 10:23:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.141 10:23:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2324017 00:14:39.141 10:23:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:39.141 10:23:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.141 10:23:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:39.709 10:23:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.709 10:23:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2324017 00:14:39.709 10:23:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:39.709 10:23:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.709 10:23:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:39.967 10:23:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.967 10:23:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2324017 00:14:39.967 10:23:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:39.967 10:23:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 
-- # xtrace_disable 00:14:39.967 10:23:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:39.967 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:40.226 10:23:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.226 10:23:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2324017 00:14:40.226 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2324017) - No such process 00:14:40.226 10:23:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2324017 00:14:40.226 10:23:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:40.226 10:23:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:40.226 10:23:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:40.226 10:23:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:40.226 10:23:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:14:40.226 10:23:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:40.226 10:23:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:14:40.226 10:23:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:40.226 10:23:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:40.226 rmmod nvme_tcp 00:14:40.226 rmmod nvme_fabrics 00:14:40.226 rmmod nvme_keyring 00:14:40.226 10:23:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:40.226 10:23:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:14:40.226 10:23:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:14:40.226 10:23:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 2323804 ']' 00:14:40.226 10:23:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 2323804 00:14:40.226 10:23:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 2323804 ']' 00:14:40.226 10:23:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 2323804 00:14:40.226 10:23:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:14:40.226 10:23:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:40.226 10:23:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2323804 00:14:40.226 10:23:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:40.226 10:23:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:40.226 10:23:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2323804' 00:14:40.226 killing process with pid 2323804 00:14:40.226 10:23:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 2323804 00:14:40.226 10:23:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 2323804 00:14:40.485 10:23:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:40.485 10:23:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:40.485 10:23:25 nvmf_tcp.nvmf_connect_stress -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:40.485 10:23:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:40.485 10:23:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:40.485 10:23:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:40.485 10:23:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:40.485 10:23:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:43.082 10:23:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:43.082 00:14:43.082 real 0m18.995s 00:14:43.082 user 0m39.965s 00:14:43.082 sys 0m8.213s 00:14:43.082 10:23:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:43.082 10:23:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:43.082 ************************************ 00:14:43.082 END TEST nvmf_connect_stress 00:14:43.082 ************************************ 00:14:43.082 10:23:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:43.082 10:23:27 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:43.082 10:23:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:43.082 10:23:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:43.082 10:23:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:43.082 ************************************ 00:14:43.082 START TEST nvmf_fused_ordering 00:14:43.082 ************************************ 00:14:43.082 10:23:27 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:43.082 * Looking for test storage... 
00:14:43.082 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:43.082 10:23:27 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:43.082 10:23:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:14:43.082 10:23:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:43.082 10:23:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:43.082 10:23:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:43.082 10:23:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:43.082 10:23:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:43.082 10:23:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:43.082 10:23:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:43.082 10:23:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:43.082 10:23:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:43.082 10:23:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:43.082 10:23:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:43.082 10:23:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:43.082 10:23:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:43.082 10:23:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:43.082 10:23:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:43.082 10:23:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:43.082 10:23:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:43.082 10:23:27 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:43.082 10:23:27 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:43.082 10:23:27 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:43.082 10:23:27 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.082 10:23:27 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.082 10:23:27 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.082 10:23:27 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:14:43.082 10:23:27 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.082 10:23:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:14:43.082 10:23:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:43.082 10:23:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:43.082 10:23:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:43.082 10:23:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:43.082 10:23:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:43.082 10:23:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:43.082 10:23:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:43.082 10:23:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:43.082 10:23:27 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:43.082 10:23:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:43.082 10:23:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:43.082 10:23:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:43.082 10:23:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:43.082 10:23:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:43.082 10:23:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:43.082 10:23:27 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:14:43.082 10:23:27 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:43.082 10:23:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:43.082 10:23:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:43.082 10:23:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:14:43.082 10:23:27 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:48.357 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:48.357 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:48.357 Found net devices under 0000:86:00.0: cvl_0_0 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:48.357 10:23:33 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:48.357 Found net devices under 0000:86:00.1: cvl_0_1 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:48.357 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:48.615 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:48.615 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:14:48.615 00:14:48.615 --- 10.0.0.2 ping statistics --- 00:14:48.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:48.615 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:14:48.615 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:48.615 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:48.615 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:14:48.615 00:14:48.615 --- 10.0.0.1 ping statistics --- 00:14:48.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:48.615 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:14:48.615 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:48.615 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:14:48.615 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:48.615 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:48.615 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:48.615 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:48.615 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:48.615 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:48.615 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:48.615 10:23:33 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:48.615 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:48.615 10:23:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:48.615 10:23:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:48.615 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=2329174 00:14:48.615 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 2329174 00:14:48.615 10:23:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 2329174 ']' 00:14:48.615 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:48.615 10:23:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:48.615 10:23:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:48.615 10:23:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:48.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:48.615 10:23:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:48.615 10:23:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:48.615 [2024-07-14 10:23:33.438028] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
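Once this second nvmf_tgt instance is up and listening on /var/tmp/spdk.sock, fused_ordering.sh provisions it over JSON-RPC exactly as traced just below (and as connect_stress.sh did earlier, minus the explicit namespace attach). rpc_cmd is the test harness's shorthand for driving that RPC socket; run standalone with scripts/rpc.py, the same sequence would look roughly like this, with the NQN, serial number, listener address/port and null-bdev geometry copied from this run:

  # TCP transport, with the options carried in NVMF_TRANSPORT_OPTS above (-t tcp -o) plus -u 8192
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  # subsystem cnode1: any host may connect (-a), fixed serial, up to 10 namespaces
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # 1000 MiB null bdev with 512-byte blocks, then expose it as a namespace of cnode1
  scripts/rpc.py bdev_null_create NULL1 1000 512
  scripts/rpc.py bdev_wait_for_examine
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

That attach is what the fused_ordering initiator sees when it connects below and reports 'Attached to nqn.2016-06.io.spdk:cnode1' with 'Namespace ID: 1 size: 1GB' before printing its numbered fused_ordering(...) progress lines.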
00:14:48.616 [2024-07-14 10:23:33.438078] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:48.616 EAL: No free 2048 kB hugepages reported on node 1 00:14:48.616 [2024-07-14 10:23:33.509678] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:48.616 [2024-07-14 10:23:33.549467] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:48.616 [2024-07-14 10:23:33.549505] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:48.616 [2024-07-14 10:23:33.549512] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:48.616 [2024-07-14 10:23:33.549517] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:48.616 [2024-07-14 10:23:33.549522] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:48.616 [2024-07-14 10:23:33.549539] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:48.873 10:23:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:48.873 10:23:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:14:48.873 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:48.873 10:23:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:48.873 10:23:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:48.873 10:23:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:48.873 10:23:33 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:48.873 10:23:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.873 10:23:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:48.873 [2024-07-14 10:23:33.673830] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:48.873 10:23:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.873 10:23:33 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:48.873 10:23:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.873 10:23:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:48.873 10:23:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.873 10:23:33 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:48.873 10:23:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.873 10:23:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:48.873 [2024-07-14 10:23:33.697988] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:48.873 10:23:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.873 10:23:33 
nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:48.873 10:23:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.873 10:23:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:48.873 NULL1 00:14:48.873 10:23:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.873 10:23:33 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:48.873 10:23:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.873 10:23:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:48.873 10:23:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.873 10:23:33 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:48.873 10:23:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.873 10:23:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:48.873 10:23:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.873 10:23:33 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:48.873 [2024-07-14 10:23:33.750259] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:14:48.873 [2024-07-14 10:23:33.750288] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2329199 ] 00:14:48.873 EAL: No free 2048 kB hugepages reported on node 1 00:14:49.131 Attached to nqn.2016-06.io.spdk:cnode1 00:14:49.131 Namespace ID: 1 size: 1GB 00:14:49.131 fused_ordering(0) 00:14:49.131 fused_ordering(1) 00:14:49.131 fused_ordering(2) 00:14:49.131 fused_ordering(3) 00:14:49.131 fused_ordering(4) 00:14:49.131 fused_ordering(5) 00:14:49.131 fused_ordering(6) 00:14:49.131 fused_ordering(7) 00:14:49.131 fused_ordering(8) 00:14:49.131 fused_ordering(9) 00:14:49.131 fused_ordering(10) 00:14:49.131 fused_ordering(11) 00:14:49.131 fused_ordering(12) 00:14:49.131 fused_ordering(13) 00:14:49.131 fused_ordering(14) 00:14:49.131 fused_ordering(15) 00:14:49.131 fused_ordering(16) 00:14:49.131 fused_ordering(17) 00:14:49.131 fused_ordering(18) 00:14:49.131 fused_ordering(19) 00:14:49.131 fused_ordering(20) 00:14:49.131 fused_ordering(21) 00:14:49.131 fused_ordering(22) 00:14:49.131 fused_ordering(23) 00:14:49.131 fused_ordering(24) 00:14:49.131 fused_ordering(25) 00:14:49.131 fused_ordering(26) 00:14:49.131 fused_ordering(27) 00:14:49.131 fused_ordering(28) 00:14:49.131 fused_ordering(29) 00:14:49.131 fused_ordering(30) 00:14:49.131 fused_ordering(31) 00:14:49.131 fused_ordering(32) 00:14:49.131 fused_ordering(33) 00:14:49.131 fused_ordering(34) 00:14:49.131 fused_ordering(35) 00:14:49.131 fused_ordering(36) 00:14:49.131 fused_ordering(37) 00:14:49.131 fused_ordering(38) 00:14:49.131 fused_ordering(39) 00:14:49.131 fused_ordering(40) 00:14:49.131 fused_ordering(41) 00:14:49.131 fused_ordering(42) 00:14:49.131 fused_ordering(43) 00:14:49.131 
fused_ordering(44) 00:14:49.131 fused_ordering(45) 00:14:49.131 fused_ordering(46) 00:14:49.131 fused_ordering(47) 00:14:49.131 fused_ordering(48) 00:14:49.131 fused_ordering(49) 00:14:49.131 fused_ordering(50) 00:14:49.131 fused_ordering(51) 00:14:49.131 fused_ordering(52) 00:14:49.131 fused_ordering(53) 00:14:49.131 fused_ordering(54) 00:14:49.131 fused_ordering(55) 00:14:49.131 fused_ordering(56) 00:14:49.131 fused_ordering(57) 00:14:49.131 fused_ordering(58) 00:14:49.131 fused_ordering(59) 00:14:49.131 fused_ordering(60) 00:14:49.131 fused_ordering(61) 00:14:49.131 fused_ordering(62) 00:14:49.131 fused_ordering(63) 00:14:49.131 fused_ordering(64) 00:14:49.131 fused_ordering(65) 00:14:49.131 fused_ordering(66) 00:14:49.131 fused_ordering(67) 00:14:49.131 fused_ordering(68) 00:14:49.131 fused_ordering(69) 00:14:49.131 fused_ordering(70) 00:14:49.131 fused_ordering(71) 00:14:49.131 fused_ordering(72) 00:14:49.131 fused_ordering(73) 00:14:49.131 fused_ordering(74) 00:14:49.131 fused_ordering(75) 00:14:49.131 fused_ordering(76) 00:14:49.131 fused_ordering(77) 00:14:49.131 fused_ordering(78) 00:14:49.131 fused_ordering(79) 00:14:49.131 fused_ordering(80) 00:14:49.131 fused_ordering(81) 00:14:49.131 fused_ordering(82) 00:14:49.131 fused_ordering(83) 00:14:49.131 fused_ordering(84) 00:14:49.131 fused_ordering(85) 00:14:49.131 fused_ordering(86) 00:14:49.131 fused_ordering(87) 00:14:49.131 fused_ordering(88) 00:14:49.131 fused_ordering(89) 00:14:49.131 fused_ordering(90) 00:14:49.131 fused_ordering(91) 00:14:49.131 fused_ordering(92) 00:14:49.131 fused_ordering(93) 00:14:49.131 fused_ordering(94) 00:14:49.131 fused_ordering(95) 00:14:49.131 fused_ordering(96) 00:14:49.131 fused_ordering(97) 00:14:49.131 fused_ordering(98) 00:14:49.131 fused_ordering(99) 00:14:49.131 fused_ordering(100) 00:14:49.131 fused_ordering(101) 00:14:49.131 fused_ordering(102) 00:14:49.131 fused_ordering(103) 00:14:49.131 fused_ordering(104) 00:14:49.131 fused_ordering(105) 00:14:49.131 fused_ordering(106) 00:14:49.131 fused_ordering(107) 00:14:49.131 fused_ordering(108) 00:14:49.131 fused_ordering(109) 00:14:49.131 fused_ordering(110) 00:14:49.131 fused_ordering(111) 00:14:49.131 fused_ordering(112) 00:14:49.131 fused_ordering(113) 00:14:49.131 fused_ordering(114) 00:14:49.131 fused_ordering(115) 00:14:49.131 fused_ordering(116) 00:14:49.131 fused_ordering(117) 00:14:49.131 fused_ordering(118) 00:14:49.131 fused_ordering(119) 00:14:49.131 fused_ordering(120) 00:14:49.131 fused_ordering(121) 00:14:49.131 fused_ordering(122) 00:14:49.131 fused_ordering(123) 00:14:49.131 fused_ordering(124) 00:14:49.131 fused_ordering(125) 00:14:49.131 fused_ordering(126) 00:14:49.131 fused_ordering(127) 00:14:49.131 fused_ordering(128) 00:14:49.131 fused_ordering(129) 00:14:49.131 fused_ordering(130) 00:14:49.131 fused_ordering(131) 00:14:49.131 fused_ordering(132) 00:14:49.131 fused_ordering(133) 00:14:49.131 fused_ordering(134) 00:14:49.131 fused_ordering(135) 00:14:49.131 fused_ordering(136) 00:14:49.131 fused_ordering(137) 00:14:49.131 fused_ordering(138) 00:14:49.131 fused_ordering(139) 00:14:49.131 fused_ordering(140) 00:14:49.131 fused_ordering(141) 00:14:49.131 fused_ordering(142) 00:14:49.131 fused_ordering(143) 00:14:49.131 fused_ordering(144) 00:14:49.131 fused_ordering(145) 00:14:49.131 fused_ordering(146) 00:14:49.131 fused_ordering(147) 00:14:49.131 fused_ordering(148) 00:14:49.131 fused_ordering(149) 00:14:49.131 fused_ordering(150) 00:14:49.131 fused_ordering(151) 00:14:49.131 fused_ordering(152) 00:14:49.131 
fused_ordering(153) 00:14:49.131 fused_ordering(154) 00:14:49.131 fused_ordering(155) 00:14:49.131 fused_ordering(156) 00:14:49.131 fused_ordering(157) 00:14:49.131 fused_ordering(158) 00:14:49.131 fused_ordering(159) 00:14:49.131 fused_ordering(160) 00:14:49.131 fused_ordering(161) 00:14:49.131 fused_ordering(162) 00:14:49.131 fused_ordering(163) 00:14:49.131 fused_ordering(164) 00:14:49.131 fused_ordering(165) 00:14:49.131 fused_ordering(166) 00:14:49.131 fused_ordering(167) 00:14:49.131 fused_ordering(168) 00:14:49.131 fused_ordering(169) 00:14:49.131 fused_ordering(170) 00:14:49.131 fused_ordering(171) 00:14:49.131 fused_ordering(172) 00:14:49.131 fused_ordering(173) 00:14:49.131 fused_ordering(174) 00:14:49.131 fused_ordering(175) 00:14:49.132 fused_ordering(176) 00:14:49.132 fused_ordering(177) 00:14:49.132 fused_ordering(178) 00:14:49.132 fused_ordering(179) 00:14:49.132 fused_ordering(180) 00:14:49.132 fused_ordering(181) 00:14:49.132 fused_ordering(182) 00:14:49.132 fused_ordering(183) 00:14:49.132 fused_ordering(184) 00:14:49.132 fused_ordering(185) 00:14:49.132 fused_ordering(186) 00:14:49.132 fused_ordering(187) 00:14:49.132 fused_ordering(188) 00:14:49.132 fused_ordering(189) 00:14:49.132 fused_ordering(190) 00:14:49.132 fused_ordering(191) 00:14:49.132 fused_ordering(192) 00:14:49.132 fused_ordering(193) 00:14:49.132 fused_ordering(194) 00:14:49.132 fused_ordering(195) 00:14:49.132 fused_ordering(196) 00:14:49.132 fused_ordering(197) 00:14:49.132 fused_ordering(198) 00:14:49.132 fused_ordering(199) 00:14:49.132 fused_ordering(200) 00:14:49.132 fused_ordering(201) 00:14:49.132 fused_ordering(202) 00:14:49.132 fused_ordering(203) 00:14:49.132 fused_ordering(204) 00:14:49.132 fused_ordering(205) 00:14:49.390 fused_ordering(206) 00:14:49.390 fused_ordering(207) 00:14:49.390 fused_ordering(208) 00:14:49.390 fused_ordering(209) 00:14:49.390 fused_ordering(210) 00:14:49.390 fused_ordering(211) 00:14:49.390 fused_ordering(212) 00:14:49.390 fused_ordering(213) 00:14:49.390 fused_ordering(214) 00:14:49.390 fused_ordering(215) 00:14:49.390 fused_ordering(216) 00:14:49.390 fused_ordering(217) 00:14:49.390 fused_ordering(218) 00:14:49.390 fused_ordering(219) 00:14:49.390 fused_ordering(220) 00:14:49.390 fused_ordering(221) 00:14:49.390 fused_ordering(222) 00:14:49.390 fused_ordering(223) 00:14:49.390 fused_ordering(224) 00:14:49.390 fused_ordering(225) 00:14:49.390 fused_ordering(226) 00:14:49.390 fused_ordering(227) 00:14:49.390 fused_ordering(228) 00:14:49.390 fused_ordering(229) 00:14:49.390 fused_ordering(230) 00:14:49.390 fused_ordering(231) 00:14:49.390 fused_ordering(232) 00:14:49.390 fused_ordering(233) 00:14:49.390 fused_ordering(234) 00:14:49.390 fused_ordering(235) 00:14:49.390 fused_ordering(236) 00:14:49.390 fused_ordering(237) 00:14:49.390 fused_ordering(238) 00:14:49.390 fused_ordering(239) 00:14:49.390 fused_ordering(240) 00:14:49.390 fused_ordering(241) 00:14:49.390 fused_ordering(242) 00:14:49.390 fused_ordering(243) 00:14:49.390 fused_ordering(244) 00:14:49.390 fused_ordering(245) 00:14:49.390 fused_ordering(246) 00:14:49.390 fused_ordering(247) 00:14:49.390 fused_ordering(248) 00:14:49.390 fused_ordering(249) 00:14:49.390 fused_ordering(250) 00:14:49.390 fused_ordering(251) 00:14:49.390 fused_ordering(252) 00:14:49.390 fused_ordering(253) 00:14:49.390 fused_ordering(254) 00:14:49.390 fused_ordering(255) 00:14:49.390 fused_ordering(256) 00:14:49.390 fused_ordering(257) 00:14:49.390 fused_ordering(258) 00:14:49.390 fused_ordering(259) 00:14:49.390 fused_ordering(260) 
00:14:49.390 fused_ordering(261) 00:14:49.390 fused_ordering(262) 00:14:49.390 fused_ordering(263) 00:14:49.390 fused_ordering(264) 00:14:49.390 fused_ordering(265) 00:14:49.390 fused_ordering(266) 00:14:49.390 fused_ordering(267) 00:14:49.390 fused_ordering(268) 00:14:49.390 fused_ordering(269) 00:14:49.390 fused_ordering(270) 00:14:49.390 fused_ordering(271) 00:14:49.390 fused_ordering(272) 00:14:49.390 fused_ordering(273) 00:14:49.390 fused_ordering(274) 00:14:49.390 fused_ordering(275) 00:14:49.390 fused_ordering(276) 00:14:49.390 fused_ordering(277) 00:14:49.390 fused_ordering(278) 00:14:49.390 fused_ordering(279) 00:14:49.390 fused_ordering(280) 00:14:49.390 fused_ordering(281) 00:14:49.390 fused_ordering(282) 00:14:49.390 fused_ordering(283) 00:14:49.391 fused_ordering(284) 00:14:49.391 fused_ordering(285) 00:14:49.391 fused_ordering(286) 00:14:49.391 fused_ordering(287) 00:14:49.391 fused_ordering(288) 00:14:49.391 fused_ordering(289) 00:14:49.391 fused_ordering(290) 00:14:49.391 fused_ordering(291) 00:14:49.391 fused_ordering(292) 00:14:49.391 fused_ordering(293) 00:14:49.391 fused_ordering(294) 00:14:49.391 fused_ordering(295) 00:14:49.391 fused_ordering(296) 00:14:49.391 fused_ordering(297) 00:14:49.391 fused_ordering(298) 00:14:49.391 fused_ordering(299) 00:14:49.391 fused_ordering(300) 00:14:49.391 fused_ordering(301) 00:14:49.391 fused_ordering(302) 00:14:49.391 fused_ordering(303) 00:14:49.391 fused_ordering(304) 00:14:49.391 fused_ordering(305) 00:14:49.391 fused_ordering(306) 00:14:49.391 fused_ordering(307) 00:14:49.391 fused_ordering(308) 00:14:49.391 fused_ordering(309) 00:14:49.391 fused_ordering(310) 00:14:49.391 fused_ordering(311) 00:14:49.391 fused_ordering(312) 00:14:49.391 fused_ordering(313) 00:14:49.391 fused_ordering(314) 00:14:49.391 fused_ordering(315) 00:14:49.391 fused_ordering(316) 00:14:49.391 fused_ordering(317) 00:14:49.391 fused_ordering(318) 00:14:49.391 fused_ordering(319) 00:14:49.391 fused_ordering(320) 00:14:49.391 fused_ordering(321) 00:14:49.391 fused_ordering(322) 00:14:49.391 fused_ordering(323) 00:14:49.391 fused_ordering(324) 00:14:49.391 fused_ordering(325) 00:14:49.391 fused_ordering(326) 00:14:49.391 fused_ordering(327) 00:14:49.391 fused_ordering(328) 00:14:49.391 fused_ordering(329) 00:14:49.391 fused_ordering(330) 00:14:49.391 fused_ordering(331) 00:14:49.391 fused_ordering(332) 00:14:49.391 fused_ordering(333) 00:14:49.391 fused_ordering(334) 00:14:49.391 fused_ordering(335) 00:14:49.391 fused_ordering(336) 00:14:49.391 fused_ordering(337) 00:14:49.391 fused_ordering(338) 00:14:49.391 fused_ordering(339) 00:14:49.391 fused_ordering(340) 00:14:49.391 fused_ordering(341) 00:14:49.391 fused_ordering(342) 00:14:49.391 fused_ordering(343) 00:14:49.391 fused_ordering(344) 00:14:49.391 fused_ordering(345) 00:14:49.391 fused_ordering(346) 00:14:49.391 fused_ordering(347) 00:14:49.391 fused_ordering(348) 00:14:49.391 fused_ordering(349) 00:14:49.391 fused_ordering(350) 00:14:49.391 fused_ordering(351) 00:14:49.391 fused_ordering(352) 00:14:49.391 fused_ordering(353) 00:14:49.391 fused_ordering(354) 00:14:49.391 fused_ordering(355) 00:14:49.391 fused_ordering(356) 00:14:49.391 fused_ordering(357) 00:14:49.391 fused_ordering(358) 00:14:49.391 fused_ordering(359) 00:14:49.391 fused_ordering(360) 00:14:49.391 fused_ordering(361) 00:14:49.391 fused_ordering(362) 00:14:49.391 fused_ordering(363) 00:14:49.391 fused_ordering(364) 00:14:49.391 fused_ordering(365) 00:14:49.391 fused_ordering(366) 00:14:49.391 fused_ordering(367) 00:14:49.391 
fused_ordering(368) 00:14:49.391 fused_ordering(369) 00:14:49.391 fused_ordering(370) 00:14:49.391 fused_ordering(371) 00:14:49.391 fused_ordering(372) 00:14:49.391 fused_ordering(373) 00:14:49.391 fused_ordering(374) 00:14:49.391 fused_ordering(375) 00:14:49.391 fused_ordering(376) 00:14:49.391 fused_ordering(377) 00:14:49.391 fused_ordering(378) 00:14:49.391 fused_ordering(379) 00:14:49.391 fused_ordering(380) 00:14:49.391 fused_ordering(381) 00:14:49.391 fused_ordering(382) 00:14:49.391 fused_ordering(383) 00:14:49.391 fused_ordering(384) 00:14:49.391 fused_ordering(385) 00:14:49.391 fused_ordering(386) 00:14:49.391 fused_ordering(387) 00:14:49.391 fused_ordering(388) 00:14:49.391 fused_ordering(389) 00:14:49.391 fused_ordering(390) 00:14:49.391 fused_ordering(391) 00:14:49.391 fused_ordering(392) 00:14:49.391 fused_ordering(393) 00:14:49.391 fused_ordering(394) 00:14:49.391 fused_ordering(395) 00:14:49.391 fused_ordering(396) 00:14:49.391 fused_ordering(397) 00:14:49.391 fused_ordering(398) 00:14:49.391 fused_ordering(399) 00:14:49.391 fused_ordering(400) 00:14:49.391 fused_ordering(401) 00:14:49.391 fused_ordering(402) 00:14:49.391 fused_ordering(403) 00:14:49.391 fused_ordering(404) 00:14:49.391 fused_ordering(405) 00:14:49.391 fused_ordering(406) 00:14:49.391 fused_ordering(407) 00:14:49.391 fused_ordering(408) 00:14:49.391 fused_ordering(409) 00:14:49.391 fused_ordering(410) 00:14:49.649 fused_ordering(411) 00:14:49.649 fused_ordering(412) 00:14:49.649 fused_ordering(413) 00:14:49.649 fused_ordering(414) 00:14:49.649 fused_ordering(415) 00:14:49.649 fused_ordering(416) 00:14:49.649 fused_ordering(417) 00:14:49.649 fused_ordering(418) 00:14:49.649 fused_ordering(419) 00:14:49.649 fused_ordering(420) 00:14:49.649 fused_ordering(421) 00:14:49.649 fused_ordering(422) 00:14:49.649 fused_ordering(423) 00:14:49.649 fused_ordering(424) 00:14:49.649 fused_ordering(425) 00:14:49.649 fused_ordering(426) 00:14:49.649 fused_ordering(427) 00:14:49.649 fused_ordering(428) 00:14:49.649 fused_ordering(429) 00:14:49.649 fused_ordering(430) 00:14:49.649 fused_ordering(431) 00:14:49.649 fused_ordering(432) 00:14:49.649 fused_ordering(433) 00:14:49.649 fused_ordering(434) 00:14:49.649 fused_ordering(435) 00:14:49.649 fused_ordering(436) 00:14:49.649 fused_ordering(437) 00:14:49.649 fused_ordering(438) 00:14:49.649 fused_ordering(439) 00:14:49.649 fused_ordering(440) 00:14:49.649 fused_ordering(441) 00:14:49.649 fused_ordering(442) 00:14:49.649 fused_ordering(443) 00:14:49.649 fused_ordering(444) 00:14:49.649 fused_ordering(445) 00:14:49.649 fused_ordering(446) 00:14:49.649 fused_ordering(447) 00:14:49.649 fused_ordering(448) 00:14:49.649 fused_ordering(449) 00:14:49.649 fused_ordering(450) 00:14:49.649 fused_ordering(451) 00:14:49.649 fused_ordering(452) 00:14:49.649 fused_ordering(453) 00:14:49.649 fused_ordering(454) 00:14:49.649 fused_ordering(455) 00:14:49.649 fused_ordering(456) 00:14:49.649 fused_ordering(457) 00:14:49.649 fused_ordering(458) 00:14:49.649 fused_ordering(459) 00:14:49.649 fused_ordering(460) 00:14:49.649 fused_ordering(461) 00:14:49.649 fused_ordering(462) 00:14:49.649 fused_ordering(463) 00:14:49.649 fused_ordering(464) 00:14:49.649 fused_ordering(465) 00:14:49.649 fused_ordering(466) 00:14:49.649 fused_ordering(467) 00:14:49.649 fused_ordering(468) 00:14:49.649 fused_ordering(469) 00:14:49.649 fused_ordering(470) 00:14:49.649 fused_ordering(471) 00:14:49.649 fused_ordering(472) 00:14:49.649 fused_ordering(473) 00:14:49.649 fused_ordering(474) 00:14:49.649 fused_ordering(475) 
00:14:49.649 fused_ordering(476) 00:14:49.649 fused_ordering(477) 00:14:49.649 fused_ordering(478) 00:14:49.649 fused_ordering(479) 00:14:49.649 fused_ordering(480) 00:14:49.649 fused_ordering(481) 00:14:49.649 fused_ordering(482) 00:14:49.649 fused_ordering(483) 00:14:49.649 fused_ordering(484) 00:14:49.649 fused_ordering(485) 00:14:49.649 fused_ordering(486) 00:14:49.649 fused_ordering(487) 00:14:49.649 fused_ordering(488) 00:14:49.649 fused_ordering(489) 00:14:49.649 fused_ordering(490) 00:14:49.650 fused_ordering(491) 00:14:49.650 fused_ordering(492) 00:14:49.650 fused_ordering(493) 00:14:49.650 fused_ordering(494) 00:14:49.650 fused_ordering(495) 00:14:49.650 fused_ordering(496) 00:14:49.650 fused_ordering(497) 00:14:49.650 fused_ordering(498) 00:14:49.650 fused_ordering(499) 00:14:49.650 fused_ordering(500) 00:14:49.650 fused_ordering(501) 00:14:49.650 fused_ordering(502) 00:14:49.650 fused_ordering(503) 00:14:49.650 fused_ordering(504) 00:14:49.650 fused_ordering(505) 00:14:49.650 fused_ordering(506) 00:14:49.650 fused_ordering(507) 00:14:49.650 fused_ordering(508) 00:14:49.650 fused_ordering(509) 00:14:49.650 fused_ordering(510) 00:14:49.650 fused_ordering(511) 00:14:49.650 fused_ordering(512) 00:14:49.650 fused_ordering(513) 00:14:49.650 fused_ordering(514) 00:14:49.650 fused_ordering(515) 00:14:49.650 fused_ordering(516) 00:14:49.650 fused_ordering(517) 00:14:49.650 fused_ordering(518) 00:14:49.650 fused_ordering(519) 00:14:49.650 fused_ordering(520) 00:14:49.650 fused_ordering(521) 00:14:49.650 fused_ordering(522) 00:14:49.650 fused_ordering(523) 00:14:49.650 fused_ordering(524) 00:14:49.650 fused_ordering(525) 00:14:49.650 fused_ordering(526) 00:14:49.650 fused_ordering(527) 00:14:49.650 fused_ordering(528) 00:14:49.650 fused_ordering(529) 00:14:49.650 fused_ordering(530) 00:14:49.650 fused_ordering(531) 00:14:49.650 fused_ordering(532) 00:14:49.650 fused_ordering(533) 00:14:49.650 fused_ordering(534) 00:14:49.650 fused_ordering(535) 00:14:49.650 fused_ordering(536) 00:14:49.650 fused_ordering(537) 00:14:49.650 fused_ordering(538) 00:14:49.650 fused_ordering(539) 00:14:49.650 fused_ordering(540) 00:14:49.650 fused_ordering(541) 00:14:49.650 fused_ordering(542) 00:14:49.650 fused_ordering(543) 00:14:49.650 fused_ordering(544) 00:14:49.650 fused_ordering(545) 00:14:49.650 fused_ordering(546) 00:14:49.650 fused_ordering(547) 00:14:49.650 fused_ordering(548) 00:14:49.650 fused_ordering(549) 00:14:49.650 fused_ordering(550) 00:14:49.650 fused_ordering(551) 00:14:49.650 fused_ordering(552) 00:14:49.650 fused_ordering(553) 00:14:49.650 fused_ordering(554) 00:14:49.650 fused_ordering(555) 00:14:49.650 fused_ordering(556) 00:14:49.650 fused_ordering(557) 00:14:49.650 fused_ordering(558) 00:14:49.650 fused_ordering(559) 00:14:49.650 fused_ordering(560) 00:14:49.650 fused_ordering(561) 00:14:49.650 fused_ordering(562) 00:14:49.650 fused_ordering(563) 00:14:49.650 fused_ordering(564) 00:14:49.650 fused_ordering(565) 00:14:49.650 fused_ordering(566) 00:14:49.650 fused_ordering(567) 00:14:49.650 fused_ordering(568) 00:14:49.650 fused_ordering(569) 00:14:49.650 fused_ordering(570) 00:14:49.650 fused_ordering(571) 00:14:49.650 fused_ordering(572) 00:14:49.650 fused_ordering(573) 00:14:49.650 fused_ordering(574) 00:14:49.650 fused_ordering(575) 00:14:49.650 fused_ordering(576) 00:14:49.650 fused_ordering(577) 00:14:49.650 fused_ordering(578) 00:14:49.650 fused_ordering(579) 00:14:49.650 fused_ordering(580) 00:14:49.650 fused_ordering(581) 00:14:49.650 fused_ordering(582) 00:14:49.650 
fused_ordering(583) 00:14:49.650 fused_ordering(584) 00:14:49.650 fused_ordering(585) 00:14:49.650 fused_ordering(586) 00:14:49.650 fused_ordering(587) 00:14:49.650 fused_ordering(588) 00:14:49.650 fused_ordering(589) 00:14:49.650 fused_ordering(590) 00:14:49.650 fused_ordering(591) 00:14:49.650 fused_ordering(592) 00:14:49.650 fused_ordering(593) 00:14:49.650 fused_ordering(594) 00:14:49.650 fused_ordering(595) 00:14:49.650 fused_ordering(596) 00:14:49.650 fused_ordering(597) 00:14:49.650 fused_ordering(598) 00:14:49.650 fused_ordering(599) 00:14:49.650 fused_ordering(600) 00:14:49.650 fused_ordering(601) 00:14:49.650 fused_ordering(602) 00:14:49.650 fused_ordering(603) 00:14:49.650 fused_ordering(604) 00:14:49.650 fused_ordering(605) 00:14:49.650 fused_ordering(606) 00:14:49.650 fused_ordering(607) 00:14:49.650 fused_ordering(608) 00:14:49.650 fused_ordering(609) 00:14:49.650 fused_ordering(610) 00:14:49.650 fused_ordering(611) 00:14:49.650 fused_ordering(612) 00:14:49.650 fused_ordering(613) 00:14:49.650 fused_ordering(614) 00:14:49.650 fused_ordering(615) 00:14:50.216 fused_ordering(616) 00:14:50.216 fused_ordering(617) 00:14:50.216 fused_ordering(618) 00:14:50.216 fused_ordering(619) 00:14:50.216 fused_ordering(620) 00:14:50.216 fused_ordering(621) 00:14:50.216 fused_ordering(622) 00:14:50.216 fused_ordering(623) 00:14:50.216 fused_ordering(624) 00:14:50.216 fused_ordering(625) 00:14:50.216 fused_ordering(626) 00:14:50.216 fused_ordering(627) 00:14:50.216 fused_ordering(628) 00:14:50.216 fused_ordering(629) 00:14:50.216 fused_ordering(630) 00:14:50.216 fused_ordering(631) 00:14:50.216 fused_ordering(632) 00:14:50.216 fused_ordering(633) 00:14:50.216 fused_ordering(634) 00:14:50.216 fused_ordering(635) 00:14:50.216 fused_ordering(636) 00:14:50.216 fused_ordering(637) 00:14:50.216 fused_ordering(638) 00:14:50.216 fused_ordering(639) 00:14:50.216 fused_ordering(640) 00:14:50.216 fused_ordering(641) 00:14:50.216 fused_ordering(642) 00:14:50.216 fused_ordering(643) 00:14:50.216 fused_ordering(644) 00:14:50.216 fused_ordering(645) 00:14:50.216 fused_ordering(646) 00:14:50.216 fused_ordering(647) 00:14:50.216 fused_ordering(648) 00:14:50.216 fused_ordering(649) 00:14:50.216 fused_ordering(650) 00:14:50.216 fused_ordering(651) 00:14:50.216 fused_ordering(652) 00:14:50.216 fused_ordering(653) 00:14:50.216 fused_ordering(654) 00:14:50.216 fused_ordering(655) 00:14:50.216 fused_ordering(656) 00:14:50.216 fused_ordering(657) 00:14:50.216 fused_ordering(658) 00:14:50.216 fused_ordering(659) 00:14:50.216 fused_ordering(660) 00:14:50.216 fused_ordering(661) 00:14:50.216 fused_ordering(662) 00:14:50.216 fused_ordering(663) 00:14:50.216 fused_ordering(664) 00:14:50.216 fused_ordering(665) 00:14:50.216 fused_ordering(666) 00:14:50.216 fused_ordering(667) 00:14:50.216 fused_ordering(668) 00:14:50.216 fused_ordering(669) 00:14:50.216 fused_ordering(670) 00:14:50.216 fused_ordering(671) 00:14:50.216 fused_ordering(672) 00:14:50.216 fused_ordering(673) 00:14:50.216 fused_ordering(674) 00:14:50.216 fused_ordering(675) 00:14:50.216 fused_ordering(676) 00:14:50.216 fused_ordering(677) 00:14:50.216 fused_ordering(678) 00:14:50.216 fused_ordering(679) 00:14:50.216 fused_ordering(680) 00:14:50.216 fused_ordering(681) 00:14:50.216 fused_ordering(682) 00:14:50.216 fused_ordering(683) 00:14:50.216 fused_ordering(684) 00:14:50.216 fused_ordering(685) 00:14:50.216 fused_ordering(686) 00:14:50.216 fused_ordering(687) 00:14:50.216 fused_ordering(688) 00:14:50.216 fused_ordering(689) 00:14:50.216 fused_ordering(690) 
00:14:50.216 fused_ordering(691) 00:14:50.216 fused_ordering(692) 00:14:50.216 fused_ordering(693) 00:14:50.216 fused_ordering(694) 00:14:50.216 fused_ordering(695) 00:14:50.216 fused_ordering(696) 00:14:50.216 fused_ordering(697) 00:14:50.216 fused_ordering(698) 00:14:50.216 fused_ordering(699) 00:14:50.216 fused_ordering(700) 00:14:50.216 fused_ordering(701) 00:14:50.216 fused_ordering(702) 00:14:50.216 fused_ordering(703) 00:14:50.216 fused_ordering(704) 00:14:50.216 fused_ordering(705) 00:14:50.216 fused_ordering(706) 00:14:50.216 fused_ordering(707) 00:14:50.216 fused_ordering(708) 00:14:50.216 fused_ordering(709) 00:14:50.216 fused_ordering(710) 00:14:50.216 fused_ordering(711) 00:14:50.216 fused_ordering(712) 00:14:50.216 fused_ordering(713) 00:14:50.216 fused_ordering(714) 00:14:50.216 fused_ordering(715) 00:14:50.216 fused_ordering(716) 00:14:50.216 fused_ordering(717) 00:14:50.216 fused_ordering(718) 00:14:50.216 fused_ordering(719) 00:14:50.216 fused_ordering(720) 00:14:50.216 fused_ordering(721) 00:14:50.216 fused_ordering(722) 00:14:50.216 fused_ordering(723) 00:14:50.216 fused_ordering(724) 00:14:50.216 fused_ordering(725) 00:14:50.216 fused_ordering(726) 00:14:50.216 fused_ordering(727) 00:14:50.216 fused_ordering(728) 00:14:50.216 fused_ordering(729) 00:14:50.216 fused_ordering(730) 00:14:50.216 fused_ordering(731) 00:14:50.216 fused_ordering(732) 00:14:50.216 fused_ordering(733) 00:14:50.216 fused_ordering(734) 00:14:50.216 fused_ordering(735) 00:14:50.216 fused_ordering(736) 00:14:50.216 fused_ordering(737) 00:14:50.216 fused_ordering(738) 00:14:50.216 fused_ordering(739) 00:14:50.216 fused_ordering(740) 00:14:50.216 fused_ordering(741) 00:14:50.216 fused_ordering(742) 00:14:50.216 fused_ordering(743) 00:14:50.216 fused_ordering(744) 00:14:50.216 fused_ordering(745) 00:14:50.217 fused_ordering(746) 00:14:50.217 fused_ordering(747) 00:14:50.217 fused_ordering(748) 00:14:50.217 fused_ordering(749) 00:14:50.217 fused_ordering(750) 00:14:50.217 fused_ordering(751) 00:14:50.217 fused_ordering(752) 00:14:50.217 fused_ordering(753) 00:14:50.217 fused_ordering(754) 00:14:50.217 fused_ordering(755) 00:14:50.217 fused_ordering(756) 00:14:50.217 fused_ordering(757) 00:14:50.217 fused_ordering(758) 00:14:50.217 fused_ordering(759) 00:14:50.217 fused_ordering(760) 00:14:50.217 fused_ordering(761) 00:14:50.217 fused_ordering(762) 00:14:50.217 fused_ordering(763) 00:14:50.217 fused_ordering(764) 00:14:50.217 fused_ordering(765) 00:14:50.217 fused_ordering(766) 00:14:50.217 fused_ordering(767) 00:14:50.217 fused_ordering(768) 00:14:50.217 fused_ordering(769) 00:14:50.217 fused_ordering(770) 00:14:50.217 fused_ordering(771) 00:14:50.217 fused_ordering(772) 00:14:50.217 fused_ordering(773) 00:14:50.217 fused_ordering(774) 00:14:50.217 fused_ordering(775) 00:14:50.217 fused_ordering(776) 00:14:50.217 fused_ordering(777) 00:14:50.217 fused_ordering(778) 00:14:50.217 fused_ordering(779) 00:14:50.217 fused_ordering(780) 00:14:50.217 fused_ordering(781) 00:14:50.217 fused_ordering(782) 00:14:50.217 fused_ordering(783) 00:14:50.217 fused_ordering(784) 00:14:50.217 fused_ordering(785) 00:14:50.217 fused_ordering(786) 00:14:50.217 fused_ordering(787) 00:14:50.217 fused_ordering(788) 00:14:50.217 fused_ordering(789) 00:14:50.217 fused_ordering(790) 00:14:50.217 fused_ordering(791) 00:14:50.217 fused_ordering(792) 00:14:50.217 fused_ordering(793) 00:14:50.217 fused_ordering(794) 00:14:50.217 fused_ordering(795) 00:14:50.217 fused_ordering(796) 00:14:50.217 fused_ordering(797) 00:14:50.217 
fused_ordering(798) 00:14:50.217 fused_ordering(799) 00:14:50.217 fused_ordering(800) 00:14:50.217 fused_ordering(801) 00:14:50.217 fused_ordering(802) 00:14:50.217 fused_ordering(803) 00:14:50.217 fused_ordering(804) 00:14:50.217 fused_ordering(805) 00:14:50.217 fused_ordering(806) 00:14:50.217 fused_ordering(807) 00:14:50.217 fused_ordering(808) 00:14:50.217 fused_ordering(809) 00:14:50.217 fused_ordering(810) 00:14:50.217 fused_ordering(811) 00:14:50.217 fused_ordering(812) 00:14:50.217 fused_ordering(813) 00:14:50.217 fused_ordering(814) 00:14:50.217 fused_ordering(815) 00:14:50.217 fused_ordering(816) 00:14:50.217 fused_ordering(817) 00:14:50.217 fused_ordering(818) 00:14:50.217 fused_ordering(819) 00:14:50.217 fused_ordering(820) 00:14:50.784 fused_ordering(821) 00:14:50.784 fused_ordering(822) 00:14:50.784 fused_ordering(823) 00:14:50.784 fused_ordering(824) 00:14:50.784 fused_ordering(825) 00:14:50.784 fused_ordering(826) 00:14:50.784 fused_ordering(827) 00:14:50.784 fused_ordering(828) 00:14:50.784 fused_ordering(829) 00:14:50.784 fused_ordering(830) 00:14:50.784 fused_ordering(831) 00:14:50.784 fused_ordering(832) 00:14:50.784 fused_ordering(833) 00:14:50.784 fused_ordering(834) 00:14:50.784 fused_ordering(835) 00:14:50.784 fused_ordering(836) 00:14:50.784 fused_ordering(837) 00:14:50.784 fused_ordering(838) 00:14:50.784 fused_ordering(839) 00:14:50.784 fused_ordering(840) 00:14:50.784 fused_ordering(841) 00:14:50.784 fused_ordering(842) 00:14:50.784 fused_ordering(843) 00:14:50.784 fused_ordering(844) 00:14:50.784 fused_ordering(845) 00:14:50.784 fused_ordering(846) 00:14:50.784 fused_ordering(847) 00:14:50.784 fused_ordering(848) 00:14:50.784 fused_ordering(849) 00:14:50.784 fused_ordering(850) 00:14:50.784 fused_ordering(851) 00:14:50.784 fused_ordering(852) 00:14:50.784 fused_ordering(853) 00:14:50.784 fused_ordering(854) 00:14:50.784 fused_ordering(855) 00:14:50.784 fused_ordering(856) 00:14:50.784 fused_ordering(857) 00:14:50.784 fused_ordering(858) 00:14:50.784 fused_ordering(859) 00:14:50.784 fused_ordering(860) 00:14:50.784 fused_ordering(861) 00:14:50.784 fused_ordering(862) 00:14:50.784 fused_ordering(863) 00:14:50.784 fused_ordering(864) 00:14:50.784 fused_ordering(865) 00:14:50.784 fused_ordering(866) 00:14:50.784 fused_ordering(867) 00:14:50.784 fused_ordering(868) 00:14:50.784 fused_ordering(869) 00:14:50.784 fused_ordering(870) 00:14:50.784 fused_ordering(871) 00:14:50.784 fused_ordering(872) 00:14:50.784 fused_ordering(873) 00:14:50.784 fused_ordering(874) 00:14:50.784 fused_ordering(875) 00:14:50.784 fused_ordering(876) 00:14:50.784 fused_ordering(877) 00:14:50.784 fused_ordering(878) 00:14:50.784 fused_ordering(879) 00:14:50.784 fused_ordering(880) 00:14:50.784 fused_ordering(881) 00:14:50.784 fused_ordering(882) 00:14:50.784 fused_ordering(883) 00:14:50.784 fused_ordering(884) 00:14:50.784 fused_ordering(885) 00:14:50.784 fused_ordering(886) 00:14:50.784 fused_ordering(887) 00:14:50.784 fused_ordering(888) 00:14:50.784 fused_ordering(889) 00:14:50.784 fused_ordering(890) 00:14:50.784 fused_ordering(891) 00:14:50.784 fused_ordering(892) 00:14:50.784 fused_ordering(893) 00:14:50.784 fused_ordering(894) 00:14:50.784 fused_ordering(895) 00:14:50.784 fused_ordering(896) 00:14:50.784 fused_ordering(897) 00:14:50.784 fused_ordering(898) 00:14:50.784 fused_ordering(899) 00:14:50.784 fused_ordering(900) 00:14:50.784 fused_ordering(901) 00:14:50.784 fused_ordering(902) 00:14:50.784 fused_ordering(903) 00:14:50.784 fused_ordering(904) 00:14:50.784 fused_ordering(905) 
00:14:50.784 fused_ordering(906) 00:14:50.784 fused_ordering(907) 00:14:50.784 fused_ordering(908) 00:14:50.784 fused_ordering(909) 00:14:50.784 fused_ordering(910) 00:14:50.784 fused_ordering(911) 00:14:50.784 fused_ordering(912) 00:14:50.784 fused_ordering(913) 00:14:50.784 fused_ordering(914) 00:14:50.784 fused_ordering(915) 00:14:50.784 fused_ordering(916) 00:14:50.784 fused_ordering(917) 00:14:50.784 fused_ordering(918) 00:14:50.784 fused_ordering(919) 00:14:50.784 fused_ordering(920) 00:14:50.784 fused_ordering(921) 00:14:50.784 fused_ordering(922) 00:14:50.784 fused_ordering(923) 00:14:50.784 fused_ordering(924) 00:14:50.784 fused_ordering(925) 00:14:50.784 fused_ordering(926) 00:14:50.784 fused_ordering(927) 00:14:50.784 fused_ordering(928) 00:14:50.784 fused_ordering(929) 00:14:50.784 fused_ordering(930) 00:14:50.784 fused_ordering(931) 00:14:50.784 fused_ordering(932) 00:14:50.784 fused_ordering(933) 00:14:50.784 fused_ordering(934) 00:14:50.784 fused_ordering(935) 00:14:50.784 fused_ordering(936) 00:14:50.784 fused_ordering(937) 00:14:50.784 fused_ordering(938) 00:14:50.784 fused_ordering(939) 00:14:50.784 fused_ordering(940) 00:14:50.784 fused_ordering(941) 00:14:50.784 fused_ordering(942) 00:14:50.784 fused_ordering(943) 00:14:50.784 fused_ordering(944) 00:14:50.784 fused_ordering(945) 00:14:50.784 fused_ordering(946) 00:14:50.784 fused_ordering(947) 00:14:50.784 fused_ordering(948) 00:14:50.784 fused_ordering(949) 00:14:50.784 fused_ordering(950) 00:14:50.784 fused_ordering(951) 00:14:50.784 fused_ordering(952) 00:14:50.784 fused_ordering(953) 00:14:50.784 fused_ordering(954) 00:14:50.784 fused_ordering(955) 00:14:50.784 fused_ordering(956) 00:14:50.784 fused_ordering(957) 00:14:50.784 fused_ordering(958) 00:14:50.784 fused_ordering(959) 00:14:50.784 fused_ordering(960) 00:14:50.784 fused_ordering(961) 00:14:50.784 fused_ordering(962) 00:14:50.784 fused_ordering(963) 00:14:50.784 fused_ordering(964) 00:14:50.784 fused_ordering(965) 00:14:50.784 fused_ordering(966) 00:14:50.784 fused_ordering(967) 00:14:50.784 fused_ordering(968) 00:14:50.784 fused_ordering(969) 00:14:50.784 fused_ordering(970) 00:14:50.784 fused_ordering(971) 00:14:50.784 fused_ordering(972) 00:14:50.784 fused_ordering(973) 00:14:50.784 fused_ordering(974) 00:14:50.784 fused_ordering(975) 00:14:50.784 fused_ordering(976) 00:14:50.784 fused_ordering(977) 00:14:50.784 fused_ordering(978) 00:14:50.784 fused_ordering(979) 00:14:50.784 fused_ordering(980) 00:14:50.784 fused_ordering(981) 00:14:50.784 fused_ordering(982) 00:14:50.784 fused_ordering(983) 00:14:50.784 fused_ordering(984) 00:14:50.784 fused_ordering(985) 00:14:50.784 fused_ordering(986) 00:14:50.784 fused_ordering(987) 00:14:50.784 fused_ordering(988) 00:14:50.784 fused_ordering(989) 00:14:50.784 fused_ordering(990) 00:14:50.784 fused_ordering(991) 00:14:50.784 fused_ordering(992) 00:14:50.784 fused_ordering(993) 00:14:50.784 fused_ordering(994) 00:14:50.784 fused_ordering(995) 00:14:50.784 fused_ordering(996) 00:14:50.784 fused_ordering(997) 00:14:50.784 fused_ordering(998) 00:14:50.784 fused_ordering(999) 00:14:50.784 fused_ordering(1000) 00:14:50.784 fused_ordering(1001) 00:14:50.784 fused_ordering(1002) 00:14:50.784 fused_ordering(1003) 00:14:50.784 fused_ordering(1004) 00:14:50.784 fused_ordering(1005) 00:14:50.784 fused_ordering(1006) 00:14:50.784 fused_ordering(1007) 00:14:50.784 fused_ordering(1008) 00:14:50.784 fused_ordering(1009) 00:14:50.784 fused_ordering(1010) 00:14:50.784 fused_ordering(1011) 00:14:50.784 fused_ordering(1012) 
00:14:50.784 fused_ordering(1013) 00:14:50.784 fused_ordering(1014) 00:14:50.784 fused_ordering(1015) 00:14:50.785 fused_ordering(1016) 00:14:50.785 fused_ordering(1017) 00:14:50.785 fused_ordering(1018) 00:14:50.785 fused_ordering(1019) 00:14:50.785 fused_ordering(1020) 00:14:50.785 fused_ordering(1021) 00:14:50.785 fused_ordering(1022) 00:14:50.785 fused_ordering(1023) 00:14:50.785 10:23:35 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:50.785 10:23:35 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:50.785 10:23:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:50.785 10:23:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:14:50.785 10:23:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:50.785 10:23:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:14:50.785 10:23:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:50.785 10:23:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:50.785 rmmod nvme_tcp 00:14:50.785 rmmod nvme_fabrics 00:14:50.785 rmmod nvme_keyring 00:14:50.785 10:23:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:50.785 10:23:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:14:50.785 10:23:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:14:50.785 10:23:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 2329174 ']' 00:14:50.785 10:23:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 2329174 00:14:50.785 10:23:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 2329174 ']' 00:14:50.785 10:23:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 2329174 00:14:50.785 10:23:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:14:50.785 10:23:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:50.785 10:23:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2329174 00:14:50.785 10:23:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:50.785 10:23:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:50.785 10:23:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2329174' 00:14:50.785 killing process with pid 2329174 00:14:50.785 10:23:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 2329174 00:14:50.785 10:23:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 2329174 00:14:51.043 10:23:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:51.043 10:23:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:51.043 10:23:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:51.043 10:23:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:51.043 10:23:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:51.043 10:23:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:51.043 10:23:35 nvmf_tcp.nvmf_fused_ordering -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:51.043 10:23:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:52.949 10:23:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:52.949 00:14:52.949 real 0m10.339s 00:14:52.949 user 0m4.681s 00:14:52.949 sys 0m5.736s 00:14:52.949 10:23:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:52.949 10:23:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:52.949 ************************************ 00:14:52.949 END TEST nvmf_fused_ordering 00:14:52.949 ************************************ 00:14:52.949 10:23:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:52.949 10:23:37 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:52.949 10:23:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:52.949 10:23:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:52.949 10:23:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:52.949 ************************************ 00:14:52.949 START TEST nvmf_delete_subsystem 00:14:52.949 ************************************ 00:14:52.949 10:23:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:53.209 * Looking for test storage... 00:14:53.209 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:53.209 10:23:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:53.209 10:23:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:14:53.209 10:23:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:53.209 10:23:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:53.209 10:23:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:53.209 10:23:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:53.209 10:23:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:53.209 10:23:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:53.209 10:23:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:53.209 10:23:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:53.209 10:23:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:53.209 10:23:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:53.209 10:23:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:53.209 10:23:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:53.209 10:23:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:53.209 10:23:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:53.209 10:23:38 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:53.209 10:23:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:53.209 10:23:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:53.209 10:23:38 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:53.209 10:23:38 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:53.209 10:23:38 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:53.209 10:23:38 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.209 10:23:38 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.209 10:23:38 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.209 10:23:38 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:14:53.209 10:23:38 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.209 10:23:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:14:53.209 10:23:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:53.209 10:23:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:53.209 10:23:38 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:53.209 10:23:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:53.209 10:23:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:53.209 10:23:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:53.209 10:23:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:53.209 10:23:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:53.209 10:23:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:14:53.209 10:23:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:53.209 10:23:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:53.209 10:23:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:53.209 10:23:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:53.209 10:23:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:53.209 10:23:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:53.209 10:23:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:53.209 10:23:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:53.209 10:23:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:53.209 10:23:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:53.209 10:23:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:14:53.209 10:23:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:59.782 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:59.782 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:14:59.782 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:59.782 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:59.782 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:59.782 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:59.782 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:59.782 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:14:59.782 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:59.782 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:14:59.782 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:14:59.782 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:59.783 10:23:43 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:59.783 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:59.783 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:59.783 10:23:43 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:59.783 Found net devices under 0000:86:00.0: cvl_0_0 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:59.783 Found net devices under 0000:86:00.1: cvl_0_1 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:59.783 10:23:43 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:59.783 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:59.783 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:14:59.783 00:14:59.783 --- 10.0.0.2 ping statistics --- 00:14:59.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:59.783 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:59.783 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:59.783 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:14:59.783 00:14:59.783 --- 10.0.0.1 ping statistics --- 00:14:59.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:59.783 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=2332947 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 2332947 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 2332947 ']' 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:59.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:59.783 10:23:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:59.783 [2024-07-14 10:23:43.885062] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:14:59.783 [2024-07-14 10:23:43.885108] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:59.783 EAL: No free 2048 kB hugepages reported on node 1 00:14:59.783 [2024-07-14 10:23:43.956240] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:59.784 [2024-07-14 10:23:43.997375] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:59.784 [2024-07-14 10:23:43.997413] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:59.784 [2024-07-14 10:23:43.997421] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:59.784 [2024-07-14 10:23:43.997427] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:59.784 [2024-07-14 10:23:43.997432] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:59.784 [2024-07-14 10:23:43.997482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:59.784 [2024-07-14 10:23:43.997482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:59.784 10:23:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:59.784 10:23:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:14:59.784 10:23:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:59.784 10:23:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:59.784 10:23:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:59.784 10:23:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:59.784 10:23:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:59.784 10:23:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.784 10:23:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:59.784 [2024-07-14 10:23:44.728506] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:59.784 10:23:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.784 10:23:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:59.784 10:23:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.784 10:23:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:59.784 10:23:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.784 10:23:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:59.784 10:23:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.784 10:23:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:59.784 [2024-07-14 10:23:44.748657] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:59.784 10:23:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.784 10:23:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:59.784 10:23:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.784 10:23:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:59.784 NULL1 00:14:59.784 10:23:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:14:59.784 10:23:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:59.784 10:23:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.784 10:23:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:00.042 Delay0 00:15:00.042 10:23:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:00.042 10:23:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:00.042 10:23:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:00.042 10:23:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:00.042 10:23:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:00.042 10:23:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2333194 00:15:00.042 10:23:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:15:00.042 10:23:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:15:00.042 EAL: No free 2048 kB hugepages reported on node 1 00:15:00.042 [2024-07-14 10:23:44.839343] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
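For reference, the target-side state that the spdk_nvme_perf run above connects to is built entirely by the rpc_cmd calls traced before it. A minimal stand-alone sketch of that sequence, calling scripts/rpc.py directly instead of going through the rpc_cmd test wrapper and assuming nvmf_tgt is already up and listening on /var/tmp/spdk.sock inside the cvl_0_0_ns_spdk namespace, would look like this (the arguments are the ones traced above; the RPC shorthand and the direct rpc.py invocation are the only substitutions):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10   # allow any host, serial SPDK00000000000001, up to 10 namespaces
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC bdev_null_create NULL1 1000 512                                                   # null backing bdev, 1000 MB, 512-byte blocks
  $RPC bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000  # wrap it with artificial latency (values in microseconds), presumably so I/O is still in flight when the subsystem is deleted
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

The initiator side is then exercised from the default namespace with the perf invocation traced at delete_subsystem.sh@26:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4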
00:15:01.940 10:23:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:01.940 10:23:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.940 10:23:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:02.199 Read completed with error (sct=0, sc=8) 00:15:02.199 Read completed with error (sct=0, sc=8) 00:15:02.199 Read completed with error (sct=0, sc=8) 00:15:02.199 Read completed with error (sct=0, sc=8) 00:15:02.199 starting I/O failed: -6 00:15:02.199 Read completed with error (sct=0, sc=8) 00:15:02.199 Read completed with error (sct=0, sc=8) 00:15:02.199 Read completed with error (sct=0, sc=8) 00:15:02.199 Read completed with error (sct=0, sc=8) 00:15:02.199 starting I/O failed: -6 00:15:02.199 Write completed with error (sct=0, sc=8) 00:15:02.199 Read completed with error (sct=0, sc=8) 00:15:02.199 Read completed with error (sct=0, sc=8) 00:15:02.199 Write completed with error (sct=0, sc=8) 00:15:02.199 starting I/O failed: -6 00:15:02.199 Write completed with error (sct=0, sc=8) 00:15:02.199 Read completed with error (sct=0, sc=8) 00:15:02.199 Read completed with error (sct=0, sc=8) 00:15:02.199 Read completed with error (sct=0, sc=8) 00:15:02.199 starting I/O failed: -6 00:15:02.199 Read completed with error (sct=0, sc=8) 00:15:02.199 Write completed with error (sct=0, sc=8) 00:15:02.199 Read completed with error (sct=0, sc=8) 00:15:02.199 Read completed with error (sct=0, sc=8) 00:15:02.199 starting I/O failed: -6 00:15:02.199 Write completed with error (sct=0, sc=8) 00:15:02.199 Read completed with error (sct=0, sc=8) 00:15:02.199 Read completed with error (sct=0, sc=8) 00:15:02.199 Read completed with error (sct=0, sc=8) 00:15:02.199 starting I/O failed: -6 00:15:02.199 Write completed with error (sct=0, sc=8) 00:15:02.199 Read completed with error (sct=0, sc=8) 00:15:02.199 Write completed with error (sct=0, sc=8) 00:15:02.199 Read completed with error (sct=0, sc=8) 00:15:02.199 starting I/O failed: -6 00:15:02.199 Read completed with error (sct=0, sc=8) 00:15:02.199 Read completed with error (sct=0, sc=8) 00:15:02.199 Read completed with error (sct=0, sc=8) 00:15:02.199 Read completed with error (sct=0, sc=8) 00:15:02.199 starting I/O failed: -6 00:15:02.199 Write completed with error (sct=0, sc=8) 00:15:02.199 Read completed with error (sct=0, sc=8) 00:15:02.199 Read completed with error (sct=0, sc=8) 00:15:02.199 Write completed with error (sct=0, sc=8) 00:15:02.199 starting I/O failed: -6 00:15:02.199 Read completed with error (sct=0, sc=8) 00:15:02.199 Write completed with error (sct=0, sc=8) 00:15:02.199 Write completed with error (sct=0, sc=8) 00:15:02.199 Write completed with error (sct=0, sc=8) 00:15:02.199 starting I/O failed: -6 00:15:02.199 [2024-07-14 10:23:47.085867] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf515f0 is same with the state(5) to be set 00:15:02.199 Read completed with error (sct=0, sc=8) 00:15:02.199 Write completed with error (sct=0, sc=8) 00:15:02.199 Write completed with error (sct=0, sc=8) 00:15:02.199 Read completed with error (sct=0, sc=8) 00:15:02.199 Read completed with error (sct=0, sc=8) 00:15:02.199 Read completed with error (sct=0, sc=8) 00:15:02.199 Write completed with error (sct=0, sc=8) 00:15:02.199 Read completed with error (sct=0, sc=8) 00:15:02.199 Read completed with error (sct=0, sc=8) 00:15:02.199 Read completed with error (sct=0, sc=8) 
00:15:02.200 Write completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Write completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Write completed with error (sct=0, sc=8) 00:15:02.200 Write completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Write completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Write completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Write completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Write completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Write completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Write completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Write completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Write completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Write completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Write completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Write completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Write completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Write completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Write completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Write completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Write completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Write completed with 
error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Write completed with error (sct=0, sc=8) 00:15:02.200 Write completed with error (sct=0, sc=8) 00:15:02.200 Write completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Write completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Write completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Write completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Write completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Write completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 [2024-07-14 10:23:47.086646] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf51050 is same with the state(5) to be set 00:15:02.200 Write completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Write completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 starting I/O failed: -6 00:15:02.200 Write completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Write completed with error (sct=0, sc=8) 00:15:02.200 starting I/O failed: -6 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Write completed with error (sct=0, sc=8) 00:15:02.200 Write completed with error (sct=0, sc=8) 00:15:02.200 starting I/O failed: -6 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Write completed with error (sct=0, sc=8) 00:15:02.200 Write completed with error (sct=0, sc=8) 00:15:02.200 starting I/O failed: -6 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 starting I/O failed: -6 00:15:02.200 Write completed with error (sct=0, sc=8) 00:15:02.200 Write completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 starting I/O failed: -6 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 starting I/O failed: -6 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Write completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 starting I/O failed: -6 00:15:02.200 Write completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Write completed 
with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 starting I/O failed: -6 00:15:02.200 Write completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 starting I/O failed: -6 00:15:02.200 Write completed with error (sct=0, sc=8) 00:15:02.200 Read completed with error (sct=0, sc=8) 00:15:02.200 Write completed with error (sct=0, sc=8) 00:15:02.200 Write completed with error (sct=0, sc=8) 00:15:02.200 starting I/O failed: -6 00:15:02.200 [2024-07-14 10:23:47.087541] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fcba800d450 is same with the state(5) to be set 00:15:03.137 [2024-07-14 10:23:48.058102] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf4f330 is same with the state(5) to be set 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Write completed with error (sct=0, sc=8) 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Write completed with error (sct=0, sc=8) 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Write completed with error (sct=0, sc=8) 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 [2024-07-14 10:23:48.087506] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fcba8000c00 is same with the state(5) to be set 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Write completed with error (sct=0, sc=8) 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Write completed with error (sct=0, sc=8) 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Read completed 
with error (sct=0, sc=8) 00:15:03.137 [2024-07-14 10:23:48.089001] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf51230 is same with the state(5) to be set 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Write completed with error (sct=0, sc=8) 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Write completed with error (sct=0, sc=8) 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Write completed with error (sct=0, sc=8) 00:15:03.137 Write completed with error (sct=0, sc=8) 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Write completed with error (sct=0, sc=8) 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Write completed with error (sct=0, sc=8) 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 [2024-07-14 10:23:48.089965] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fcba800cfe0 is same with the state(5) to be set 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Write completed with error (sct=0, sc=8) 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Write completed with error (sct=0, sc=8) 00:15:03.137 Write completed with error (sct=0, sc=8) 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Write completed with error (sct=0, sc=8) 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Write completed with error (sct=0, sc=8) 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Write completed with error (sct=0, sc=8) 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Read completed with error (sct=0, sc=8) 00:15:03.137 Write completed with error (sct=0, sc=8) 00:15:03.137 Write completed with error (sct=0, sc=8) 00:15:03.137 Write completed with error (sct=0, sc=8) 00:15:03.137 [2024-07-14 10:23:48.090122] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fcba800d760 is same with the state(5) to be set 00:15:03.137 Initializing NVMe 
Controllers 00:15:03.137 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:03.137 Controller IO queue size 128, less than required. 00:15:03.137 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:03.137 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:15:03.137 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:15:03.137 Initialization complete. Launching workers. 00:15:03.137 ======================================================== 00:15:03.137 Latency(us) 00:15:03.137 Device Information : IOPS MiB/s Average min max 00:15:03.137 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 158.20 0.08 865760.55 286.04 1008758.16 00:15:03.138 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 167.15 0.08 1015871.79 486.53 2001085.46 00:15:03.138 ======================================================== 00:15:03.138 Total : 325.35 0.16 942881.92 286.04 2001085.46 00:15:03.138 00:15:03.138 [2024-07-14 10:23:48.090593] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf4f330 (9): Bad file descriptor 00:15:03.138 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:15:03.138 10:23:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.138 10:23:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:15:03.138 10:23:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2333194 00:15:03.138 10:23:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:15:03.703 10:23:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:15:03.703 10:23:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2333194 00:15:03.703 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2333194) - No such process 00:15:03.703 10:23:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2333194 00:15:03.703 10:23:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:15:03.704 10:23:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 2333194 00:15:03.704 10:23:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:15:03.704 10:23:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:03.704 10:23:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:15:03.704 10:23:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:03.704 10:23:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 2333194 00:15:03.704 10:23:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:15:03.704 10:23:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:03.704 10:23:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:03.704 10:23:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:03.704 10:23:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:03.704 10:23:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.704 10:23:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:03.704 10:23:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.704 10:23:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:03.704 10:23:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.704 10:23:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:03.704 [2024-07-14 10:23:48.616639] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:03.704 10:23:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.704 10:23:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:03.704 10:23:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.704 10:23:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:03.704 10:23:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.704 10:23:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2333874 00:15:03.704 10:23:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:15:03.704 10:23:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:15:03.704 10:23:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2333874 00:15:03.704 10:23:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:03.704 EAL: No free 2048 kB hugepages reported on node 1 00:15:03.962 [2024-07-14 10:23:48.692238] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
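The repeated kill -0 2333874 / sleep 0.5 / (( delay++ > 20 )) entries that follow come from the wait loop in delete_subsystem.sh (script lines 56 to 60 in the trace). Reconstructed from those traced line numbers only, as a rough sketch rather than a verbatim copy of the script, the pattern is:

  perf_pid=$!                          # background spdk_nvme_perf started at delete_subsystem.sh@52 (2333874 in this run)
  delay=0
  while kill -0 "$perf_pid"; do        # poll while perf is still alive; the final failing kill -0 prints the "No such process" line seen further below
      sleep 0.5
      if (( delay++ > 20 )); then
          exit 1                       # assumed failure path: the perf run only lasts 3 seconds, so it should exit well before this
      fi
  done

The same pattern, with a threshold of 30, was already visible at delete_subsystem.sh@34 through @38 for the first, longer perf run.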
00:15:04.222 10:23:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:04.222 10:23:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2333874 00:15:04.222 10:23:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:04.825 10:23:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:04.825 10:23:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2333874 00:15:04.825 10:23:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:05.393 10:23:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:05.393 10:23:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2333874 00:15:05.393 10:23:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:05.959 10:23:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:05.959 10:23:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2333874 00:15:05.959 10:23:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:06.219 10:23:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:06.219 10:23:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2333874 00:15:06.219 10:23:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:06.786 10:23:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:06.786 10:23:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2333874 00:15:06.786 10:23:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:07.044 Initializing NVMe Controllers 00:15:07.044 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:07.044 Controller IO queue size 128, less than required. 00:15:07.044 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:07.044 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:15:07.044 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:15:07.044 Initialization complete. Launching workers. 
00:15:07.044 ======================================================== 00:15:07.044 Latency(us) 00:15:07.044 Device Information : IOPS MiB/s Average min max 00:15:07.044 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002268.37 1000156.75 1007167.00 00:15:07.044 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003604.85 1000173.35 1009997.13 00:15:07.044 ======================================================== 00:15:07.044 Total : 256.00 0.12 1002936.61 1000156.75 1009997.13 00:15:07.044 00:15:07.303 10:23:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:07.303 10:23:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2333874 00:15:07.303 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2333874) - No such process 00:15:07.303 10:23:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2333874 00:15:07.303 10:23:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:15:07.303 10:23:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:15:07.303 10:23:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:07.303 10:23:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:15:07.303 10:23:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:07.303 10:23:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:15:07.303 10:23:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:07.303 10:23:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:07.303 rmmod nvme_tcp 00:15:07.303 rmmod nvme_fabrics 00:15:07.303 rmmod nvme_keyring 00:15:07.303 10:23:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:07.303 10:23:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:15:07.303 10:23:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:15:07.303 10:23:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 2332947 ']' 00:15:07.303 10:23:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 2332947 00:15:07.303 10:23:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 2332947 ']' 00:15:07.303 10:23:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 2332947 00:15:07.303 10:23:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:15:07.303 10:23:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:07.303 10:23:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2332947 00:15:07.303 10:23:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:07.303 10:23:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:07.303 10:23:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2332947' 00:15:07.303 killing process with pid 2332947 00:15:07.303 10:23:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 2332947 00:15:07.303 10:23:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 
2332947 00:15:07.561 10:23:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:07.561 10:23:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:07.561 10:23:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:07.561 10:23:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:07.561 10:23:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:07.561 10:23:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:07.561 10:23:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:07.561 10:23:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:10.093 10:23:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:10.093 00:15:10.093 real 0m16.588s 00:15:10.093 user 0m30.718s 00:15:10.093 sys 0m5.282s 00:15:10.093 10:23:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:10.093 10:23:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:10.093 ************************************ 00:15:10.093 END TEST nvmf_delete_subsystem 00:15:10.093 ************************************ 00:15:10.093 10:23:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:10.093 10:23:54 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:15:10.093 10:23:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:10.093 10:23:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:10.093 10:23:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:10.093 ************************************ 00:15:10.093 START TEST nvmf_ns_masking 00:15:10.093 ************************************ 00:15:10.093 10:23:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:15:10.093 * Looking for test storage... 
00:15:10.093 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:10.093 10:23:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:10.093 10:23:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:15:10.093 10:23:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:10.093 10:23:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:10.093 10:23:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:10.093 10:23:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:10.093 10:23:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:10.093 10:23:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:10.093 10:23:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:10.093 10:23:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:10.093 10:23:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:10.093 10:23:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:10.093 10:23:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:10.093 10:23:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:10.093 10:23:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:10.093 10:23:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:10.093 10:23:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:10.093 10:23:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:10.093 10:23:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:10.093 10:23:54 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:10.093 10:23:54 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:10.093 10:23:54 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:10.093 10:23:54 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.093 10:23:54 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.093 10:23:54 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.093 10:23:54 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:15:10.093 10:23:54 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.093 10:23:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:15:10.093 10:23:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:10.093 10:23:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:10.093 10:23:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:10.093 10:23:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:10.093 10:23:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:10.093 10:23:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:10.093 10:23:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:10.093 10:23:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:10.093 10:23:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:10.093 10:23:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:15:10.093 10:23:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:15:10.093 10:23:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:15:10.093 10:23:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=c9e12bfa-4019-4002-9563-ef60af08c96e 00:15:10.093 10:23:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:15:10.093 10:23:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=fbfc4241-cbcb-4565-8288-bbc471839efd 00:15:10.093 10:23:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # 
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:15:10.093 10:23:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:15:10.094 10:23:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:15:10.094 10:23:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:15:10.094 10:23:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=061dda91-40d9-40e3-a1a0-1651a80c0b17 00:15:10.094 10:23:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:15:10.094 10:23:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:10.094 10:23:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:10.094 10:23:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:10.094 10:23:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:10.094 10:23:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:10.094 10:23:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:10.094 10:23:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:10.094 10:23:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:10.094 10:23:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:10.094 10:23:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:10.094 10:23:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:15:10.094 10:23:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:15.369 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:15.369 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:15:15.369 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:15.369 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:15.369 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:15.369 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:15.369 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:15.369 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:15:15.369 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:15.369 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:15:15.369 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:15:15.369 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:15:15.369 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:15:15.369 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:15:15.369 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:15:15.369 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:15.369 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:15.369 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:15.370 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:15.370 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:15.370 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:15.370 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:15.370 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:15.370 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:15.370 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:15.370 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:15.370 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:15.370 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:15.370 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:15.370 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:15.370 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:15.370 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:15.370 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:15.370 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:15.370 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:15.370 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:15.370 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:15.370 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:15.370 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:15.370 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:15.370 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:15.370 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:15.370 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:15.370 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:15.370 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:15.370 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:15.370 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:15.370 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:15.370 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:15.370 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:15.370 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:15.370 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:15.370 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:15.370 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:15.370 
10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:15.370 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:15.370 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:15.370 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:15.370 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:15.370 Found net devices under 0000:86:00.0: cvl_0_0 00:15:15.370 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:15.370 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:15.370 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:15.370 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:15.370 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:15.370 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:15.370 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:15.370 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:15.370 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:15.370 Found net devices under 0000:86:00.1: cvl_0_1 00:15:15.370 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:15.370 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:15.370 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:15:15.370 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:15.370 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:15.370 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:15.370 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:15.370 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:15.370 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:15.370 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:15.370 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:15.370 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:15.370 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:15.370 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:15.370 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:15.370 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:15.370 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:15.370 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:15.370 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:15.370 10:24:00 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:15.370 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:15.370 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:15.630 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:15.630 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:15.630 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:15.630 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:15.630 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:15.630 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:15:15.630 00:15:15.630 --- 10.0.0.2 ping statistics --- 00:15:15.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:15.630 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:15:15.630 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:15.630 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:15.630 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:15:15.630 00:15:15.630 --- 10.0.0.1 ping statistics --- 00:15:15.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:15.630 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:15:15.630 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:15.630 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:15:15.630 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:15.630 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:15.630 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:15.630 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:15.630 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:15.630 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:15.630 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:15.630 10:24:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:15:15.630 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:15.630 10:24:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:15.630 10:24:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:15.630 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=2337905 00:15:15.630 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 2337905 00:15:15.630 10:24:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:15.630 10:24:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 2337905 ']' 00:15:15.630 10:24:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:15.630 10:24:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:15.630 10:24:00 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:15.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:15.630 10:24:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:15.630 10:24:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:15.630 [2024-07-14 10:24:00.554768] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:15:15.630 [2024-07-14 10:24:00.554812] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:15.630 EAL: No free 2048 kB hugepages reported on node 1 00:15:15.889 [2024-07-14 10:24:00.625875] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:15.889 [2024-07-14 10:24:00.665807] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:15.889 [2024-07-14 10:24:00.665846] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:15.889 [2024-07-14 10:24:00.665854] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:15.889 [2024-07-14 10:24:00.665860] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:15.889 [2024-07-14 10:24:00.665866] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:15.889 [2024-07-14 10:24:00.665885] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:16.464 10:24:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:16.464 10:24:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:15:16.464 10:24:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:16.464 10:24:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:16.464 10:24:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:16.464 10:24:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:16.464 10:24:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:16.721 [2024-07-14 10:24:01.553010] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:16.721 10:24:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:15:16.721 10:24:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:15:16.721 10:24:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:16.979 Malloc1 00:15:16.979 10:24:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:16.979 Malloc2 00:15:17.238 10:24:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 
00:15:17.238 10:24:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:15:17.496 10:24:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:17.755 [2024-07-14 10:24:02.495695] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:17.755 10:24:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:15:17.755 10:24:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 061dda91-40d9-40e3-a1a0-1651a80c0b17 -a 10.0.0.2 -s 4420 -i 4 00:15:17.755 10:24:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:15:17.755 10:24:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:15:17.755 10:24:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:17.755 10:24:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:17.756 10:24:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:15:20.291 10:24:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:20.291 10:24:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:20.291 10:24:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:20.291 10:24:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:20.291 10:24:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:20.291 10:24:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:15:20.291 10:24:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:20.291 10:24:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:20.291 10:24:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:20.291 10:24:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:20.291 10:24:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:15:20.291 10:24:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:20.291 10:24:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:20.291 [ 0]:0x1 00:15:20.291 10:24:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:20.291 10:24:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:20.291 10:24:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0e2387fe691c43449a6b2f39c60aaed3 00:15:20.291 10:24:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0e2387fe691c43449a6b2f39c60aaed3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:20.291 10:24:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 
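(The trace above is dense, so the target-side setup it performs is condensed below into a plain RPC sequence. This is a sketch only, not part of the captured output: $rpc stands in for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path, and the NQNs, sizes and addresses are the values echoed in the log.)

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # shorthand for readability

    # target side: TCP transport, two malloc bdevs, one subsystem with a listener
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc1            # 64 MB bdev, 512-byte blocks
    $rpc bdev_malloc_create 64 512 -b Malloc2
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME   # -a: allow any host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1                 # auto-visible namespace
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2                 # second namespace

    # initiator side: connect as host1 and list which NSIDs the controller exposes
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -a 10.0.0.2 -s 4420 -i 4
    nvme list-ns /dev/nvme0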
00:15:20.291 10:24:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:15:20.291 10:24:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:20.291 10:24:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:20.292 [ 0]:0x1 00:15:20.292 10:24:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:20.292 10:24:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:20.292 10:24:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0e2387fe691c43449a6b2f39c60aaed3 00:15:20.292 10:24:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0e2387fe691c43449a6b2f39c60aaed3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:20.292 10:24:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:15:20.292 10:24:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:20.292 10:24:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:20.292 [ 1]:0x2 00:15:20.292 10:24:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:20.292 10:24:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:20.292 10:24:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=982e9c2f0bb649eb917d040fc0257316 00:15:20.292 10:24:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 982e9c2f0bb649eb917d040fc0257316 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:20.292 10:24:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:15:20.292 10:24:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:20.551 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:20.551 10:24:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:20.810 10:24:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:15:21.069 10:24:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:15:21.069 10:24:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 061dda91-40d9-40e3-a1a0-1651a80c0b17 -a 10.0.0.2 -s 4420 -i 4 00:15:21.069 10:24:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:15:21.069 10:24:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:15:21.069 10:24:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:21.069 10:24:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:15:21.069 10:24:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:15:21.069 10:24:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:15:23.605 10:24:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:23.605 10:24:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:23.605 10:24:07 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:23.605 10:24:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:23.605 10:24:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:23.605 10:24:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:15:23.605 10:24:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:23.605 10:24:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:23.605 10:24:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:23.605 10:24:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:23.605 10:24:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:15:23.605 10:24:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:23.605 10:24:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:15:23.605 10:24:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:15:23.605 10:24:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:23.605 10:24:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:15:23.605 10:24:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:23.605 10:24:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:15:23.605 10:24:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:23.605 10:24:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:23.605 10:24:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:23.605 10:24:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:23.605 10:24:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:23.605 10:24:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:23.605 10:24:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:23.605 10:24:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:23.605 10:24:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:23.605 10:24:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:23.605 10:24:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:15:23.605 10:24:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:23.605 10:24:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:23.605 [ 0]:0x2 00:15:23.605 10:24:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:23.605 10:24:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:23.605 10:24:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=982e9c2f0bb649eb917d040fc0257316 00:15:23.605 10:24:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
982e9c2f0bb649eb917d040fc0257316 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:23.605 10:24:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:23.606 10:24:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:15:23.606 10:24:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:23.606 10:24:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:23.606 [ 0]:0x1 00:15:23.606 10:24:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:23.606 10:24:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:23.606 10:24:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0e2387fe691c43449a6b2f39c60aaed3 00:15:23.606 10:24:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0e2387fe691c43449a6b2f39c60aaed3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:23.606 10:24:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:15:23.606 10:24:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:23.606 10:24:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:23.606 [ 1]:0x2 00:15:23.606 10:24:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:23.606 10:24:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:23.606 10:24:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=982e9c2f0bb649eb917d040fc0257316 00:15:23.606 10:24:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 982e9c2f0bb649eb917d040fc0257316 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:23.606 10:24:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:23.865 10:24:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:15:23.865 10:24:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:23.865 10:24:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:15:23.865 10:24:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:15:23.865 10:24:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:23.865 10:24:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:15:23.865 10:24:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:23.865 10:24:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:15:23.865 10:24:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:23.865 10:24:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:23.865 10:24:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:23.865 10:24:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:23.865 10:24:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=00000000000000000000000000000000 00:15:23.865 10:24:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:23.865 10:24:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:23.865 10:24:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:23.865 10:24:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:23.865 10:24:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:23.865 10:24:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:15:23.865 10:24:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:23.865 10:24:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:23.865 [ 0]:0x2 00:15:23.865 10:24:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:23.865 10:24:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:23.865 10:24:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=982e9c2f0bb649eb917d040fc0257316 00:15:23.865 10:24:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 982e9c2f0bb649eb917d040fc0257316 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:23.865 10:24:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:15:23.865 10:24:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:23.865 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:23.865 10:24:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:24.124 10:24:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:15:24.124 10:24:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 061dda91-40d9-40e3-a1a0-1651a80c0b17 -a 10.0.0.2 -s 4420 -i 4 00:15:24.124 10:24:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:24.124 10:24:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:15:24.124 10:24:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:24.124 10:24:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:15:24.124 10:24:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:15:24.124 10:24:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:15:26.060 10:24:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:26.060 10:24:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:26.060 10:24:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:26.319 10:24:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:15:26.319 10:24:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:26.319 10:24:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 
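(The visibility check the trace keeps repeating has roughly the shape below. This is a reconstruction from the commands echoed above, not the actual ns_masking.sh source; $rpc again abbreviates the full rpc.py path.)

    # rough shape of the ns_is_visible helper seen in the trace
    ns_is_visible() {
        local nsid=$1
        nvme list-ns /dev/nvme0 | grep "$nsid"                            # is the NSID listed at all?
        nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
        [[ $nguid != 00000000000000000000000000000000 ]]                  # all-zero NGUID => namespace masked
    }

    # a namespace created with --no-auto-visible stays hidden until the host is explicitly allowed
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    $rpc nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # now visible to host1
    $rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # hidden again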
00:15:26.319 10:24:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:26.319 10:24:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:26.319 10:24:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:26.319 10:24:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:26.319 10:24:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:15:26.319 10:24:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:26.319 10:24:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:26.319 [ 0]:0x1 00:15:26.319 10:24:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:26.319 10:24:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:26.319 10:24:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0e2387fe691c43449a6b2f39c60aaed3 00:15:26.319 10:24:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0e2387fe691c43449a6b2f39c60aaed3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:26.319 10:24:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:15:26.319 10:24:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:26.319 10:24:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:26.319 [ 1]:0x2 00:15:26.319 10:24:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:26.319 10:24:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:26.319 10:24:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=982e9c2f0bb649eb917d040fc0257316 00:15:26.319 10:24:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 982e9c2f0bb649eb917d040fc0257316 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:26.319 10:24:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:26.578 10:24:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:15:26.578 10:24:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:26.578 10:24:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:15:26.578 10:24:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:15:26.578 10:24:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:26.578 10:24:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:15:26.578 10:24:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:26.578 10:24:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:15:26.578 10:24:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:26.578 10:24:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:26.578 10:24:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:26.578 10:24:11 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:15:26.578 10:24:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:26.578 10:24:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:26.578 10:24:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:26.578 10:24:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:26.578 10:24:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:26.578 10:24:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:26.578 10:24:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:15:26.578 10:24:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:26.578 10:24:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:26.578 [ 0]:0x2 00:15:26.578 10:24:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:26.578 10:24:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:26.578 10:24:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=982e9c2f0bb649eb917d040fc0257316 00:15:26.578 10:24:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 982e9c2f0bb649eb917d040fc0257316 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:26.578 10:24:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:26.578 10:24:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:26.578 10:24:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:26.578 10:24:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:26.578 10:24:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:26.578 10:24:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:26.578 10:24:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:26.578 10:24:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:26.578 10:24:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:26.578 10:24:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:26.579 10:24:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:26.579 10:24:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:26.838 [2024-07-14 10:24:11.681755] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: 
*ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:15:26.838 request: 00:15:26.838 { 00:15:26.838 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:26.838 "nsid": 2, 00:15:26.838 "host": "nqn.2016-06.io.spdk:host1", 00:15:26.838 "method": "nvmf_ns_remove_host", 00:15:26.838 "req_id": 1 00:15:26.838 } 00:15:26.838 Got JSON-RPC error response 00:15:26.838 response: 00:15:26.838 { 00:15:26.838 "code": -32602, 00:15:26.838 "message": "Invalid parameters" 00:15:26.838 } 00:15:26.838 10:24:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:26.838 10:24:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:26.838 10:24:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:26.838 10:24:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:26.838 10:24:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:15:26.838 10:24:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:26.838 10:24:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:15:26.838 10:24:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:15:26.838 10:24:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:26.838 10:24:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:15:26.838 10:24:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:26.838 10:24:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:15:26.838 10:24:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:26.838 10:24:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:26.838 10:24:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:26.838 10:24:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:26.838 10:24:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:26.838 10:24:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:26.838 10:24:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:26.838 10:24:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:26.838 10:24:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:26.838 10:24:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:26.838 10:24:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:15:26.838 10:24:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:26.838 10:24:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:27.097 [ 0]:0x2 00:15:27.097 10:24:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:27.097 10:24:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:27.097 10:24:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=982e9c2f0bb649eb917d040fc0257316 00:15:27.097 10:24:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
982e9c2f0bb649eb917d040fc0257316 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:27.097 10:24:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:15:27.097 10:24:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:27.097 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:27.097 10:24:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2340393 00:15:27.097 10:24:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:15:27.097 10:24:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2340393 /var/tmp/host.sock 00:15:27.097 10:24:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:15:27.097 10:24:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 2340393 ']' 00:15:27.097 10:24:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:15:27.097 10:24:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:27.097 10:24:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:27.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:15:27.097 10:24:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:27.097 10:24:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:27.097 [2024-07-14 10:24:12.042323] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
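(The remainder of the trace starts a second SPDK app on /var/tmp/host.sock that acts as the initiator; each host NQN should then only see the namespace it was granted. A condensed sketch of that attach/verify stage, using the same $rpc shorthand:)

    # attach as host1 and host2; the masking on the target decides which namespaces each sees
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0    # expected to expose nvme0n1
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1    # expected to expose nvme1n2
    $rpc -s /var/tmp/host.sock bdev_get_bdevs | jq -r '.[].name'               # expect: nvme0n1 nvme1n2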
00:15:27.097 [2024-07-14 10:24:12.042367] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2340393 ] 00:15:27.097 EAL: No free 2048 kB hugepages reported on node 1 00:15:27.356 [2024-07-14 10:24:12.112544] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:27.356 [2024-07-14 10:24:12.152822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:27.615 10:24:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:27.615 10:24:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:15:27.615 10:24:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:27.615 10:24:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:27.874 10:24:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid c9e12bfa-4019-4002-9563-ef60af08c96e 00:15:27.874 10:24:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:15:27.874 10:24:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g C9E12BFA401940029563EF60AF08C96E -i 00:15:28.133 10:24:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid fbfc4241-cbcb-4565-8288-bbc471839efd 00:15:28.133 10:24:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:15:28.133 10:24:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g FBFC4241CBCB45658288BBC471839EFD -i 00:15:28.133 10:24:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:28.415 10:24:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:15:28.673 10:24:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:28.673 10:24:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:28.673 nvme0n1 00:15:28.932 10:24:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:28.932 10:24:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b 
nvme1 00:15:28.932 nvme1n2 00:15:29.190 10:24:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:15:29.190 10:24:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:15:29.190 10:24:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:15:29.190 10:24:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:15:29.190 10:24:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:15:29.190 10:24:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:15:29.190 10:24:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:15:29.190 10:24:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:15:29.190 10:24:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:15:29.448 10:24:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ c9e12bfa-4019-4002-9563-ef60af08c96e == \c\9\e\1\2\b\f\a\-\4\0\1\9\-\4\0\0\2\-\9\5\6\3\-\e\f\6\0\a\f\0\8\c\9\6\e ]] 00:15:29.448 10:24:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:15:29.448 10:24:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:15:29.448 10:24:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:15:29.707 10:24:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ fbfc4241-cbcb-4565-8288-bbc471839efd == \f\b\f\c\4\2\4\1\-\c\b\c\b\-\4\5\6\5\-\8\2\8\8\-\b\b\c\4\7\1\8\3\9\e\f\d ]] 00:15:29.707 10:24:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 2340393 00:15:29.707 10:24:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 2340393 ']' 00:15:29.707 10:24:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 2340393 00:15:29.707 10:24:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:15:29.707 10:24:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:29.707 10:24:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2340393 00:15:29.707 10:24:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:29.707 10:24:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:29.707 10:24:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2340393' 00:15:29.707 killing process with pid 2340393 00:15:29.707 10:24:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 2340393 00:15:29.707 10:24:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 2340393 00:15:29.966 10:24:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:30.224 10:24:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:15:30.224 10:24:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:15:30.224 10:24:15 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:30.224 10:24:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:15:30.224 10:24:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:30.224 10:24:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:15:30.224 10:24:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:30.224 10:24:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:30.224 rmmod nvme_tcp 00:15:30.224 rmmod nvme_fabrics 00:15:30.224 rmmod nvme_keyring 00:15:30.224 10:24:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:30.224 10:24:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:15:30.224 10:24:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:15:30.224 10:24:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 2337905 ']' 00:15:30.224 10:24:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 2337905 00:15:30.225 10:24:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 2337905 ']' 00:15:30.225 10:24:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 2337905 00:15:30.225 10:24:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:15:30.225 10:24:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:30.225 10:24:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2337905 00:15:30.225 10:24:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:30.225 10:24:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:30.225 10:24:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2337905' 00:15:30.225 killing process with pid 2337905 00:15:30.225 10:24:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 2337905 00:15:30.225 10:24:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 2337905 00:15:30.484 10:24:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:30.484 10:24:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:30.484 10:24:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:30.484 10:24:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:30.484 10:24:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:30.484 10:24:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:30.484 10:24:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:30.484 10:24:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:33.023 10:24:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:33.023 00:15:33.023 real 0m22.819s 00:15:33.023 user 0m23.759s 00:15:33.023 sys 0m6.409s 00:15:33.023 10:24:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:33.023 10:24:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:33.023 ************************************ 00:15:33.023 END TEST nvmf_ns_masking 00:15:33.023 ************************************ 00:15:33.023 10:24:17 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:15:33.023 10:24:17 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:15:33.023 10:24:17 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:33.023 10:24:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:33.023 10:24:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:33.023 10:24:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:33.023 ************************************ 00:15:33.023 START TEST nvmf_nvme_cli 00:15:33.023 ************************************ 00:15:33.023 10:24:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:33.023 * Looking for test storage... 00:15:33.023 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:33.023 10:24:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:33.023 10:24:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:15:33.023 10:24:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:33.023 10:24:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:33.023 10:24:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:33.023 10:24:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:33.023 10:24:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:33.023 10:24:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:33.023 10:24:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:33.023 10:24:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:33.023 10:24:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:33.023 10:24:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:33.023 10:24:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:33.023 10:24:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:33.023 10:24:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:33.023 10:24:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:33.023 10:24:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:33.023 10:24:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:33.023 10:24:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:33.023 10:24:17 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:33.023 10:24:17 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:33.023 10:24:17 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:33.023 10:24:17 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.023 10:24:17 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.023 10:24:17 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.023 10:24:17 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:15:33.023 10:24:17 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.023 10:24:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:15:33.023 10:24:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:33.023 10:24:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:33.023 10:24:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:33.023 10:24:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:33.023 10:24:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:33.023 10:24:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:33.023 10:24:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:33.023 10:24:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:33.023 10:24:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:33.023 10:24:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:33.023 10:24:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:15:33.023 10:24:17 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:15:33.023 10:24:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:33.023 10:24:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:33.023 10:24:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:33.023 10:24:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:33.023 10:24:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:33.023 10:24:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:33.023 10:24:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:33.023 10:24:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:33.023 10:24:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:33.023 10:24:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:33.023 10:24:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:15:33.023 10:24:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:38.301 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:38.301 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:38.301 Found net devices under 0000:86:00.0: cvl_0_0 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:38.301 Found net devices under 0000:86:00.1: cvl_0_1 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:38.301 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:38.561 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:38.561 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:38.561 10:24:23 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:38.561 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:38.561 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:15:38.561 00:15:38.561 --- 10.0.0.2 ping statistics --- 00:15:38.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:38.561 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:15:38.561 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:38.561 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:38.561 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:15:38.561 00:15:38.561 --- 10.0.0.1 ping statistics --- 00:15:38.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:38.561 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:15:38.561 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:38.561 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:15:38.561 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:38.561 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:38.561 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:38.561 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:38.561 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:38.561 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:38.561 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:38.561 10:24:23 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:15:38.561 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:38.561 10:24:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:38.561 10:24:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:38.561 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=2344419 00:15:38.561 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 2344419 00:15:38.561 10:24:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:38.561 10:24:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 2344419 ']' 00:15:38.561 10:24:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:38.561 10:24:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:38.561 10:24:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:38.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:38.561 10:24:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:38.561 10:24:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:38.561 [2024-07-14 10:24:23.429523] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
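For reference, the loopback topology the trace above has just verified (target port moved into a private network namespace, initiator port left in the default namespace, 10.0.0.1/10.0.0.2 on one /24, TCP port 4420 opened) can be reproduced by hand with roughly the sequence below. This is a minimal sketch assuming the same rig: cvl_0_0/cvl_0_1 are simply the names this machine's E810 ports enumerate as, so substitute your own interfaces.

    # sketch of the netns plumbing exercised by nvmftestinit (names as seen in this log)
    TARGET_IF=cvl_0_0            # moved into the target namespace
    INITIATOR_IF=cvl_0_1         # stays in the default namespace
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

    # sanity checks, matching the ping output above
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1
    modprobe nvme-tcp

The target application (nvmf_tgt, pid 2344419 here) is then launched inside that namespace via ip netns exec, which is why it listens on 10.0.0.2 while nvme-cli on the host side discovers and connects from 10.0.0.1.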
00:15:38.561 [2024-07-14 10:24:23.429571] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:38.561 EAL: No free 2048 kB hugepages reported on node 1 00:15:38.561 [2024-07-14 10:24:23.503003] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:38.820 [2024-07-14 10:24:23.545763] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:38.820 [2024-07-14 10:24:23.545802] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:38.820 [2024-07-14 10:24:23.545809] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:38.820 [2024-07-14 10:24:23.545816] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:38.820 [2024-07-14 10:24:23.545821] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:38.820 [2024-07-14 10:24:23.545881] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:38.820 [2024-07-14 10:24:23.545992] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:38.820 [2024-07-14 10:24:23.546098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.820 [2024-07-14 10:24:23.546099] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:39.388 10:24:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:39.389 10:24:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:15:39.389 10:24:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:39.389 10:24:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:39.389 10:24:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:39.389 10:24:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:39.389 10:24:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:39.389 10:24:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.389 10:24:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:39.389 [2024-07-14 10:24:24.290411] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:39.389 10:24:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.389 10:24:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:39.389 10:24:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.389 10:24:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:39.389 Malloc0 00:15:39.389 10:24:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.389 10:24:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:39.389 10:24:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.389 10:24:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:39.389 Malloc1 00:15:39.389 10:24:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.389 10:24:24 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:15:39.389 10:24:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.389 10:24:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:39.389 10:24:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.389 10:24:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:39.389 10:24:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.389 10:24:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:39.389 10:24:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.389 10:24:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:39.389 10:24:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.389 10:24:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:39.389 10:24:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.389 10:24:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:39.389 10:24:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.389 10:24:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:39.389 [2024-07-14 10:24:24.371394] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:39.648 10:24:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.648 10:24:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:39.648 10:24:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.648 10:24:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:39.648 10:24:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.648 10:24:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:15:39.648 00:15:39.648 Discovery Log Number of Records 2, Generation counter 2 00:15:39.648 =====Discovery Log Entry 0====== 00:15:39.648 trtype: tcp 00:15:39.648 adrfam: ipv4 00:15:39.648 subtype: current discovery subsystem 00:15:39.648 treq: not required 00:15:39.648 portid: 0 00:15:39.648 trsvcid: 4420 00:15:39.648 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:39.648 traddr: 10.0.0.2 00:15:39.648 eflags: explicit discovery connections, duplicate discovery information 00:15:39.648 sectype: none 00:15:39.648 =====Discovery Log Entry 1====== 00:15:39.648 trtype: tcp 00:15:39.648 adrfam: ipv4 00:15:39.648 subtype: nvme subsystem 00:15:39.648 treq: not required 00:15:39.648 portid: 0 00:15:39.648 trsvcid: 4420 00:15:39.648 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:39.648 traddr: 10.0.0.2 00:15:39.648 eflags: none 00:15:39.648 sectype: none 00:15:39.648 10:24:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:15:39.648 10:24:24 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:15:39.648 10:24:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:39.648 10:24:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:39.648 10:24:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:39.648 10:24:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:39.648 10:24:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:39.648 10:24:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:39.648 10:24:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:39.648 10:24:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:15:39.648 10:24:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:41.026 10:24:25 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:41.026 10:24:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:15:41.026 10:24:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:41.026 10:24:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:15:41.026 10:24:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:15:41.026 10:24:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:15:42.941 10:24:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:42.941 10:24:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:42.941 10:24:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:42.941 10:24:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:15:42.941 10:24:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:42.941 10:24:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:15:42.941 10:24:27 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:15:42.941 10:24:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:42.941 10:24:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:42.941 10:24:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:42.941 10:24:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:42.941 10:24:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:42.941 10:24:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:42.941 10:24:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:42.941 10:24:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:42.941 10:24:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:15:42.941 10:24:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:42.941 10:24:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:42.941 10:24:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:15:42.941 10:24:27 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:42.941 10:24:27 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:15:42.941 /dev/nvme0n1 ]] 00:15:42.941 10:24:27 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:15:42.941 10:24:27 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:15:42.941 10:24:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:42.941 10:24:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:42.941 10:24:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:42.941 10:24:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:42.941 10:24:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:42.941 10:24:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:42.941 10:24:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:42.941 10:24:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:42.941 10:24:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:15:42.941 10:24:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:42.941 10:24:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:42.941 10:24:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:15:42.941 10:24:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:42.941 10:24:27 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:15:42.941 10:24:27 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:42.941 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:42.941 10:24:27 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:42.941 10:24:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:15:42.941 10:24:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:42.941 10:24:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:42.941 10:24:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:42.941 10:24:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:42.941 10:24:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:15:42.941 10:24:27 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:15:42.941 10:24:27 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:42.941 10:24:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.941 10:24:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:42.941 10:24:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.941 10:24:27 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:42.941 10:24:27 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:15:42.941 10:24:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:42.941 10:24:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:15:42.941 10:24:27 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:42.941 10:24:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:15:42.941 10:24:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:42.941 10:24:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:42.941 rmmod nvme_tcp 00:15:42.941 rmmod nvme_fabrics 00:15:42.941 rmmod nvme_keyring 00:15:42.941 10:24:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:42.941 10:24:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:15:42.941 10:24:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:15:42.941 10:24:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 2344419 ']' 00:15:42.941 10:24:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 2344419 00:15:42.941 10:24:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 2344419 ']' 00:15:42.941 10:24:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 2344419 00:15:42.941 10:24:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:15:42.941 10:24:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:42.941 10:24:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2344419 00:15:42.941 10:24:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:42.942 10:24:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:42.942 10:24:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2344419' 00:15:42.942 killing process with pid 2344419 00:15:42.942 10:24:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 2344419 00:15:42.942 10:24:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 2344419 00:15:43.200 10:24:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:43.200 10:24:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:43.200 10:24:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:43.200 10:24:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:43.200 10:24:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:43.200 10:24:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:43.200 10:24:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:43.200 10:24:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:45.803 10:24:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:45.803 00:15:45.803 real 0m12.715s 00:15:45.803 user 0m19.885s 00:15:45.803 sys 0m4.916s 00:15:45.803 10:24:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:45.803 10:24:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:45.803 ************************************ 00:15:45.803 END TEST nvmf_nvme_cli 00:15:45.803 ************************************ 00:15:45.803 10:24:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:45.803 10:24:30 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:15:45.803 10:24:30 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:45.803 10:24:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:45.803 10:24:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:45.803 10:24:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:45.803 ************************************ 00:15:45.803 START TEST nvmf_vfio_user 00:15:45.803 ************************************ 00:15:45.803 10:24:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:45.803 * Looking for test storage... 00:15:45.803 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:45.803 10:24:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:45.803 10:24:30 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:15:45.803 10:24:30 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:45.803 10:24:30 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:45.803 10:24:30 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:45.803 10:24:30 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:45.803 10:24:30 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:45.803 10:24:30 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:45.803 10:24:30 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:45.803 10:24:30 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:45.803 10:24:30 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:45.803 10:24:30 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:45.803 10:24:30 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:45.803 10:24:30 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:45.803 10:24:30 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:45.803 10:24:30 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:45.803 10:24:30 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:45.803 10:24:30 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:45.803 10:24:30 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:45.803 10:24:30 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:45.803 10:24:30 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:45.803 10:24:30 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:45.803 10:24:30 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.803 10:24:30 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.803 10:24:30 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.803 10:24:30 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:15:45.803 10:24:30 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.803 10:24:30 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:15:45.803 10:24:30 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:45.803 10:24:30 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:45.803 10:24:30 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:45.803 10:24:30 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:45.803 10:24:30 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:45.803 10:24:30 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:45.803 10:24:30 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:45.803 10:24:30 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:45.803 10:24:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:45.803 10:24:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:45.803 10:24:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:15:45.803 
10:24:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:45.803 10:24:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:45.803 10:24:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:45.803 10:24:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:15:45.803 10:24:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:15:45.803 10:24:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:15:45.803 10:24:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:15:45.803 10:24:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2345691 00:15:45.803 10:24:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2345691' 00:15:45.803 Process pid: 2345691 00:15:45.803 10:24:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:45.803 10:24:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2345691 00:15:45.803 10:24:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:15:45.803 10:24:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 2345691 ']' 00:15:45.803 10:24:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:45.803 10:24:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:45.803 10:24:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:45.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:45.803 10:24:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:45.803 10:24:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:45.803 [2024-07-14 10:24:30.440022] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:15:45.803 [2024-07-14 10:24:30.440071] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:45.803 EAL: No free 2048 kB hugepages reported on node 1 00:15:45.803 [2024-07-14 10:24:30.508066] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:45.803 [2024-07-14 10:24:30.549684] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:45.803 [2024-07-14 10:24:30.549726] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:45.803 [2024-07-14 10:24:30.549734] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:45.803 [2024-07-14 10:24:30.549740] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:45.803 [2024-07-14 10:24:30.549745] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
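The per-device setup that nvmf_vfio_user.sh drives next through rpc.py boils down to roughly the following sequence (shown for the first device; paths, bdev and subsystem names are the ones used in this run, and the second device repeats the same steps with Malloc2/cnode2/vfio-user2):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # one-time: register the vfio-user transport with the running nvmf_tgt
    $rpc nvmf_create_transport -t VFIOUSER

    # per device: a malloc bdev, a subsystem, and a vfio-user listener rooted in a directory
    mkdir -p /var/run/vfio-user/domain/vfio-user1/1
    $rpc bdev_malloc_create 64 512 -b Malloc1
    $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
    $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER \
        -a /var/run/vfio-user/domain/vfio-user1/1 -s 0

    # an initiator then addresses the controller by that directory, e.g.:
    # spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'

With a vfio-user listener the transport address is the socket directory rather than an IP/port pair, which is what the spdk_nvme_identify invocation traced further below exercises.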
00:15:45.803 [2024-07-14 10:24:30.550175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:45.803 [2024-07-14 10:24:30.550281] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:45.803 [2024-07-14 10:24:30.550316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:45.803 [2024-07-14 10:24:30.550317] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:45.804 10:24:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:45.804 10:24:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:15:45.804 10:24:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:46.740 10:24:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:15:46.999 10:24:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:46.999 10:24:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:46.999 10:24:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:46.999 10:24:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:46.999 10:24:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:47.258 Malloc1 00:15:47.258 10:24:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:47.258 10:24:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:47.517 10:24:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:47.776 10:24:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:47.776 10:24:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:47.776 10:24:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:48.035 Malloc2 00:15:48.035 10:24:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:48.035 10:24:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:48.293 10:24:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:48.553 10:24:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:15:48.553 10:24:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:15:48.553 10:24:33 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:48.553 10:24:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:48.553 10:24:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:15:48.553 10:24:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:48.553 [2024-07-14 10:24:33.360714] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:15:48.553 [2024-07-14 10:24:33.360751] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2346238 ] 00:15:48.553 EAL: No free 2048 kB hugepages reported on node 1 00:15:48.553 [2024-07-14 10:24:33.391746] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:15:48.553 [2024-07-14 10:24:33.399484] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:48.553 [2024-07-14 10:24:33.399505] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fa427b13000 00:15:48.553 [2024-07-14 10:24:33.400483] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:48.553 [2024-07-14 10:24:33.401483] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:48.553 [2024-07-14 10:24:33.402493] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:48.553 [2024-07-14 10:24:33.403497] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:48.553 [2024-07-14 10:24:33.404505] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:48.553 [2024-07-14 10:24:33.405508] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:48.553 [2024-07-14 10:24:33.406510] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:48.553 [2024-07-14 10:24:33.407516] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:48.553 [2024-07-14 10:24:33.408527] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:48.553 [2024-07-14 10:24:33.408540] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fa4268d9000 00:15:48.553 [2024-07-14 10:24:33.409486] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:48.553 [2024-07-14 10:24:33.420087] vfio_user_pci.c: 
386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:15:48.553 [2024-07-14 10:24:33.420116] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:15:48.553 [2024-07-14 10:24:33.424619] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:48.553 [2024-07-14 10:24:33.424655] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:48.553 [2024-07-14 10:24:33.424729] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:15:48.553 [2024-07-14 10:24:33.424747] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:15:48.553 [2024-07-14 10:24:33.424752] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:15:48.553 [2024-07-14 10:24:33.425618] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:15:48.553 [2024-07-14 10:24:33.425628] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:15:48.554 [2024-07-14 10:24:33.425634] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:15:48.554 [2024-07-14 10:24:33.426616] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:48.554 [2024-07-14 10:24:33.426623] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:15:48.554 [2024-07-14 10:24:33.426630] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:15:48.554 [2024-07-14 10:24:33.427620] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:15:48.554 [2024-07-14 10:24:33.427628] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:48.554 [2024-07-14 10:24:33.428629] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:15:48.554 [2024-07-14 10:24:33.428637] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:15:48.554 [2024-07-14 10:24:33.428642] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:15:48.554 [2024-07-14 10:24:33.428647] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:48.554 [2024-07-14 10:24:33.428752] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:15:48.554 [2024-07-14 10:24:33.428757] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:48.554 [2024-07-14 10:24:33.428762] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:15:48.554 [2024-07-14 10:24:33.429638] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:15:48.554 [2024-07-14 10:24:33.430639] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:15:48.554 [2024-07-14 10:24:33.431647] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:48.554 [2024-07-14 10:24:33.432641] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:48.554 [2024-07-14 10:24:33.432700] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:48.554 [2024-07-14 10:24:33.433648] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:15:48.554 [2024-07-14 10:24:33.433657] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:48.554 [2024-07-14 10:24:33.433661] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:15:48.554 [2024-07-14 10:24:33.433678] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:15:48.554 [2024-07-14 10:24:33.433685] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:15:48.554 [2024-07-14 10:24:33.433700] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:48.554 [2024-07-14 10:24:33.433705] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:48.554 [2024-07-14 10:24:33.433717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:48.554 [2024-07-14 10:24:33.433753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:48.554 [2024-07-14 10:24:33.433762] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:15:48.554 [2024-07-14 10:24:33.433768] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:15:48.554 [2024-07-14 10:24:33.433772] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:15:48.554 [2024-07-14 10:24:33.433777] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:48.554 [2024-07-14 10:24:33.433781] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:15:48.554 [2024-07-14 10:24:33.433785] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:15:48.554 [2024-07-14 10:24:33.433789] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:15:48.554 [2024-07-14 10:24:33.433796] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:15:48.554 [2024-07-14 10:24:33.433805] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:48.554 [2024-07-14 10:24:33.433817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:48.554 [2024-07-14 10:24:33.433829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:48.554 [2024-07-14 10:24:33.433836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:48.554 [2024-07-14 10:24:33.433847] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:48.554 [2024-07-14 10:24:33.433855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:48.554 [2024-07-14 10:24:33.433859] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:48.554 [2024-07-14 10:24:33.433867] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:48.554 [2024-07-14 10:24:33.433875] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:48.554 [2024-07-14 10:24:33.433883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:48.554 [2024-07-14 10:24:33.433888] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:15:48.554 [2024-07-14 10:24:33.433893] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:48.554 [2024-07-14 10:24:33.433899] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:15:48.554 [2024-07-14 10:24:33.433904] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:15:48.554 [2024-07-14 10:24:33.433912] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:48.554 [2024-07-14 10:24:33.433925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:48.554 [2024-07-14 10:24:33.433973] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:15:48.554 [2024-07-14 10:24:33.433980] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:48.554 [2024-07-14 10:24:33.433987] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:48.554 [2024-07-14 10:24:33.433991] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:48.554 [2024-07-14 10:24:33.433996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:48.554 [2024-07-14 10:24:33.434009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:48.554 [2024-07-14 10:24:33.434017] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:15:48.554 [2024-07-14 10:24:33.434029] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:15:48.554 [2024-07-14 10:24:33.434036] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:15:48.554 [2024-07-14 10:24:33.434042] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:48.554 [2024-07-14 10:24:33.434046] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:48.554 [2024-07-14 10:24:33.434052] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:48.554 [2024-07-14 10:24:33.434069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:48.554 [2024-07-14 10:24:33.434082] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:48.554 [2024-07-14 10:24:33.434089] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:48.554 [2024-07-14 10:24:33.434095] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:48.554 [2024-07-14 10:24:33.434099] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:48.554 [2024-07-14 10:24:33.434104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:48.554 [2024-07-14 10:24:33.434119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:48.554 [2024-07-14 10:24:33.434126] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:48.554 [2024-07-14 10:24:33.434132] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 
00:15:48.554 [2024-07-14 10:24:33.434139] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:15:48.554 [2024-07-14 10:24:33.434144] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:15:48.554 [2024-07-14 10:24:33.434149] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:48.554 [2024-07-14 10:24:33.434154] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:15:48.554 [2024-07-14 10:24:33.434158] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:15:48.554 [2024-07-14 10:24:33.434162] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:15:48.554 [2024-07-14 10:24:33.434167] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:15:48.554 [2024-07-14 10:24:33.434183] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:48.554 [2024-07-14 10:24:33.434193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:48.554 [2024-07-14 10:24:33.434203] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:48.554 [2024-07-14 10:24:33.434210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:48.554 [2024-07-14 10:24:33.434219] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:48.554 [2024-07-14 10:24:33.434232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:48.554 [2024-07-14 10:24:33.434242] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:48.554 [2024-07-14 10:24:33.434252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:48.554 [2024-07-14 10:24:33.434264] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:48.554 [2024-07-14 10:24:33.434268] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:48.554 [2024-07-14 10:24:33.434271] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:48.554 [2024-07-14 10:24:33.434276] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:48.554 [2024-07-14 10:24:33.434282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:48.554 [2024-07-14 10:24:33.434288] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:48.554 
[2024-07-14 10:24:33.434292] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:48.554 [2024-07-14 10:24:33.434297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:48.554 [2024-07-14 10:24:33.434303] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:48.554 [2024-07-14 10:24:33.434307] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:48.554 [2024-07-14 10:24:33.434313] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:48.554 [2024-07-14 10:24:33.434319] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:48.554 [2024-07-14 10:24:33.434323] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:48.554 [2024-07-14 10:24:33.434328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:48.554 [2024-07-14 10:24:33.434335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:48.554 [2024-07-14 10:24:33.434345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:48.554 [2024-07-14 10:24:33.434355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:48.554 [2024-07-14 10:24:33.434361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:48.554 ===================================================== 00:15:48.554 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:48.554 ===================================================== 00:15:48.554 Controller Capabilities/Features 00:15:48.554 ================================ 00:15:48.554 Vendor ID: 4e58 00:15:48.554 Subsystem Vendor ID: 4e58 00:15:48.554 Serial Number: SPDK1 00:15:48.554 Model Number: SPDK bdev Controller 00:15:48.554 Firmware Version: 24.09 00:15:48.554 Recommended Arb Burst: 6 00:15:48.554 IEEE OUI Identifier: 8d 6b 50 00:15:48.554 Multi-path I/O 00:15:48.554 May have multiple subsystem ports: Yes 00:15:48.554 May have multiple controllers: Yes 00:15:48.554 Associated with SR-IOV VF: No 00:15:48.554 Max Data Transfer Size: 131072 00:15:48.554 Max Number of Namespaces: 32 00:15:48.554 Max Number of I/O Queues: 127 00:15:48.554 NVMe Specification Version (VS): 1.3 00:15:48.554 NVMe Specification Version (Identify): 1.3 00:15:48.554 Maximum Queue Entries: 256 00:15:48.554 Contiguous Queues Required: Yes 00:15:48.554 Arbitration Mechanisms Supported 00:15:48.554 Weighted Round Robin: Not Supported 00:15:48.554 Vendor Specific: Not Supported 00:15:48.554 Reset Timeout: 15000 ms 00:15:48.554 Doorbell Stride: 4 bytes 00:15:48.554 NVM Subsystem Reset: Not Supported 00:15:48.554 Command Sets Supported 00:15:48.554 NVM Command Set: Supported 00:15:48.554 Boot Partition: Not Supported 00:15:48.554 Memory Page Size Minimum: 4096 bytes 00:15:48.554 Memory Page Size Maximum: 4096 bytes 00:15:48.554 Persistent Memory Region: Not Supported 
00:15:48.554 Optional Asynchronous Events Supported 00:15:48.554 Namespace Attribute Notices: Supported 00:15:48.554 Firmware Activation Notices: Not Supported 00:15:48.554 ANA Change Notices: Not Supported 00:15:48.554 PLE Aggregate Log Change Notices: Not Supported 00:15:48.554 LBA Status Info Alert Notices: Not Supported 00:15:48.554 EGE Aggregate Log Change Notices: Not Supported 00:15:48.554 Normal NVM Subsystem Shutdown event: Not Supported 00:15:48.554 Zone Descriptor Change Notices: Not Supported 00:15:48.554 Discovery Log Change Notices: Not Supported 00:15:48.554 Controller Attributes 00:15:48.554 128-bit Host Identifier: Supported 00:15:48.554 Non-Operational Permissive Mode: Not Supported 00:15:48.554 NVM Sets: Not Supported 00:15:48.554 Read Recovery Levels: Not Supported 00:15:48.554 Endurance Groups: Not Supported 00:15:48.554 Predictable Latency Mode: Not Supported 00:15:48.554 Traffic Based Keep ALive: Not Supported 00:15:48.554 Namespace Granularity: Not Supported 00:15:48.554 SQ Associations: Not Supported 00:15:48.554 UUID List: Not Supported 00:15:48.554 Multi-Domain Subsystem: Not Supported 00:15:48.554 Fixed Capacity Management: Not Supported 00:15:48.554 Variable Capacity Management: Not Supported 00:15:48.554 Delete Endurance Group: Not Supported 00:15:48.554 Delete NVM Set: Not Supported 00:15:48.554 Extended LBA Formats Supported: Not Supported 00:15:48.554 Flexible Data Placement Supported: Not Supported 00:15:48.554 00:15:48.554 Controller Memory Buffer Support 00:15:48.554 ================================ 00:15:48.554 Supported: No 00:15:48.554 00:15:48.554 Persistent Memory Region Support 00:15:48.554 ================================ 00:15:48.554 Supported: No 00:15:48.554 00:15:48.554 Admin Command Set Attributes 00:15:48.554 ============================ 00:15:48.554 Security Send/Receive: Not Supported 00:15:48.554 Format NVM: Not Supported 00:15:48.554 Firmware Activate/Download: Not Supported 00:15:48.554 Namespace Management: Not Supported 00:15:48.554 Device Self-Test: Not Supported 00:15:48.554 Directives: Not Supported 00:15:48.554 NVMe-MI: Not Supported 00:15:48.554 Virtualization Management: Not Supported 00:15:48.554 Doorbell Buffer Config: Not Supported 00:15:48.554 Get LBA Status Capability: Not Supported 00:15:48.554 Command & Feature Lockdown Capability: Not Supported 00:15:48.554 Abort Command Limit: 4 00:15:48.554 Async Event Request Limit: 4 00:15:48.554 Number of Firmware Slots: N/A 00:15:48.554 Firmware Slot 1 Read-Only: N/A 00:15:48.554 Firmware Activation Without Reset: N/A 00:15:48.555 Multiple Update Detection Support: N/A 00:15:48.555 Firmware Update Granularity: No Information Provided 00:15:48.555 Per-Namespace SMART Log: No 00:15:48.555 Asymmetric Namespace Access Log Page: Not Supported 00:15:48.555 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:15:48.555 Command Effects Log Page: Supported 00:15:48.555 Get Log Page Extended Data: Supported 00:15:48.555 Telemetry Log Pages: Not Supported 00:15:48.555 Persistent Event Log Pages: Not Supported 00:15:48.555 Supported Log Pages Log Page: May Support 00:15:48.555 Commands Supported & Effects Log Page: Not Supported 00:15:48.555 Feature Identifiers & Effects Log Page:May Support 00:15:48.555 NVMe-MI Commands & Effects Log Page: May Support 00:15:48.555 Data Area 4 for Telemetry Log: Not Supported 00:15:48.555 Error Log Page Entries Supported: 128 00:15:48.555 Keep Alive: Supported 00:15:48.555 Keep Alive Granularity: 10000 ms 00:15:48.555 00:15:48.555 NVM Command Set Attributes 
00:15:48.555 ========================== 00:15:48.555 Submission Queue Entry Size 00:15:48.555 Max: 64 00:15:48.555 Min: 64 00:15:48.555 Completion Queue Entry Size 00:15:48.555 Max: 16 00:15:48.555 Min: 16 00:15:48.555 Number of Namespaces: 32 00:15:48.555 Compare Command: Supported 00:15:48.555 Write Uncorrectable Command: Not Supported 00:15:48.555 Dataset Management Command: Supported 00:15:48.555 Write Zeroes Command: Supported 00:15:48.555 Set Features Save Field: Not Supported 00:15:48.555 Reservations: Not Supported 00:15:48.555 Timestamp: Not Supported 00:15:48.555 Copy: Supported 00:15:48.555 Volatile Write Cache: Present 00:15:48.555 Atomic Write Unit (Normal): 1 00:15:48.555 Atomic Write Unit (PFail): 1 00:15:48.555 Atomic Compare & Write Unit: 1 00:15:48.555 Fused Compare & Write: Supported 00:15:48.555 Scatter-Gather List 00:15:48.555 SGL Command Set: Supported (Dword aligned) 00:15:48.555 SGL Keyed: Not Supported 00:15:48.555 SGL Bit Bucket Descriptor: Not Supported 00:15:48.555 SGL Metadata Pointer: Not Supported 00:15:48.555 Oversized SGL: Not Supported 00:15:48.555 SGL Metadata Address: Not Supported 00:15:48.555 SGL Offset: Not Supported 00:15:48.555 Transport SGL Data Block: Not Supported 00:15:48.555 Replay Protected Memory Block: Not Supported 00:15:48.555 00:15:48.555 Firmware Slot Information 00:15:48.555 ========================= 00:15:48.555 Active slot: 1 00:15:48.555 Slot 1 Firmware Revision: 24.09 00:15:48.555 00:15:48.555 00:15:48.555 Commands Supported and Effects 00:15:48.555 ============================== 00:15:48.555 Admin Commands 00:15:48.555 -------------- 00:15:48.555 Get Log Page (02h): Supported 00:15:48.555 Identify (06h): Supported 00:15:48.555 Abort (08h): Supported 00:15:48.555 Set Features (09h): Supported 00:15:48.555 Get Features (0Ah): Supported 00:15:48.555 Asynchronous Event Request (0Ch): Supported 00:15:48.555 Keep Alive (18h): Supported 00:15:48.555 I/O Commands 00:15:48.555 ------------ 00:15:48.555 Flush (00h): Supported LBA-Change 00:15:48.555 Write (01h): Supported LBA-Change 00:15:48.555 Read (02h): Supported 00:15:48.555 Compare (05h): Supported 00:15:48.555 Write Zeroes (08h): Supported LBA-Change 00:15:48.555 Dataset Management (09h): Supported LBA-Change 00:15:48.555 Copy (19h): Supported LBA-Change 00:15:48.555 00:15:48.555 Error Log 00:15:48.555 ========= 00:15:48.555 00:15:48.555 Arbitration 00:15:48.555 =========== 00:15:48.555 Arbitration Burst: 1 00:15:48.555 00:15:48.555 Power Management 00:15:48.555 ================ 00:15:48.555 Number of Power States: 1 00:15:48.555 Current Power State: Power State #0 00:15:48.555 Power State #0: 00:15:48.555 Max Power: 0.00 W 00:15:48.555 Non-Operational State: Operational 00:15:48.555 Entry Latency: Not Reported 00:15:48.555 Exit Latency: Not Reported 00:15:48.555 Relative Read Throughput: 0 00:15:48.555 Relative Read Latency: 0 00:15:48.555 Relative Write Throughput: 0 00:15:48.555 Relative Write Latency: 0 00:15:48.555 Idle Power: Not Reported 00:15:48.555 Active Power: Not Reported 00:15:48.555 Non-Operational Permissive Mode: Not Supported 00:15:48.555 00:15:48.555 Health Information 00:15:48.555 ================== 00:15:48.555 Critical Warnings: 00:15:48.555 Available Spare Space: OK 00:15:48.555 Temperature: OK 00:15:48.555 Device Reliability: OK 00:15:48.555 Read Only: No 00:15:48.555 Volatile Memory Backup: OK 00:15:48.555 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:48.555 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:48.555 Available Spare: 0% 00:15:48.555 
Available Spare Threshold: 0% 00:15:48.555 [2024-07-14 10:24:33.434455] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:48.555 [2024-07-14 10:24:33.434466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:48.555 [2024-07-14 10:24:33.434495] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:15:48.555 [2024-07-14 10:24:33.434504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:48.555 [2024-07-14 10:24:33.434510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:48.555 [2024-07-14 10:24:33.434515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:48.555 [2024-07-14 10:24:33.434521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:48.555 [2024-07-14 10:24:33.434660] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:48.555 [2024-07-14 10:24:33.434668] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:15:48.555 [2024-07-14 10:24:33.435665] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:48.555 [2024-07-14 10:24:33.435715] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:15:48.555 [2024-07-14 10:24:33.435721] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:15:48.555 [2024-07-14 10:24:33.436668] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:15:48.555 [2024-07-14 10:24:33.436678] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:15:48.555 [2024-07-14 10:24:33.436725] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:15:48.555 [2024-07-14 10:24:33.442234] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:48.555 Life Percentage Used: 0% 00:15:48.555 Data Units Read: 0 00:15:48.555 Data Units Written: 0 00:15:48.555 Host Read Commands: 0 00:15:48.555 Host Write Commands: 0 00:15:48.555 Controller Busy Time: 0 minutes 00:15:48.555 Power Cycles: 0 00:15:48.555 Power On Hours: 0 hours 00:15:48.555 Unsafe Shutdowns: 0 00:15:48.555 Unrecoverable Media Errors: 0 00:15:48.555 Lifetime Error Log Entries: 0 00:15:48.555 Warning Temperature Time: 0 minutes 00:15:48.555 Critical Temperature Time: 0 minutes 00:15:48.555 00:15:48.555 Number of Queues 00:15:48.555 ================ 00:15:48.555 Number of I/O Submission Queues: 127 00:15:48.555 Number of I/O Completion Queues: 127 00:15:48.555 00:15:48.555 Active Namespaces 00:15:48.555 ================= 00:15:48.555 Namespace ID:1 00:15:48.555 Error Recovery Timeout: Unlimited 00:15:48.555 Command
Set Identifier: NVM (00h) 00:15:48.555 Deallocate: Supported 00:15:48.555 Deallocated/Unwritten Error: Not Supported 00:15:48.555 Deallocated Read Value: Unknown 00:15:48.555 Deallocate in Write Zeroes: Not Supported 00:15:48.555 Deallocated Guard Field: 0xFFFF 00:15:48.555 Flush: Supported 00:15:48.555 Reservation: Supported 00:15:48.555 Namespace Sharing Capabilities: Multiple Controllers 00:15:48.555 Size (in LBAs): 131072 (0GiB) 00:15:48.555 Capacity (in LBAs): 131072 (0GiB) 00:15:48.555 Utilization (in LBAs): 131072 (0GiB) 00:15:48.555 NGUID: A4B04DDC63EC42F4BCC74EBE053EF838 00:15:48.555 UUID: a4b04ddc-63ec-42f4-bcc7-4ebe053ef838 00:15:48.555 Thin Provisioning: Not Supported 00:15:48.555 Per-NS Atomic Units: Yes 00:15:48.555 Atomic Boundary Size (Normal): 0 00:15:48.555 Atomic Boundary Size (PFail): 0 00:15:48.555 Atomic Boundary Offset: 0 00:15:48.555 Maximum Single Source Range Length: 65535 00:15:48.555 Maximum Copy Length: 65535 00:15:48.555 Maximum Source Range Count: 1 00:15:48.555 NGUID/EUI64 Never Reused: No 00:15:48.555 Namespace Write Protected: No 00:15:48.555 Number of LBA Formats: 1 00:15:48.555 Current LBA Format: LBA Format #00 00:15:48.555 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:48.555 00:15:48.555 10:24:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:48.555 EAL: No free 2048 kB hugepages reported on node 1 00:15:48.813 [2024-07-14 10:24:33.658089] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:54.082 Initializing NVMe Controllers 00:15:54.082 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:54.082 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:54.082 Initialization complete. Launching workers. 00:15:54.082 ======================================================== 00:15:54.082 Latency(us) 00:15:54.082 Device Information : IOPS MiB/s Average min max 00:15:54.082 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39925.29 155.96 3205.59 964.96 7616.76 00:15:54.082 ======================================================== 00:15:54.082 Total : 39925.29 155.96 3205.59 964.96 7616.76 00:15:54.082 00:15:54.082 [2024-07-14 10:24:38.676545] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:54.082 10:24:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:54.082 EAL: No free 2048 kB hugepages reported on node 1 00:15:54.082 [2024-07-14 10:24:38.901654] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:59.358 Initializing NVMe Controllers 00:15:59.358 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:59.358 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:59.358 Initialization complete. Launching workers. 
00:15:59.358 ======================================================== 00:15:59.358 Latency(us) 00:15:59.358 Device Information : IOPS MiB/s Average min max 00:15:59.358 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16033.87 62.63 7988.48 5980.80 15452.18 00:15:59.358 ======================================================== 00:15:59.358 Total : 16033.87 62.63 7988.48 5980.80 15452.18 00:15:59.358 00:15:59.358 [2024-07-14 10:24:43.940497] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:59.358 10:24:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:59.358 EAL: No free 2048 kB hugepages reported on node 1 00:15:59.358 [2024-07-14 10:24:44.126475] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:04.637 [2024-07-14 10:24:49.197504] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:04.637 Initializing NVMe Controllers 00:16:04.637 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:04.637 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:04.637 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:16:04.637 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:16:04.637 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:16:04.637 Initialization complete. Launching workers. 00:16:04.637 Starting thread on core 2 00:16:04.637 Starting thread on core 3 00:16:04.637 Starting thread on core 1 00:16:04.637 10:24:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:16:04.637 EAL: No free 2048 kB hugepages reported on node 1 00:16:04.637 [2024-07-14 10:24:49.477659] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:07.928 [2024-07-14 10:24:52.535347] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:07.928 Initializing NVMe Controllers 00:16:07.928 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:07.928 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:07.928 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:16:07.928 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:16:07.928 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:16:07.928 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:16:07.928 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:07.928 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:07.928 Initialization complete. Launching workers. 
00:16:07.928 Starting thread on core 1 with urgent priority queue 00:16:07.928 Starting thread on core 2 with urgent priority queue 00:16:07.928 Starting thread on core 3 with urgent priority queue 00:16:07.928 Starting thread on core 0 with urgent priority queue 00:16:07.928 SPDK bdev Controller (SPDK1 ) core 0: 9032.33 IO/s 11.07 secs/100000 ios 00:16:07.928 SPDK bdev Controller (SPDK1 ) core 1: 8178.00 IO/s 12.23 secs/100000 ios 00:16:07.928 SPDK bdev Controller (SPDK1 ) core 2: 7359.33 IO/s 13.59 secs/100000 ios 00:16:07.928 SPDK bdev Controller (SPDK1 ) core 3: 9210.00 IO/s 10.86 secs/100000 ios 00:16:07.928 ======================================================== 00:16:07.928 00:16:07.928 10:24:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:07.928 EAL: No free 2048 kB hugepages reported on node 1 00:16:07.928 [2024-07-14 10:24:52.806649] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:07.928 Initializing NVMe Controllers 00:16:07.928 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:07.928 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:07.928 Namespace ID: 1 size: 0GB 00:16:07.928 Initialization complete. 00:16:07.928 INFO: using host memory buffer for IO 00:16:07.928 Hello world! 00:16:07.928 [2024-07-14 10:24:52.840857] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:07.928 10:24:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:08.188 EAL: No free 2048 kB hugepages reported on node 1 00:16:08.188 [2024-07-14 10:24:53.108584] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:09.576 Initializing NVMe Controllers 00:16:09.576 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:09.576 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:09.576 Initialization complete. Launching workers. 
00:16:09.576 submit (in ns) avg, min, max = 6664.3, 3275.7, 4001061.7 00:16:09.576 complete (in ns) avg, min, max = 20580.9, 1791.3, 4000810.4 00:16:09.576 00:16:09.576 Submit histogram 00:16:09.576 ================ 00:16:09.576 Range in us Cumulative Count 00:16:09.576 3.270 - 3.283: 0.0243% ( 4) 00:16:09.576 3.283 - 3.297: 0.0608% ( 6) 00:16:09.576 3.297 - 3.311: 0.1521% ( 15) 00:16:09.576 3.311 - 3.325: 0.4441% ( 48) 00:16:09.576 3.325 - 3.339: 0.9185% ( 78) 00:16:09.576 3.339 - 3.353: 2.4211% ( 247) 00:16:09.576 3.353 - 3.367: 6.2534% ( 630) 00:16:09.576 3.367 - 3.381: 11.3206% ( 833) 00:16:09.576 3.381 - 3.395: 17.0327% ( 939) 00:16:09.576 3.395 - 3.409: 23.1948% ( 1013) 00:16:09.576 3.409 - 3.423: 29.3813% ( 1017) 00:16:09.576 3.423 - 3.437: 34.7527% ( 883) 00:16:09.576 3.437 - 3.450: 40.7507% ( 986) 00:16:09.576 3.450 - 3.464: 45.5198% ( 784) 00:16:09.576 3.464 - 3.478: 49.6989% ( 687) 00:16:09.576 3.478 - 3.492: 53.9631% ( 701) 00:16:09.576 3.492 - 3.506: 60.4295% ( 1063) 00:16:09.576 3.506 - 3.520: 66.3240% ( 969) 00:16:09.576 3.520 - 3.534: 70.5761% ( 699) 00:16:09.576 3.534 - 3.548: 76.0995% ( 908) 00:16:09.576 3.548 - 3.562: 80.5645% ( 734) 00:16:09.576 3.562 - 3.590: 85.7230% ( 848) 00:16:09.576 3.590 - 3.617: 87.4323% ( 281) 00:16:09.576 3.617 - 3.645: 88.1319% ( 115) 00:16:09.576 3.645 - 3.673: 89.3850% ( 206) 00:16:09.576 3.673 - 3.701: 90.9605% ( 259) 00:16:09.576 3.701 - 3.729: 92.8402% ( 309) 00:16:09.576 3.729 - 3.757: 94.6955% ( 305) 00:16:09.576 3.757 - 3.784: 96.3076% ( 265) 00:16:09.576 3.784 - 3.812: 97.5911% ( 211) 00:16:09.576 3.812 - 3.840: 98.4671% ( 144) 00:16:09.576 3.840 - 3.868: 99.0145% ( 90) 00:16:09.576 3.868 - 3.896: 99.3552% ( 56) 00:16:09.576 3.896 - 3.923: 99.5499% ( 32) 00:16:09.576 3.923 - 3.951: 99.5985% ( 8) 00:16:09.576 3.951 - 3.979: 99.6107% ( 2) 00:16:09.576 5.037 - 5.064: 99.6168% ( 1) 00:16:09.576 5.092 - 5.120: 99.6228% ( 1) 00:16:09.576 5.176 - 5.203: 99.6289% ( 1) 00:16:09.576 5.203 - 5.231: 99.6350% ( 1) 00:16:09.576 5.231 - 5.259: 99.6411% ( 1) 00:16:09.576 5.287 - 5.315: 99.6472% ( 1) 00:16:09.576 5.315 - 5.343: 99.6533% ( 1) 00:16:09.576 5.370 - 5.398: 99.6715% ( 3) 00:16:09.576 5.398 - 5.426: 99.6837% ( 2) 00:16:09.576 5.454 - 5.482: 99.6898% ( 1) 00:16:09.576 5.482 - 5.510: 99.7019% ( 2) 00:16:09.576 5.537 - 5.565: 99.7080% ( 1) 00:16:09.576 5.565 - 5.593: 99.7141% ( 1) 00:16:09.576 5.593 - 5.621: 99.7202% ( 1) 00:16:09.576 5.621 - 5.649: 99.7384% ( 3) 00:16:09.576 5.649 - 5.677: 99.7445% ( 1) 00:16:09.576 5.816 - 5.843: 99.7567% ( 2) 00:16:09.576 5.843 - 5.871: 99.7628% ( 1) 00:16:09.576 5.871 - 5.899: 99.7688% ( 1) 00:16:09.576 6.066 - 6.094: 99.7810% ( 2) 00:16:09.576 6.094 - 6.122: 99.7871% ( 1) 00:16:09.576 6.150 - 6.177: 99.7993% ( 2) 00:16:09.576 6.177 - 6.205: 99.8053% ( 1) 00:16:09.576 6.344 - 6.372: 99.8114% ( 1) 00:16:09.576 6.400 - 6.428: 99.8175% ( 1) 00:16:09.576 6.428 - 6.456: 99.8236% ( 1) 00:16:09.576 6.456 - 6.483: 99.8358% ( 2) 00:16:09.576 6.483 - 6.511: 99.8418% ( 1) 00:16:09.576 6.567 - 6.595: 99.8479% ( 1) 00:16:09.576 6.623 - 6.650: 99.8540% ( 1) 00:16:09.576 6.650 - 6.678: 99.8601% ( 1) 00:16:09.576 6.706 - 6.734: 99.8662% ( 1) 00:16:09.576 7.235 - 7.290: 99.8723% ( 1) 00:16:09.576 7.457 - 7.513: 99.8783% ( 1) 00:16:09.576 7.903 - 7.958: 99.8844% ( 1) 00:16:09.576 7.958 - 8.014: 99.8905% ( 1) 00:16:09.576 8.515 - 8.570: 99.8966% ( 1) 00:16:09.576 8.682 - 8.737: 99.9027% ( 1) 00:16:09.576 8.849 - 8.904: 99.9088% ( 1) 00:16:09.576 10.129 - 10.184: 99.9148% ( 1) 00:16:09.576 10.908 - 10.963: 99.9209% ( 1) 
00:16:09.576 3989.148 - 4017.642: 100.0000% ( 13) 00:16:09.576 00:16:09.576 Complete histogram 00:16:09.576 ================== 00:16:09.576 Range in us Cumulative Count 00:16:09.576 [2024-07-14 10:24:54.129748] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:09.576 1.781 - 1.795: 0.0122% ( 2) 00:16:09.576 1.823 - 1.837: 0.3467% ( 55) 00:16:09.576 1.837 - 1.850: 1.4539% ( 182) 00:16:09.576 1.850 - 1.864: 2.7739% ( 217) 00:16:09.576 1.864 - 1.878: 3.5769% ( 132) 00:16:09.576 1.878 - 1.892: 27.5138% ( 3935) 00:16:09.576 1.892 - 1.906: 81.0086% ( 8794) 00:16:09.576 1.906 - 1.920: 91.7452% ( 1765) 00:16:09.576 1.920 - 1.934: 94.5982% ( 469) 00:16:09.576 1.934 - 1.948: 95.7844% ( 195) 00:16:09.576 1.948 - 1.962: 96.7699% ( 162) 00:16:09.576 1.962 - 1.976: 98.1568% ( 228) 00:16:09.576 1.976 - 1.990: 98.9050% ( 123) 00:16:09.576 1.990 - 2.003: 99.1484% ( 40) 00:16:09.576 2.003 - 2.017: 99.2700% ( 20) 00:16:09.576 2.017 - 2.031: 99.3187% ( 8) 00:16:09.576 2.031 - 2.045: 99.3369% ( 3) 00:16:09.576 2.045 - 2.059: 99.3430% ( 1) 00:16:09.576 2.059 - 2.073: 99.3491% ( 1) 00:16:09.576 2.073 - 2.087: 99.3613% ( 2) 00:16:09.576 2.087 - 2.101: 99.3674% ( 1) 00:16:09.576 2.115 - 2.129: 99.3734% ( 1) 00:16:09.576 2.129 - 2.143: 99.3795% ( 1) 00:16:09.576 2.184 - 2.198: 99.3856% ( 1) 00:16:09.576 3.784 - 3.812: 99.3917% ( 1) 00:16:09.576 3.812 - 3.840: 99.3978% ( 1) 00:16:09.576 3.840 - 3.868: 99.4039% ( 1) 00:16:09.576 3.868 - 3.896: 99.4099% ( 1) 00:16:09.576 4.007 - 4.035: 99.4221% ( 2) 00:16:09.576 4.063 - 4.090: 99.4282% ( 1) 00:16:09.577 4.118 - 4.146: 99.4343% ( 1) 00:16:09.577 4.285 - 4.313: 99.4404% ( 1) 00:16:09.577 4.369 - 4.397: 99.4464% ( 1) 00:16:09.577 4.424 - 4.452: 99.4525% ( 1) 00:16:09.577 4.703 - 4.730: 99.4586% ( 1) 00:16:09.577 4.758 - 4.786: 99.4647% ( 1) 00:16:09.577 5.287 - 5.315: 99.4708% ( 1) 00:16:09.577 5.315 - 5.343: 99.4769% ( 1) 00:16:09.577 5.426 - 5.454: 99.4829% ( 1) 00:16:09.577 5.927 - 5.955: 99.4890% ( 1) 00:16:09.577 6.205 - 6.233: 99.4951% ( 1) 00:16:09.577 6.233 - 6.261: 99.5073% ( 2) 00:16:09.577 6.317 - 6.344: 99.5134% ( 1) 00:16:09.577 6.650 - 6.678: 99.5194% ( 1) 00:16:09.577 8.125 - 8.181: 99.5255% ( 1) 00:16:09.577 39.847 - 40.070: 99.5316% ( 1) 00:16:09.577 3447.763 - 3462.010: 99.5377% ( 1) 00:16:09.577 3989.148 - 4017.642: 100.0000% ( 76) 00:16:09.577 00:16:09.577 10:24:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:16:09.577 10:24:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:09.577 10:24:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:16:09.577 10:24:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:16:09.577 10:24:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:09.577 [ 00:16:09.577 { 00:16:09.577 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:09.577 "subtype": "Discovery", 00:16:09.577 "listen_addresses": [], 00:16:09.577 "allow_any_host": true, 00:16:09.577 "hosts": [] 00:16:09.577 }, 00:16:09.577 { 00:16:09.577 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:09.577 "subtype": "NVMe", 00:16:09.577 "listen_addresses": [ 00:16:09.577 { 00:16:09.577 "trtype": "VFIOUSER", 00:16:09.577 "adrfam": "IPv4", 00:16:09.577 "traddr": 
"/var/run/vfio-user/domain/vfio-user1/1", 00:16:09.577 "trsvcid": "0" 00:16:09.577 } 00:16:09.577 ], 00:16:09.577 "allow_any_host": true, 00:16:09.577 "hosts": [], 00:16:09.577 "serial_number": "SPDK1", 00:16:09.577 "model_number": "SPDK bdev Controller", 00:16:09.577 "max_namespaces": 32, 00:16:09.577 "min_cntlid": 1, 00:16:09.577 "max_cntlid": 65519, 00:16:09.577 "namespaces": [ 00:16:09.577 { 00:16:09.577 "nsid": 1, 00:16:09.577 "bdev_name": "Malloc1", 00:16:09.577 "name": "Malloc1", 00:16:09.577 "nguid": "A4B04DDC63EC42F4BCC74EBE053EF838", 00:16:09.577 "uuid": "a4b04ddc-63ec-42f4-bcc7-4ebe053ef838" 00:16:09.577 } 00:16:09.577 ] 00:16:09.577 }, 00:16:09.577 { 00:16:09.577 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:09.577 "subtype": "NVMe", 00:16:09.577 "listen_addresses": [ 00:16:09.577 { 00:16:09.577 "trtype": "VFIOUSER", 00:16:09.577 "adrfam": "IPv4", 00:16:09.577 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:09.577 "trsvcid": "0" 00:16:09.577 } 00:16:09.577 ], 00:16:09.577 "allow_any_host": true, 00:16:09.577 "hosts": [], 00:16:09.577 "serial_number": "SPDK2", 00:16:09.577 "model_number": "SPDK bdev Controller", 00:16:09.577 "max_namespaces": 32, 00:16:09.577 "min_cntlid": 1, 00:16:09.577 "max_cntlid": 65519, 00:16:09.577 "namespaces": [ 00:16:09.577 { 00:16:09.577 "nsid": 1, 00:16:09.577 "bdev_name": "Malloc2", 00:16:09.577 "name": "Malloc2", 00:16:09.577 "nguid": "E473B2F30345480389A3EC287FE0326E", 00:16:09.577 "uuid": "e473b2f3-0345-4803-89a3-ec287fe0326e" 00:16:09.577 } 00:16:09.577 ] 00:16:09.577 } 00:16:09.577 ] 00:16:09.577 10:24:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:09.577 10:24:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2349734 00:16:09.577 10:24:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:09.577 10:24:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:16:09.577 10:24:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:16:09.577 10:24:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:09.577 10:24:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:16:09.577 10:24:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:16:09.577 10:24:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:09.577 10:24:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:16:09.577 EAL: No free 2048 kB hugepages reported on node 1 00:16:09.577 [2024-07-14 10:24:54.504620] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:09.900 Malloc3 00:16:09.900 10:24:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:16:09.900 [2024-07-14 10:24:54.739379] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:09.900 10:24:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:09.900 Asynchronous Event Request test 00:16:09.900 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:09.900 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:09.900 Registering asynchronous event callbacks... 00:16:09.900 Starting namespace attribute notice tests for all controllers... 00:16:09.900 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:09.900 aer_cb - Changed Namespace 00:16:09.900 Cleaning up... 00:16:10.159 [ 00:16:10.159 { 00:16:10.159 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:10.159 "subtype": "Discovery", 00:16:10.159 "listen_addresses": [], 00:16:10.159 "allow_any_host": true, 00:16:10.159 "hosts": [] 00:16:10.159 }, 00:16:10.159 { 00:16:10.159 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:10.159 "subtype": "NVMe", 00:16:10.159 "listen_addresses": [ 00:16:10.159 { 00:16:10.159 "trtype": "VFIOUSER", 00:16:10.159 "adrfam": "IPv4", 00:16:10.159 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:10.159 "trsvcid": "0" 00:16:10.159 } 00:16:10.159 ], 00:16:10.159 "allow_any_host": true, 00:16:10.159 "hosts": [], 00:16:10.159 "serial_number": "SPDK1", 00:16:10.159 "model_number": "SPDK bdev Controller", 00:16:10.159 "max_namespaces": 32, 00:16:10.159 "min_cntlid": 1, 00:16:10.159 "max_cntlid": 65519, 00:16:10.159 "namespaces": [ 00:16:10.159 { 00:16:10.159 "nsid": 1, 00:16:10.159 "bdev_name": "Malloc1", 00:16:10.159 "name": "Malloc1", 00:16:10.159 "nguid": "A4B04DDC63EC42F4BCC74EBE053EF838", 00:16:10.159 "uuid": "a4b04ddc-63ec-42f4-bcc7-4ebe053ef838" 00:16:10.159 }, 00:16:10.159 { 00:16:10.159 "nsid": 2, 00:16:10.159 "bdev_name": "Malloc3", 00:16:10.159 "name": "Malloc3", 00:16:10.159 "nguid": "728177F7C31346AD9947622DD7C164E1", 00:16:10.159 "uuid": "728177f7-c313-46ad-9947-622dd7c164e1" 00:16:10.159 } 00:16:10.159 ] 00:16:10.159 }, 00:16:10.159 { 00:16:10.159 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:10.159 "subtype": "NVMe", 00:16:10.159 "listen_addresses": [ 00:16:10.159 { 00:16:10.159 "trtype": "VFIOUSER", 00:16:10.159 "adrfam": "IPv4", 00:16:10.159 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:10.159 "trsvcid": "0" 00:16:10.159 } 00:16:10.159 ], 00:16:10.159 "allow_any_host": true, 00:16:10.159 "hosts": [], 00:16:10.159 "serial_number": "SPDK2", 00:16:10.159 "model_number": "SPDK bdev Controller", 00:16:10.159 
"max_namespaces": 32, 00:16:10.159 "min_cntlid": 1, 00:16:10.159 "max_cntlid": 65519, 00:16:10.159 "namespaces": [ 00:16:10.159 { 00:16:10.159 "nsid": 1, 00:16:10.159 "bdev_name": "Malloc2", 00:16:10.159 "name": "Malloc2", 00:16:10.159 "nguid": "E473B2F30345480389A3EC287FE0326E", 00:16:10.159 "uuid": "e473b2f3-0345-4803-89a3-ec287fe0326e" 00:16:10.159 } 00:16:10.159 ] 00:16:10.159 } 00:16:10.159 ] 00:16:10.159 10:24:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2349734 00:16:10.159 10:24:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:10.159 10:24:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:10.159 10:24:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:16:10.159 10:24:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:10.159 [2024-07-14 10:24:54.976675] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:16:10.159 [2024-07-14 10:24:54.976718] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2349866 ] 00:16:10.159 EAL: No free 2048 kB hugepages reported on node 1 00:16:10.159 [2024-07-14 10:24:55.007620] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:16:10.159 [2024-07-14 10:24:55.017488] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:10.159 [2024-07-14 10:24:55.017511] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f9a571e0000 00:16:10.159 [2024-07-14 10:24:55.018491] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:10.159 [2024-07-14 10:24:55.019495] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:10.159 [2024-07-14 10:24:55.020502] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:10.160 [2024-07-14 10:24:55.021511] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:10.160 [2024-07-14 10:24:55.022521] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:10.160 [2024-07-14 10:24:55.023528] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:10.160 [2024-07-14 10:24:55.024540] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:10.160 [2024-07-14 10:24:55.025547] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:10.160 [2024-07-14 10:24:55.026559] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:10.160 [2024-07-14 10:24:55.026569] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f9a55fa6000 00:16:10.160 [2024-07-14 10:24:55.027509] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:10.160 [2024-07-14 10:24:55.036026] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:16:10.160 [2024-07-14 10:24:55.036050] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:16:10.160 [2024-07-14 10:24:55.041136] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:10.160 [2024-07-14 10:24:55.041172] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:10.160 [2024-07-14 10:24:55.041240] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:16:10.160 [2024-07-14 10:24:55.041256] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:16:10.160 [2024-07-14 10:24:55.041261] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:16:10.160 [2024-07-14 10:24:55.042142] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:16:10.160 [2024-07-14 10:24:55.042150] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:16:10.160 [2024-07-14 10:24:55.042156] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:16:10.160 [2024-07-14 10:24:55.043145] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:10.160 [2024-07-14 10:24:55.043153] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:16:10.160 [2024-07-14 10:24:55.043160] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:16:10.160 [2024-07-14 10:24:55.044156] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:16:10.160 [2024-07-14 10:24:55.044164] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:10.160 [2024-07-14 10:24:55.045167] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:16:10.160 [2024-07-14 10:24:55.045175] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:16:10.160 [2024-07-14 10:24:55.045182] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:16:10.160 [2024-07-14 10:24:55.045187] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:10.160 [2024-07-14 10:24:55.045292] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:16:10.160 [2024-07-14 10:24:55.045297] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:10.160 [2024-07-14 10:24:55.045301] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:16:10.160 [2024-07-14 10:24:55.046168] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:16:10.160 [2024-07-14 10:24:55.047183] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:16:10.160 [2024-07-14 10:24:55.048187] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:10.160 [2024-07-14 10:24:55.049194] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:10.160 [2024-07-14 10:24:55.049233] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:10.160 [2024-07-14 10:24:55.050201] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:16:10.160 [2024-07-14 10:24:55.050208] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:10.160 [2024-07-14 10:24:55.050213] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:16:10.160 [2024-07-14 10:24:55.050233] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:16:10.160 [2024-07-14 10:24:55.050240] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:16:10.160 [2024-07-14 10:24:55.050250] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:10.160 [2024-07-14 10:24:55.050254] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:10.160 [2024-07-14 10:24:55.050265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:10.160 [2024-07-14 10:24:55.058232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:10.160 [2024-07-14 10:24:55.058243] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:16:10.160 [2024-07-14 10:24:55.058250] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:16:10.160 [2024-07-14 10:24:55.058254] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:16:10.160 [2024-07-14 10:24:55.058258] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:10.160 [2024-07-14 10:24:55.058262] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:16:10.160 [2024-07-14 10:24:55.058266] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:16:10.160 [2024-07-14 10:24:55.058273] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:16:10.160 [2024-07-14 10:24:55.058280] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:16:10.160 [2024-07-14 10:24:55.058289] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:10.160 [2024-07-14 10:24:55.066231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:10.160 [2024-07-14 10:24:55.066245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:10.160 [2024-07-14 10:24:55.066252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:10.160 [2024-07-14 10:24:55.066260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:10.160 [2024-07-14 10:24:55.066267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:10.160 [2024-07-14 10:24:55.066271] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:16:10.160 [2024-07-14 10:24:55.066279] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:10.160 [2024-07-14 10:24:55.066287] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:10.160 [2024-07-14 10:24:55.074228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:10.160 [2024-07-14 10:24:55.074235] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:16:10.160 [2024-07-14 10:24:55.074240] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:10.160 [2024-07-14 10:24:55.074246] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:16:10.160 [2024-07-14 10:24:55.074252] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:16:10.160 [2024-07-14 10:24:55.074260] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:10.160 [2024-07-14 10:24:55.082230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:10.160 [2024-07-14 10:24:55.082281] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:16:10.160 [2024-07-14 10:24:55.082289] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:16:10.160 [2024-07-14 10:24:55.082295] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:10.160 [2024-07-14 10:24:55.082299] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:10.160 [2024-07-14 10:24:55.082305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:10.160 [2024-07-14 10:24:55.090229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:10.160 [2024-07-14 10:24:55.090254] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:16:10.160 [2024-07-14 10:24:55.090263] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:16:10.160 [2024-07-14 10:24:55.090269] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:16:10.160 [2024-07-14 10:24:55.090276] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:10.160 [2024-07-14 10:24:55.090280] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:10.160 [2024-07-14 10:24:55.090286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:10.160 [2024-07-14 10:24:55.098229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:10.160 [2024-07-14 10:24:55.098241] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:10.160 [2024-07-14 10:24:55.098248] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:10.160 [2024-07-14 10:24:55.098254] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:10.160 [2024-07-14 10:24:55.098258] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:10.161 [2024-07-14 10:24:55.098264] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:10.161 [2024-07-14 10:24:55.106230] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:10.161 [2024-07-14 10:24:55.106239] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:10.161 [2024-07-14 10:24:55.106245] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:16:10.161 [2024-07-14 10:24:55.106251] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:16:10.161 [2024-07-14 10:24:55.106256] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:16:10.161 [2024-07-14 10:24:55.106261] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:10.161 [2024-07-14 10:24:55.106266] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:16:10.161 [2024-07-14 10:24:55.106270] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:16:10.161 [2024-07-14 10:24:55.106274] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:16:10.161 [2024-07-14 10:24:55.106278] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:16:10.161 [2024-07-14 10:24:55.106294] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:10.161 [2024-07-14 10:24:55.114229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:10.161 [2024-07-14 10:24:55.114241] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:10.161 [2024-07-14 10:24:55.122228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:10.161 [2024-07-14 10:24:55.122244] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:10.161 [2024-07-14 10:24:55.130229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:16:10.161 [2024-07-14 10:24:55.130241] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:10.161 [2024-07-14 10:24:55.138230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:10.161 [2024-07-14 10:24:55.138244] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:10.161 [2024-07-14 10:24:55.138249] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:10.161 [2024-07-14 10:24:55.138252] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 
00:16:10.161 [2024-07-14 10:24:55.138255] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:10.161 [2024-07-14 10:24:55.138261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:16:10.161 [2024-07-14 10:24:55.138267] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:10.161 [2024-07-14 10:24:55.138271] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:10.161 [2024-07-14 10:24:55.138276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:10.161 [2024-07-14 10:24:55.138282] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:10.161 [2024-07-14 10:24:55.138286] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:10.161 [2024-07-14 10:24:55.138292] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:10.161 [2024-07-14 10:24:55.138298] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:10.161 [2024-07-14 10:24:55.138302] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:10.161 [2024-07-14 10:24:55.138307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:10.421 [2024-07-14 10:24:55.146231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:16:10.421 [2024-07-14 10:24:55.146245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:16:10.421 [2024-07-14 10:24:55.146254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:10.421 [2024-07-14 10:24:55.146260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:10.421 ===================================================== 00:16:10.421 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:10.421 ===================================================== 00:16:10.421 Controller Capabilities/Features 00:16:10.421 ================================ 00:16:10.421 Vendor ID: 4e58 00:16:10.421 Subsystem Vendor ID: 4e58 00:16:10.421 Serial Number: SPDK2 00:16:10.421 Model Number: SPDK bdev Controller 00:16:10.421 Firmware Version: 24.09 00:16:10.421 Recommended Arb Burst: 6 00:16:10.421 IEEE OUI Identifier: 8d 6b 50 00:16:10.421 Multi-path I/O 00:16:10.421 May have multiple subsystem ports: Yes 00:16:10.421 May have multiple controllers: Yes 00:16:10.421 Associated with SR-IOV VF: No 00:16:10.421 Max Data Transfer Size: 131072 00:16:10.421 Max Number of Namespaces: 32 00:16:10.421 Max Number of I/O Queues: 127 00:16:10.421 NVMe Specification Version (VS): 1.3 00:16:10.421 NVMe Specification Version (Identify): 1.3 00:16:10.421 Maximum Queue Entries: 256 00:16:10.421 Contiguous Queues Required: Yes 00:16:10.421 Arbitration Mechanisms 
Supported 00:16:10.421 Weighted Round Robin: Not Supported 00:16:10.421 Vendor Specific: Not Supported 00:16:10.421 Reset Timeout: 15000 ms 00:16:10.421 Doorbell Stride: 4 bytes 00:16:10.421 NVM Subsystem Reset: Not Supported 00:16:10.421 Command Sets Supported 00:16:10.421 NVM Command Set: Supported 00:16:10.421 Boot Partition: Not Supported 00:16:10.421 Memory Page Size Minimum: 4096 bytes 00:16:10.421 Memory Page Size Maximum: 4096 bytes 00:16:10.421 Persistent Memory Region: Not Supported 00:16:10.421 Optional Asynchronous Events Supported 00:16:10.421 Namespace Attribute Notices: Supported 00:16:10.421 Firmware Activation Notices: Not Supported 00:16:10.421 ANA Change Notices: Not Supported 00:16:10.421 PLE Aggregate Log Change Notices: Not Supported 00:16:10.421 LBA Status Info Alert Notices: Not Supported 00:16:10.421 EGE Aggregate Log Change Notices: Not Supported 00:16:10.421 Normal NVM Subsystem Shutdown event: Not Supported 00:16:10.421 Zone Descriptor Change Notices: Not Supported 00:16:10.421 Discovery Log Change Notices: Not Supported 00:16:10.421 Controller Attributes 00:16:10.421 128-bit Host Identifier: Supported 00:16:10.421 Non-Operational Permissive Mode: Not Supported 00:16:10.421 NVM Sets: Not Supported 00:16:10.421 Read Recovery Levels: Not Supported 00:16:10.421 Endurance Groups: Not Supported 00:16:10.421 Predictable Latency Mode: Not Supported 00:16:10.421 Traffic Based Keep ALive: Not Supported 00:16:10.421 Namespace Granularity: Not Supported 00:16:10.421 SQ Associations: Not Supported 00:16:10.421 UUID List: Not Supported 00:16:10.421 Multi-Domain Subsystem: Not Supported 00:16:10.421 Fixed Capacity Management: Not Supported 00:16:10.421 Variable Capacity Management: Not Supported 00:16:10.421 Delete Endurance Group: Not Supported 00:16:10.421 Delete NVM Set: Not Supported 00:16:10.421 Extended LBA Formats Supported: Not Supported 00:16:10.421 Flexible Data Placement Supported: Not Supported 00:16:10.421 00:16:10.421 Controller Memory Buffer Support 00:16:10.421 ================================ 00:16:10.421 Supported: No 00:16:10.421 00:16:10.421 Persistent Memory Region Support 00:16:10.421 ================================ 00:16:10.421 Supported: No 00:16:10.421 00:16:10.421 Admin Command Set Attributes 00:16:10.421 ============================ 00:16:10.421 Security Send/Receive: Not Supported 00:16:10.421 Format NVM: Not Supported 00:16:10.421 Firmware Activate/Download: Not Supported 00:16:10.421 Namespace Management: Not Supported 00:16:10.421 Device Self-Test: Not Supported 00:16:10.421 Directives: Not Supported 00:16:10.421 NVMe-MI: Not Supported 00:16:10.421 Virtualization Management: Not Supported 00:16:10.421 Doorbell Buffer Config: Not Supported 00:16:10.421 Get LBA Status Capability: Not Supported 00:16:10.421 Command & Feature Lockdown Capability: Not Supported 00:16:10.421 Abort Command Limit: 4 00:16:10.421 Async Event Request Limit: 4 00:16:10.421 Number of Firmware Slots: N/A 00:16:10.421 Firmware Slot 1 Read-Only: N/A 00:16:10.421 Firmware Activation Without Reset: N/A 00:16:10.421 Multiple Update Detection Support: N/A 00:16:10.421 Firmware Update Granularity: No Information Provided 00:16:10.421 Per-Namespace SMART Log: No 00:16:10.421 Asymmetric Namespace Access Log Page: Not Supported 00:16:10.421 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:16:10.421 Command Effects Log Page: Supported 00:16:10.421 Get Log Page Extended Data: Supported 00:16:10.421 Telemetry Log Pages: Not Supported 00:16:10.421 Persistent Event Log Pages: Not Supported 
00:16:10.421 Supported Log Pages Log Page: May Support 00:16:10.421 Commands Supported & Effects Log Page: Not Supported 00:16:10.421 Feature Identifiers & Effects Log Page:May Support 00:16:10.421 NVMe-MI Commands & Effects Log Page: May Support 00:16:10.421 Data Area 4 for Telemetry Log: Not Supported 00:16:10.421 Error Log Page Entries Supported: 128 00:16:10.421 Keep Alive: Supported 00:16:10.421 Keep Alive Granularity: 10000 ms 00:16:10.421 00:16:10.421 NVM Command Set Attributes 00:16:10.421 ========================== 00:16:10.421 Submission Queue Entry Size 00:16:10.421 Max: 64 00:16:10.421 Min: 64 00:16:10.421 Completion Queue Entry Size 00:16:10.421 Max: 16 00:16:10.421 Min: 16 00:16:10.421 Number of Namespaces: 32 00:16:10.421 Compare Command: Supported 00:16:10.421 Write Uncorrectable Command: Not Supported 00:16:10.421 Dataset Management Command: Supported 00:16:10.421 Write Zeroes Command: Supported 00:16:10.421 Set Features Save Field: Not Supported 00:16:10.421 Reservations: Not Supported 00:16:10.421 Timestamp: Not Supported 00:16:10.421 Copy: Supported 00:16:10.421 Volatile Write Cache: Present 00:16:10.421 Atomic Write Unit (Normal): 1 00:16:10.421 Atomic Write Unit (PFail): 1 00:16:10.421 Atomic Compare & Write Unit: 1 00:16:10.421 Fused Compare & Write: Supported 00:16:10.421 Scatter-Gather List 00:16:10.421 SGL Command Set: Supported (Dword aligned) 00:16:10.421 SGL Keyed: Not Supported 00:16:10.421 SGL Bit Bucket Descriptor: Not Supported 00:16:10.421 SGL Metadata Pointer: Not Supported 00:16:10.421 Oversized SGL: Not Supported 00:16:10.421 SGL Metadata Address: Not Supported 00:16:10.421 SGL Offset: Not Supported 00:16:10.421 Transport SGL Data Block: Not Supported 00:16:10.421 Replay Protected Memory Block: Not Supported 00:16:10.421 00:16:10.421 Firmware Slot Information 00:16:10.421 ========================= 00:16:10.421 Active slot: 1 00:16:10.421 Slot 1 Firmware Revision: 24.09 00:16:10.421 00:16:10.421 00:16:10.421 Commands Supported and Effects 00:16:10.421 ============================== 00:16:10.421 Admin Commands 00:16:10.421 -------------- 00:16:10.421 Get Log Page (02h): Supported 00:16:10.421 Identify (06h): Supported 00:16:10.421 Abort (08h): Supported 00:16:10.421 Set Features (09h): Supported 00:16:10.421 Get Features (0Ah): Supported 00:16:10.422 Asynchronous Event Request (0Ch): Supported 00:16:10.422 Keep Alive (18h): Supported 00:16:10.422 I/O Commands 00:16:10.422 ------------ 00:16:10.422 Flush (00h): Supported LBA-Change 00:16:10.422 Write (01h): Supported LBA-Change 00:16:10.422 Read (02h): Supported 00:16:10.422 Compare (05h): Supported 00:16:10.422 Write Zeroes (08h): Supported LBA-Change 00:16:10.422 Dataset Management (09h): Supported LBA-Change 00:16:10.422 Copy (19h): Supported LBA-Change 00:16:10.422 00:16:10.422 Error Log 00:16:10.422 ========= 00:16:10.422 00:16:10.422 Arbitration 00:16:10.422 =========== 00:16:10.422 Arbitration Burst: 1 00:16:10.422 00:16:10.422 Power Management 00:16:10.422 ================ 00:16:10.422 Number of Power States: 1 00:16:10.422 Current Power State: Power State #0 00:16:10.422 Power State #0: 00:16:10.422 Max Power: 0.00 W 00:16:10.422 Non-Operational State: Operational 00:16:10.422 Entry Latency: Not Reported 00:16:10.422 Exit Latency: Not Reported 00:16:10.422 Relative Read Throughput: 0 00:16:10.422 Relative Read Latency: 0 00:16:10.422 Relative Write Throughput: 0 00:16:10.422 Relative Write Latency: 0 00:16:10.422 Idle Power: Not Reported 00:16:10.422 Active Power: Not Reported 00:16:10.422 
Non-Operational Permissive Mode: Not Supported 00:16:10.422 00:16:10.422 Health Information 00:16:10.422 ================== 00:16:10.422 Critical Warnings: 00:16:10.422 Available Spare Space: OK 00:16:10.422 Temperature: OK 00:16:10.422 Device Reliability: OK 00:16:10.422 Read Only: No 00:16:10.422 Volatile Memory Backup: OK 00:16:10.422 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:10.422 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:10.422 Available Spare: 0% 00:16:10.422 Available Sp[2024-07-14 10:24:55.146347] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:10.422 [2024-07-14 10:24:55.154231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:16:10.422 [2024-07-14 10:24:55.154260] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:16:10.422 [2024-07-14 10:24:55.154268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.422 [2024-07-14 10:24:55.154275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.422 [2024-07-14 10:24:55.154280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.422 [2024-07-14 10:24:55.154289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.422 [2024-07-14 10:24:55.154339] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:10.422 [2024-07-14 10:24:55.154349] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:16:10.422 [2024-07-14 10:24:55.155345] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:10.422 [2024-07-14 10:24:55.155387] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:16:10.422 [2024-07-14 10:24:55.155393] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:16:10.422 [2024-07-14 10:24:55.156348] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:16:10.422 [2024-07-14 10:24:55.156358] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:16:10.422 [2024-07-14 10:24:55.156403] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:16:10.422 [2024-07-14 10:24:55.157377] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:10.422 are Threshold: 0% 00:16:10.422 Life Percentage Used: 0% 00:16:10.422 Data Units Read: 0 00:16:10.422 Data Units Written: 0 00:16:10.422 Host Read Commands: 0 00:16:10.422 Host Write Commands: 0 00:16:10.422 Controller Busy Time: 0 minutes 00:16:10.422 Power Cycles: 0 00:16:10.422 Power On Hours: 0 hours 00:16:10.422 Unsafe Shutdowns: 0 00:16:10.422 Unrecoverable Media 
Errors: 0 00:16:10.422 Lifetime Error Log Entries: 0 00:16:10.422 Warning Temperature Time: 0 minutes 00:16:10.422 Critical Temperature Time: 0 minutes 00:16:10.422 00:16:10.422 Number of Queues 00:16:10.422 ================ 00:16:10.422 Number of I/O Submission Queues: 127 00:16:10.422 Number of I/O Completion Queues: 127 00:16:10.422 00:16:10.422 Active Namespaces 00:16:10.422 ================= 00:16:10.422 Namespace ID:1 00:16:10.422 Error Recovery Timeout: Unlimited 00:16:10.422 Command Set Identifier: NVM (00h) 00:16:10.422 Deallocate: Supported 00:16:10.422 Deallocated/Unwritten Error: Not Supported 00:16:10.422 Deallocated Read Value: Unknown 00:16:10.422 Deallocate in Write Zeroes: Not Supported 00:16:10.422 Deallocated Guard Field: 0xFFFF 00:16:10.422 Flush: Supported 00:16:10.422 Reservation: Supported 00:16:10.422 Namespace Sharing Capabilities: Multiple Controllers 00:16:10.422 Size (in LBAs): 131072 (0GiB) 00:16:10.422 Capacity (in LBAs): 131072 (0GiB) 00:16:10.422 Utilization (in LBAs): 131072 (0GiB) 00:16:10.422 NGUID: E473B2F30345480389A3EC287FE0326E 00:16:10.422 UUID: e473b2f3-0345-4803-89a3-ec287fe0326e 00:16:10.422 Thin Provisioning: Not Supported 00:16:10.422 Per-NS Atomic Units: Yes 00:16:10.422 Atomic Boundary Size (Normal): 0 00:16:10.422 Atomic Boundary Size (PFail): 0 00:16:10.422 Atomic Boundary Offset: 0 00:16:10.422 Maximum Single Source Range Length: 65535 00:16:10.422 Maximum Copy Length: 65535 00:16:10.422 Maximum Source Range Count: 1 00:16:10.422 NGUID/EUI64 Never Reused: No 00:16:10.422 Namespace Write Protected: No 00:16:10.422 Number of LBA Formats: 1 00:16:10.422 Current LBA Format: LBA Format #00 00:16:10.422 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:10.422 00:16:10.422 10:24:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:16:10.422 EAL: No free 2048 kB hugepages reported on node 1 00:16:10.422 [2024-07-14 10:24:55.369569] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:15.696 Initializing NVMe Controllers 00:16:15.696 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:15.696 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:15.696 Initialization complete. Launching workers. 
00:16:15.696 ======================================================== 00:16:15.696 Latency(us) 00:16:15.696 Device Information : IOPS MiB/s Average min max 00:16:15.696 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39928.21 155.97 3205.58 958.84 6785.95 00:16:15.696 ======================================================== 00:16:15.696 Total : 39928.21 155.97 3205.58 958.84 6785.95 00:16:15.696 00:16:15.696 [2024-07-14 10:25:00.479484] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:15.696 10:25:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:15.696 EAL: No free 2048 kB hugepages reported on node 1 00:16:15.955 [2024-07-14 10:25:00.694101] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:21.226 Initializing NVMe Controllers 00:16:21.226 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:21.226 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:21.226 Initialization complete. Launching workers. 00:16:21.226 ======================================================== 00:16:21.226 Latency(us) 00:16:21.226 Device Information : IOPS MiB/s Average min max 00:16:21.226 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39928.36 155.97 3205.34 1001.16 7179.32 00:16:21.226 ======================================================== 00:16:21.226 Total : 39928.36 155.97 3205.34 1001.16 7179.32 00:16:21.226 00:16:21.226 [2024-07-14 10:25:05.713377] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:21.226 10:25:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:21.226 EAL: No free 2048 kB hugepages reported on node 1 00:16:21.226 [2024-07-14 10:25:05.904767] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:26.497 [2024-07-14 10:25:11.033336] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:26.497 Initializing NVMe Controllers 00:16:26.497 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:26.497 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:26.497 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:16:26.497 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:16:26.497 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:16:26.497 Initialization complete. Launching workers. 
00:16:26.497 Starting thread on core 2 00:16:26.497 Starting thread on core 3 00:16:26.497 Starting thread on core 1 00:16:26.497 10:25:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:16:26.497 EAL: No free 2048 kB hugepages reported on node 1 00:16:26.497 [2024-07-14 10:25:11.309620] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:29.786 [2024-07-14 10:25:14.380046] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:29.786 Initializing NVMe Controllers 00:16:29.786 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:29.786 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:29.786 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:16:29.786 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:16:29.786 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:16:29.786 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:16:29.786 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:29.786 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:29.786 Initialization complete. Launching workers. 00:16:29.786 Starting thread on core 1 with urgent priority queue 00:16:29.786 Starting thread on core 2 with urgent priority queue 00:16:29.786 Starting thread on core 3 with urgent priority queue 00:16:29.786 Starting thread on core 0 with urgent priority queue 00:16:29.786 SPDK bdev Controller (SPDK2 ) core 0: 8833.33 IO/s 11.32 secs/100000 ios 00:16:29.786 SPDK bdev Controller (SPDK2 ) core 1: 9308.00 IO/s 10.74 secs/100000 ios 00:16:29.786 SPDK bdev Controller (SPDK2 ) core 2: 9974.33 IO/s 10.03 secs/100000 ios 00:16:29.786 SPDK bdev Controller (SPDK2 ) core 3: 9722.33 IO/s 10.29 secs/100000 ios 00:16:29.786 ======================================================== 00:16:29.786 00:16:29.786 10:25:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:29.786 EAL: No free 2048 kB hugepages reported on node 1 00:16:29.786 [2024-07-14 10:25:14.641834] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:29.786 Initializing NVMe Controllers 00:16:29.786 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:29.786 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:29.786 Namespace ID: 1 size: 0GB 00:16:29.786 Initialization complete. 00:16:29.786 INFO: using host memory buffer for IO 00:16:29.786 Hello world! 
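For reference, every SPDK example binary exercised above (spdk_nvme_perf, reconnect, arbitration, hello_world) attaches to the same controller through the -r transport ID string, which names the vfio-user socket directory and subsystem NQN instead of a PCI address. The following is a minimal sketch of that invocation pattern, limited to commands already shown in this run; SPDK_DIR and TRID are illustrative shell variables, not part of the test scripts.

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
# 4 KiB read workload, queue depth 128, 5 seconds, core mask 0x2 (mirrors the perf run above)
"$SPDK_DIR"/build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
# the same transport ID string also drives the hello_world example shown above
"$SPDK_DIR"/build/examples/hello_world -d 256 -g -r "$TRID"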
00:16:29.786 [2024-07-14 10:25:14.651903] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:29.786 10:25:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:29.786 EAL: No free 2048 kB hugepages reported on node 1 00:16:30.045 [2024-07-14 10:25:14.913128] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:31.427 Initializing NVMe Controllers 00:16:31.427 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:31.427 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:31.427 Initialization complete. Launching workers. 00:16:31.427 submit (in ns) avg, min, max = 5711.4, 3272.2, 3999104.3 00:16:31.427 complete (in ns) avg, min, max = 20691.6, 1773.0, 3999380.0 00:16:31.427 00:16:31.427 Submit histogram 00:16:31.427 ================ 00:16:31.427 Range in us Cumulative Count 00:16:31.427 3.270 - 3.283: 0.0184% ( 3) 00:16:31.427 3.283 - 3.297: 0.0490% ( 5) 00:16:31.427 3.297 - 3.311: 0.2329% ( 30) 00:16:31.427 3.311 - 3.325: 0.5946% ( 59) 00:16:31.427 3.325 - 3.339: 1.0544% ( 75) 00:16:31.427 3.339 - 3.353: 2.4030% ( 220) 00:16:31.427 3.353 - 3.367: 6.3262% ( 640) 00:16:31.427 3.367 - 3.381: 11.7820% ( 890) 00:16:31.427 3.381 - 3.395: 17.9550% ( 1007) 00:16:31.427 3.395 - 3.409: 23.9073% ( 971) 00:16:31.427 3.409 - 3.423: 30.1232% ( 1014) 00:16:31.427 3.423 - 3.437: 34.9905% ( 794) 00:16:31.427 3.437 - 3.450: 40.0785% ( 830) 00:16:31.427 3.450 - 3.464: 45.2277% ( 840) 00:16:31.427 3.464 - 3.478: 49.1878% ( 646) 00:16:31.427 3.478 - 3.492: 52.7677% ( 584) 00:16:31.427 3.492 - 3.506: 58.3768% ( 915) 00:16:31.427 3.506 - 3.520: 65.8309% ( 1216) 00:16:31.427 3.520 - 3.534: 70.3243% ( 733) 00:16:31.427 3.534 - 3.548: 74.9831% ( 760) 00:16:31.427 3.548 - 3.562: 80.1018% ( 835) 00:16:31.427 3.562 - 3.590: 86.2073% ( 996) 00:16:31.427 3.590 - 3.617: 87.7766% ( 256) 00:16:31.427 3.617 - 3.645: 88.4632% ( 112) 00:16:31.427 3.645 - 3.673: 89.8486% ( 226) 00:16:31.427 3.673 - 3.701: 91.4363% ( 259) 00:16:31.427 3.701 - 3.729: 93.1037% ( 272) 00:16:31.427 3.729 - 3.757: 94.6546% ( 253) 00:16:31.427 3.757 - 3.784: 96.3281% ( 273) 00:16:31.427 3.784 - 3.812: 97.6031% ( 208) 00:16:31.427 3.812 - 3.840: 98.4981% ( 146) 00:16:31.427 3.840 - 3.868: 99.0621% ( 92) 00:16:31.427 3.868 - 3.896: 99.3931% ( 54) 00:16:31.427 3.896 - 3.923: 99.5525% ( 26) 00:16:31.427 3.923 - 3.951: 99.6015% ( 8) 00:16:31.427 3.951 - 3.979: 99.6199% ( 3) 00:16:31.427 3.979 - 4.007: 99.6322% ( 2) 00:16:31.427 4.007 - 4.035: 99.6383% ( 1) 00:16:31.427 4.090 - 4.118: 99.6445% ( 1) 00:16:31.427 5.259 - 5.287: 99.6506% ( 1) 00:16:31.427 5.454 - 5.482: 99.6567% ( 1) 00:16:31.427 5.482 - 5.510: 99.6628% ( 1) 00:16:31.427 5.510 - 5.537: 99.6751% ( 2) 00:16:31.427 5.565 - 5.593: 99.6812% ( 1) 00:16:31.427 5.593 - 5.621: 99.6874% ( 1) 00:16:31.427 5.677 - 5.704: 99.6935% ( 1) 00:16:31.427 5.732 - 5.760: 99.6996% ( 1) 00:16:31.427 6.317 - 6.344: 99.7058% ( 1) 00:16:31.427 6.400 - 6.428: 99.7180% ( 2) 00:16:31.427 6.567 - 6.595: 99.7241% ( 1) 00:16:31.427 6.650 - 6.678: 99.7303% ( 1) 00:16:31.427 6.734 - 6.762: 99.7364% ( 1) 00:16:31.427 6.762 - 6.790: 99.7425% ( 1) 00:16:31.427 6.873 - 6.901: 99.7548% ( 2) 00:16:31.427 6.901 - 6.929: 99.7671% ( 2) 00:16:31.427 6.929 - 6.957: 99.7732% ( 1) 00:16:31.427 6.957 - 
6.984: 99.7854% ( 2) 00:16:31.427 7.123 - 7.179: 99.8038% ( 3) 00:16:31.427 7.179 - 7.235: 99.8222% ( 3) 00:16:31.427 7.235 - 7.290: 99.8284% ( 1) 00:16:31.427 7.346 - 7.402: 99.8406% ( 2) 00:16:31.427 7.457 - 7.513: 99.8467% ( 1) 00:16:31.427 7.513 - 7.569: 99.8590% ( 2) 00:16:31.427 7.569 - 7.624: 99.8651% ( 1) 00:16:31.427 7.624 - 7.680: 99.8713% ( 1) 00:16:31.427 7.680 - 7.736: 99.8774% ( 1) 00:16:31.427 7.791 - 7.847: 99.8897% ( 2) 00:16:31.427 7.958 - 8.014: 99.8958% ( 1) 00:16:31.427 8.070 - 8.125: 99.9142% ( 3) 00:16:31.427 8.125 - 8.181: 99.9203% ( 1) 00:16:31.427 8.237 - 8.292: 99.9264% ( 1) 00:16:31.427 9.071 - 9.127: 99.9326% ( 1) 00:16:31.427 9.183 - 9.238: 99.9448% ( 2) 00:16:31.427 3989.148 - 4017.642: 100.0000% ( 9) 00:16:31.427 00:16:31.427 Complete histogram 00:16:31.427 ================== 00:16:31.427 Range in us Cumulative Count 00:16:31.427 1.767 - 1.774: 0.0123% ( 2) 00:16:31.427 1.774 - 1.781: 0.0674% ( 9) 00:16:31.427 1.781 - 1.795: 0.2759% ( 34) 00:16:31.427 1.795 - [2024-07-14 10:25:16.007288] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:31.427 1.809: 0.3433% ( 11) 00:16:31.427 1.809 - 1.823: 0.5149% ( 28) 00:16:31.427 1.823 - 1.837: 17.4217% ( 2758) 00:16:31.427 1.837 - 1.850: 61.2456% ( 7149) 00:16:31.427 1.850 - 1.864: 72.0162% ( 1757) 00:16:31.427 1.864 - 1.878: 77.5026% ( 895) 00:16:31.427 1.878 - 1.892: 86.3115% ( 1437) 00:16:31.427 1.892 - 1.906: 94.0354% ( 1260) 00:16:31.427 1.906 - 1.920: 96.5181% ( 405) 00:16:31.427 1.920 - 1.934: 97.9342% ( 231) 00:16:31.427 1.934 - 1.948: 98.3755% ( 72) 00:16:31.427 1.948 - 1.962: 98.7433% ( 60) 00:16:31.427 1.962 - 1.976: 98.9395% ( 32) 00:16:31.428 1.976 - 1.990: 99.0560% ( 19) 00:16:31.428 1.990 - 2.003: 99.0927% ( 6) 00:16:31.428 2.003 - 2.017: 99.1357% ( 7) 00:16:31.428 2.017 - 2.031: 99.1602% ( 4) 00:16:31.428 2.031 - 2.045: 99.1908% ( 5) 00:16:31.428 2.045 - 2.059: 99.2153% ( 4) 00:16:31.428 2.059 - 2.073: 99.2337% ( 3) 00:16:31.428 2.073 - 2.087: 99.2767% ( 7) 00:16:31.428 2.087 - 2.101: 99.3073% ( 5) 00:16:31.428 2.101 - 2.115: 99.3134% ( 1) 00:16:31.428 2.115 - 2.129: 99.3196% ( 1) 00:16:31.428 2.379 - 2.393: 99.3257% ( 1) 00:16:31.428 3.812 - 3.840: 99.3318% ( 1) 00:16:31.428 3.840 - 3.868: 99.3380% ( 1) 00:16:31.428 4.090 - 4.118: 99.3441% ( 1) 00:16:31.428 4.174 - 4.202: 99.3502% ( 1) 00:16:31.428 4.202 - 4.230: 99.3563% ( 1) 00:16:31.428 4.257 - 4.285: 99.3747% ( 3) 00:16:31.428 4.369 - 4.397: 99.3809% ( 1) 00:16:31.428 4.703 - 4.730: 99.3870% ( 1) 00:16:31.428 4.953 - 4.981: 99.3931% ( 1) 00:16:31.428 5.120 - 5.148: 99.3993% ( 1) 00:16:31.428 5.203 - 5.231: 99.4054% ( 1) 00:16:31.428 5.454 - 5.482: 99.4115% ( 1) 00:16:31.428 5.537 - 5.565: 99.4176% ( 1) 00:16:31.428 5.593 - 5.621: 99.4299% ( 2) 00:16:31.428 5.732 - 5.760: 99.4360% ( 1) 00:16:31.428 5.871 - 5.899: 99.4422% ( 1) 00:16:31.428 5.899 - 5.927: 99.4483% ( 1) 00:16:31.428 6.038 - 6.066: 99.4544% ( 1) 00:16:31.428 6.066 - 6.094: 99.4606% ( 1) 00:16:31.428 6.094 - 6.122: 99.4728% ( 2) 00:16:31.428 6.150 - 6.177: 99.4789% ( 1) 00:16:31.428 6.205 - 6.233: 99.4851% ( 1) 00:16:31.428 6.233 - 6.261: 99.4912% ( 1) 00:16:31.428 6.344 - 6.372: 99.4973% ( 1) 00:16:31.428 6.428 - 6.456: 99.5035% ( 1) 00:16:31.428 6.539 - 6.567: 99.5096% ( 1) 00:16:31.428 6.817 - 6.845: 99.5157% ( 1) 00:16:31.428 6.845 - 6.873: 99.5219% ( 1) 00:16:31.428 7.903 - 7.958: 99.5280% ( 1) 00:16:31.428 3533.245 - 3547.492: 99.5341% ( 1) 00:16:31.428 3989.148 - 4017.642: 100.0000% ( 76) 00:16:31.428 00:16:31.428 
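The overhead histograms above close the performance passes; the aer_vfio_user step that follows starts the aer tool against cnode2, waits for its touch file, and then hot-adds a second namespace so the controller raises a namespace-attribute-changed event (visible below as "aer_cb - Changed Namespace"). A hedged sketch of the RPC side of that flow, restricted to calls that appear in this log; SPDK_DIR and RPC are illustrative shell variables.

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK_DIR"/scripts/rpc.py
# create a fresh 64 MiB malloc bdev and attach it to cnode2 as NSID 2; this is what fires the AEN
"$RPC" bdev_malloc_create 64 512 --name Malloc4
"$RPC" nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2
# dump the subsystem list to confirm the new namespace is present
"$RPC" nvmf_get_subsystems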
10:25:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:16:31.428 10:25:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:31.428 10:25:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:16:31.428 10:25:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:16:31.428 10:25:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:31.428 [ 00:16:31.428 { 00:16:31.428 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:31.428 "subtype": "Discovery", 00:16:31.428 "listen_addresses": [], 00:16:31.428 "allow_any_host": true, 00:16:31.428 "hosts": [] 00:16:31.428 }, 00:16:31.428 { 00:16:31.428 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:31.428 "subtype": "NVMe", 00:16:31.428 "listen_addresses": [ 00:16:31.428 { 00:16:31.428 "trtype": "VFIOUSER", 00:16:31.428 "adrfam": "IPv4", 00:16:31.428 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:31.428 "trsvcid": "0" 00:16:31.428 } 00:16:31.428 ], 00:16:31.428 "allow_any_host": true, 00:16:31.428 "hosts": [], 00:16:31.428 "serial_number": "SPDK1", 00:16:31.428 "model_number": "SPDK bdev Controller", 00:16:31.428 "max_namespaces": 32, 00:16:31.428 "min_cntlid": 1, 00:16:31.428 "max_cntlid": 65519, 00:16:31.428 "namespaces": [ 00:16:31.428 { 00:16:31.428 "nsid": 1, 00:16:31.428 "bdev_name": "Malloc1", 00:16:31.428 "name": "Malloc1", 00:16:31.428 "nguid": "A4B04DDC63EC42F4BCC74EBE053EF838", 00:16:31.428 "uuid": "a4b04ddc-63ec-42f4-bcc7-4ebe053ef838" 00:16:31.428 }, 00:16:31.428 { 00:16:31.428 "nsid": 2, 00:16:31.428 "bdev_name": "Malloc3", 00:16:31.428 "name": "Malloc3", 00:16:31.428 "nguid": "728177F7C31346AD9947622DD7C164E1", 00:16:31.428 "uuid": "728177f7-c313-46ad-9947-622dd7c164e1" 00:16:31.428 } 00:16:31.428 ] 00:16:31.428 }, 00:16:31.428 { 00:16:31.428 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:31.428 "subtype": "NVMe", 00:16:31.428 "listen_addresses": [ 00:16:31.428 { 00:16:31.428 "trtype": "VFIOUSER", 00:16:31.428 "adrfam": "IPv4", 00:16:31.428 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:31.428 "trsvcid": "0" 00:16:31.428 } 00:16:31.428 ], 00:16:31.428 "allow_any_host": true, 00:16:31.428 "hosts": [], 00:16:31.428 "serial_number": "SPDK2", 00:16:31.428 "model_number": "SPDK bdev Controller", 00:16:31.428 "max_namespaces": 32, 00:16:31.428 "min_cntlid": 1, 00:16:31.428 "max_cntlid": 65519, 00:16:31.428 "namespaces": [ 00:16:31.428 { 00:16:31.428 "nsid": 1, 00:16:31.428 "bdev_name": "Malloc2", 00:16:31.428 "name": "Malloc2", 00:16:31.428 "nguid": "E473B2F30345480389A3EC287FE0326E", 00:16:31.428 "uuid": "e473b2f3-0345-4803-89a3-ec287fe0326e" 00:16:31.428 } 00:16:31.428 ] 00:16:31.428 } 00:16:31.428 ] 00:16:31.428 10:25:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:31.428 10:25:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:16:31.428 10:25:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2353312 00:16:31.428 10:25:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile 
/tmp/aer_touch_file 00:16:31.428 10:25:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:16:31.428 10:25:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:31.428 10:25:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:31.428 10:25:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:16:31.428 10:25:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:31.428 10:25:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:16:31.428 EAL: No free 2048 kB hugepages reported on node 1 00:16:31.428 [2024-07-14 10:25:16.374636] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:31.687 Malloc4 00:16:31.687 10:25:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:16:31.687 [2024-07-14 10:25:16.601380] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:31.687 10:25:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:31.687 Asynchronous Event Request test 00:16:31.687 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:31.687 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:31.687 Registering asynchronous event callbacks... 00:16:31.687 Starting namespace attribute notice tests for all controllers... 00:16:31.687 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:31.687 aer_cb - Changed Namespace 00:16:31.687 Cleaning up... 
00:16:31.946 [ 00:16:31.946 { 00:16:31.946 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:31.946 "subtype": "Discovery", 00:16:31.946 "listen_addresses": [], 00:16:31.946 "allow_any_host": true, 00:16:31.946 "hosts": [] 00:16:31.946 }, 00:16:31.946 { 00:16:31.946 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:31.946 "subtype": "NVMe", 00:16:31.946 "listen_addresses": [ 00:16:31.946 { 00:16:31.946 "trtype": "VFIOUSER", 00:16:31.946 "adrfam": "IPv4", 00:16:31.946 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:31.946 "trsvcid": "0" 00:16:31.946 } 00:16:31.946 ], 00:16:31.946 "allow_any_host": true, 00:16:31.946 "hosts": [], 00:16:31.946 "serial_number": "SPDK1", 00:16:31.946 "model_number": "SPDK bdev Controller", 00:16:31.946 "max_namespaces": 32, 00:16:31.946 "min_cntlid": 1, 00:16:31.946 "max_cntlid": 65519, 00:16:31.946 "namespaces": [ 00:16:31.946 { 00:16:31.946 "nsid": 1, 00:16:31.946 "bdev_name": "Malloc1", 00:16:31.946 "name": "Malloc1", 00:16:31.946 "nguid": "A4B04DDC63EC42F4BCC74EBE053EF838", 00:16:31.946 "uuid": "a4b04ddc-63ec-42f4-bcc7-4ebe053ef838" 00:16:31.946 }, 00:16:31.946 { 00:16:31.946 "nsid": 2, 00:16:31.946 "bdev_name": "Malloc3", 00:16:31.946 "name": "Malloc3", 00:16:31.946 "nguid": "728177F7C31346AD9947622DD7C164E1", 00:16:31.946 "uuid": "728177f7-c313-46ad-9947-622dd7c164e1" 00:16:31.946 } 00:16:31.946 ] 00:16:31.946 }, 00:16:31.946 { 00:16:31.946 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:31.946 "subtype": "NVMe", 00:16:31.946 "listen_addresses": [ 00:16:31.946 { 00:16:31.946 "trtype": "VFIOUSER", 00:16:31.946 "adrfam": "IPv4", 00:16:31.946 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:31.946 "trsvcid": "0" 00:16:31.946 } 00:16:31.946 ], 00:16:31.946 "allow_any_host": true, 00:16:31.946 "hosts": [], 00:16:31.946 "serial_number": "SPDK2", 00:16:31.946 "model_number": "SPDK bdev Controller", 00:16:31.946 "max_namespaces": 32, 00:16:31.946 "min_cntlid": 1, 00:16:31.946 "max_cntlid": 65519, 00:16:31.946 "namespaces": [ 00:16:31.946 { 00:16:31.946 "nsid": 1, 00:16:31.946 "bdev_name": "Malloc2", 00:16:31.946 "name": "Malloc2", 00:16:31.946 "nguid": "E473B2F30345480389A3EC287FE0326E", 00:16:31.946 "uuid": "e473b2f3-0345-4803-89a3-ec287fe0326e" 00:16:31.946 }, 00:16:31.946 { 00:16:31.946 "nsid": 2, 00:16:31.946 "bdev_name": "Malloc4", 00:16:31.946 "name": "Malloc4", 00:16:31.946 "nguid": "5136BDC5E26B46BD8A901C213E0F08C1", 00:16:31.946 "uuid": "5136bdc5-e26b-46bd-8a90-1c213e0f08c1" 00:16:31.946 } 00:16:31.946 ] 00:16:31.946 } 00:16:31.946 ] 00:16:31.946 10:25:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2353312 00:16:31.946 10:25:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:16:31.946 10:25:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2345691 00:16:31.946 10:25:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 2345691 ']' 00:16:31.946 10:25:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 2345691 00:16:31.946 10:25:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:16:31.946 10:25:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:31.946 10:25:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2345691 00:16:31.946 10:25:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:31.946 10:25:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo 
']' 00:16:31.946 10:25:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2345691' 00:16:31.946 killing process with pid 2345691 00:16:31.946 10:25:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 2345691 00:16:31.946 10:25:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 2345691 00:16:32.205 10:25:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:32.205 10:25:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:32.205 10:25:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:16:32.205 10:25:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:16:32.205 10:25:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:16:32.205 10:25:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2353546 00:16:32.205 10:25:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2353546' 00:16:32.205 Process pid: 2353546 00:16:32.205 10:25:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:16:32.205 10:25:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:32.205 10:25:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2353546 00:16:32.205 10:25:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 2353546 ']' 00:16:32.205 10:25:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:32.205 10:25:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:32.205 10:25:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:32.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:32.205 10:25:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:32.205 10:25:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:32.205 [2024-07-14 10:25:17.151388] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:16:32.205 [2024-07-14 10:25:17.152242] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:16:32.205 [2024-07-14 10:25:17.152281] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:32.205 EAL: No free 2048 kB hugepages reported on node 1 00:16:32.464 [2024-07-14 10:25:17.217139] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:32.464 [2024-07-14 10:25:17.254054] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:32.464 [2024-07-14 10:25:17.254097] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:32.464 [2024-07-14 10:25:17.254104] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:32.464 [2024-07-14 10:25:17.254110] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:32.464 [2024-07-14 10:25:17.254115] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:32.464 [2024-07-14 10:25:17.254200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:32.464 [2024-07-14 10:25:17.254343] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:32.464 [2024-07-14 10:25:17.254376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:32.464 [2024-07-14 10:25:17.254378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:32.464 [2024-07-14 10:25:17.328223] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:16:32.464 [2024-07-14 10:25:17.328645] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:16:32.464 [2024-07-14 10:25:17.328970] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:16:32.464 [2024-07-14 10:25:17.329433] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:16:32.464 [2024-07-14 10:25:17.329968] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:16:32.464 10:25:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:32.464 10:25:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:16:32.464 10:25:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:33.399 10:25:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:16:33.658 10:25:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:33.658 10:25:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:33.658 10:25:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:33.658 10:25:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:33.658 10:25:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:33.915 Malloc1 00:16:33.915 10:25:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:34.174 10:25:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:34.174 10:25:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:34.476 10:25:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 
$NUM_DEVICES) 00:16:34.476 10:25:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:34.476 10:25:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:34.757 Malloc2 00:16:34.757 10:25:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:34.757 10:25:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:35.016 10:25:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:35.275 10:25:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:16:35.275 10:25:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2353546 00:16:35.275 10:25:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 2353546 ']' 00:16:35.275 10:25:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 2353546 00:16:35.275 10:25:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:16:35.275 10:25:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:35.275 10:25:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2353546 00:16:35.275 10:25:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:35.275 10:25:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:35.275 10:25:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2353546' 00:16:35.275 killing process with pid 2353546 00:16:35.275 10:25:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 2353546 00:16:35.275 10:25:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 2353546 00:16:35.535 10:25:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:35.535 10:25:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:35.535 00:16:35.535 real 0m50.058s 00:16:35.535 user 3m18.092s 00:16:35.535 sys 0m3.379s 00:16:35.535 10:25:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:35.535 10:25:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:35.535 ************************************ 00:16:35.535 END TEST nvmf_vfio_user 00:16:35.535 ************************************ 00:16:35.535 10:25:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:35.535 10:25:20 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:35.535 10:25:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:35.535 10:25:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:35.535 10:25:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:35.535 ************************************ 00:16:35.535 START 
TEST nvmf_vfio_user_nvme_compliance 00:16:35.535 ************************************ 00:16:35.535 10:25:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:35.535 * Looking for test storage... 00:16:35.535 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:16:35.535 10:25:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:35.535 10:25:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:16:35.535 10:25:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:35.535 10:25:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:35.535 10:25:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:35.535 10:25:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:35.535 10:25:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:35.535 10:25:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:35.535 10:25:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:35.535 10:25:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:35.535 10:25:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:35.535 10:25:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:35.535 10:25:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:35.535 10:25:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:35.535 10:25:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:35.535 10:25:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:35.535 10:25:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:35.535 10:25:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:35.535 10:25:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:35.535 10:25:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:35.535 10:25:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:35.535 10:25:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:35.535 10:25:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.535 10:25:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.535 10:25:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.535 10:25:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:16:35.535 10:25:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.535 10:25:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:16:35.535 10:25:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:35.535 10:25:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:35.535 10:25:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:35.535 10:25:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:35.535 10:25:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:35.535 10:25:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:35.535 10:25:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:35.535 10:25:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:35.535 10:25:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:16:35.535 10:25:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:35.535 10:25:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:16:35.535 10:25:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:16:35.535 10:25:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:16:35.535 10:25:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2354090 00:16:35.535 10:25:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2354090' 00:16:35.535 Process pid: 2354090 00:16:35.535 10:25:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:35.535 10:25:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:35.535 10:25:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2354090 00:16:35.535 10:25:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 2354090 ']' 00:16:35.535 10:25:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:35.535 10:25:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:35.535 10:25:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:35.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:35.794 10:25:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:35.794 10:25:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:35.794 [2024-07-14 10:25:20.559465] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:16:35.794 [2024-07-14 10:25:20.559506] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:35.794 EAL: No free 2048 kB hugepages reported on node 1 00:16:35.794 [2024-07-14 10:25:20.626478] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:35.794 [2024-07-14 10:25:20.667765] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:35.794 [2024-07-14 10:25:20.667804] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:35.794 [2024-07-14 10:25:20.667810] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:35.794 [2024-07-14 10:25:20.667817] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:35.795 [2024-07-14 10:25:20.667822] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
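
The trace above launches the target as build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7: -i selects the shared-memory id, -e 0xFFFF enables every tracepoint group (hence the "Tracepoint Group Mask 0xFFFF specified" notice), and -m 0x7 pins the app to three cores, matching the three reactors reported below. A minimal standalone launch along the same lines, assuming an in-tree SPDK build directory, is sketched here:

# Sketch: start an NVMe-oF target on cores 0-2 with all tracepoint groups on.
# The relative binary path is an assumption about the build layout.
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
nvmfpid=$!
# The RPC socket (/var/tmp/spdk.sock by default) only becomes usable once the
# app finishes startup, so wait before issuing any rpc.py calls.
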
00:16:35.795 [2024-07-14 10:25:20.667865] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:35.795 [2024-07-14 10:25:20.667970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:35.795 [2024-07-14 10:25:20.667971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:35.795 10:25:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:35.795 10:25:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:16:35.795 10:25:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:16:37.172 10:25:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:37.172 10:25:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:16:37.172 10:25:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:37.172 10:25:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.172 10:25:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:37.172 10:25:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.172 10:25:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:16:37.172 10:25:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:37.172 10:25:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.172 10:25:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:37.172 malloc0 00:16:37.172 10:25:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.172 10:25:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:16:37.172 10:25:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.172 10:25:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:37.172 10:25:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.172 10:25:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:37.172 10:25:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.172 10:25:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:37.172 10:25:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.172 10:25:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:37.172 10:25:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.172 10:25:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:37.172 10:25:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.172 
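
compliance.sh then provisions the target over JSON-RPC: a VFIOUSER transport, a 64 MiB malloc bdev with 512-byte blocks, the subsystem nqn.2021-09.io.spdk:cnode0 (serial "spdk", any host allowed), its namespace, and a vfio-user listener rooted at /var/run/vfio-user. The same sequence expressed as standalone scripts/rpc.py calls, assuming the default RPC socket, would look roughly like this:

# Sketch of the provisioning RPCs issued by compliance.sh (the rpc.py path and
# the default /var/tmp/spdk.sock socket are assumptions; the arguments are not).
rpc=./scripts/rpc.py
mkdir -p /var/run/vfio-user
$rpc nvmf_create_transport -t VFIOUSER
$rpc bdev_malloc_create 64 512 -b malloc0
$rpc nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
$rpc nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
$rpc nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
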
10:25:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:16:37.172 EAL: No free 2048 kB hugepages reported on node 1 00:16:37.172 00:16:37.172 00:16:37.172 CUnit - A unit testing framework for C - Version 2.1-3 00:16:37.172 http://cunit.sourceforge.net/ 00:16:37.172 00:16:37.172 00:16:37.172 Suite: nvme_compliance 00:16:37.172 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-14 10:25:21.975546] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:37.172 [2024-07-14 10:25:21.976891] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:16:37.172 [2024-07-14 10:25:21.976906] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:16:37.172 [2024-07-14 10:25:21.976912] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:16:37.172 [2024-07-14 10:25:21.978567] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:37.172 passed 00:16:37.172 Test: admin_identify_ctrlr_verify_fused ...[2024-07-14 10:25:22.059106] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:37.172 [2024-07-14 10:25:22.062133] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:37.172 passed 00:16:37.172 Test: admin_identify_ns ...[2024-07-14 10:25:22.141739] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:37.454 [2024-07-14 10:25:22.201247] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:16:37.454 [2024-07-14 10:25:22.209238] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:16:37.454 [2024-07-14 10:25:22.230339] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:37.454 passed 00:16:37.454 Test: admin_get_features_mandatory_features ...[2024-07-14 10:25:22.307734] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:37.454 [2024-07-14 10:25:22.310756] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:37.454 passed 00:16:37.454 Test: admin_get_features_optional_features ...[2024-07-14 10:25:22.390277] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:37.454 [2024-07-14 10:25:22.393292] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:37.454 passed 00:16:37.712 Test: admin_set_features_number_of_queues ...[2024-07-14 10:25:22.472172] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:37.713 [2024-07-14 10:25:22.578320] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:37.713 passed 00:16:37.713 Test: admin_get_log_page_mandatory_logs ...[2024-07-14 10:25:22.652475] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:37.713 [2024-07-14 10:25:22.655496] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:37.713 passed 00:16:37.971 Test: admin_get_log_page_with_lpo ...[2024-07-14 10:25:22.733671] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:37.971 [2024-07-14 10:25:22.805238] 
ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:16:37.971 [2024-07-14 10:25:22.818313] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:37.971 passed 00:16:37.971 Test: fabric_property_get ...[2024-07-14 10:25:22.891493] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:37.971 [2024-07-14 10:25:22.892733] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:16:37.971 [2024-07-14 10:25:22.894510] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:37.971 passed 00:16:38.230 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-14 10:25:22.976016] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:38.230 [2024-07-14 10:25:22.977254] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:16:38.230 [2024-07-14 10:25:22.979032] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:38.230 passed 00:16:38.230 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-14 10:25:23.057767] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:38.230 [2024-07-14 10:25:23.142244] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:38.230 [2024-07-14 10:25:23.158234] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:38.230 [2024-07-14 10:25:23.163314] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:38.230 passed 00:16:38.487 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-14 10:25:23.238496] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:38.487 [2024-07-14 10:25:23.239732] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:16:38.487 [2024-07-14 10:25:23.241516] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:38.487 passed 00:16:38.487 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-14 10:25:23.319446] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:38.487 [2024-07-14 10:25:23.396231] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:38.488 [2024-07-14 10:25:23.420239] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:38.488 [2024-07-14 10:25:23.423273] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:38.488 passed 00:16:38.746 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-14 10:25:23.502292] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:38.746 [2024-07-14 10:25:23.503527] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:16:38.746 [2024-07-14 10:25:23.503550] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:16:38.746 [2024-07-14 10:25:23.505312] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:38.746 passed 00:16:38.746 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-14 10:25:23.583302] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:38.746 [2024-07-14 10:25:23.676233] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:16:38.746 [2024-07-14 10:25:23.684260] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:16:38.746 [2024-07-14 10:25:23.692234] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:16:38.746 [2024-07-14 10:25:23.700237] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:16:39.005 [2024-07-14 10:25:23.729312] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:39.005 passed 00:16:39.005 Test: admin_create_io_sq_verify_pc ...[2024-07-14 10:25:23.805488] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:39.005 [2024-07-14 10:25:23.822239] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:16:39.005 [2024-07-14 10:25:23.839649] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:39.005 passed 00:16:39.005 Test: admin_create_io_qp_max_qps ...[2024-07-14 10:25:23.920184] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:40.384 [2024-07-14 10:25:25.017238] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:16:40.644 [2024-07-14 10:25:25.396626] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:40.644 passed 00:16:40.644 Test: admin_create_io_sq_shared_cq ...[2024-07-14 10:25:25.474679] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:40.644 [2024-07-14 10:25:25.606232] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:40.903 [2024-07-14 10:25:25.643290] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:40.903 passed 00:16:40.903 00:16:40.903 Run Summary: Type Total Ran Passed Failed Inactive 00:16:40.903 suites 1 1 n/a 0 0 00:16:40.903 tests 18 18 18 0 0 00:16:40.903 asserts 360 360 360 0 n/a 00:16:40.903 00:16:40.903 Elapsed time = 1.508 seconds 00:16:40.903 10:25:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2354090 00:16:40.903 10:25:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 2354090 ']' 00:16:40.903 10:25:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 2354090 00:16:40.903 10:25:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:16:40.903 10:25:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:40.903 10:25:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2354090 00:16:40.903 10:25:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:40.903 10:25:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:40.903 10:25:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2354090' 00:16:40.903 killing process with pid 2354090 00:16:40.903 10:25:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 2354090 00:16:40.904 10:25:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 2354090 00:16:41.163 10:25:25 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:16:41.163 10:25:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:16:41.163 00:16:41.163 real 0m5.538s 00:16:41.163 user 0m15.652s 00:16:41.163 sys 0m0.430s 00:16:41.164 10:25:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:41.164 10:25:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:41.164 ************************************ 00:16:41.164 END TEST nvmf_vfio_user_nvme_compliance 00:16:41.164 ************************************ 00:16:41.164 10:25:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:41.164 10:25:25 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:41.164 10:25:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:41.164 10:25:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:41.164 10:25:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:41.164 ************************************ 00:16:41.164 START TEST nvmf_vfio_user_fuzz 00:16:41.164 ************************************ 00:16:41.164 10:25:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:41.164 * Looking for test storage... 00:16:41.164 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:41.164 10:25:26 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:41.164 10:25:26 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:16:41.164 10:25:26 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:41.164 10:25:26 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:41.164 10:25:26 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:41.164 10:25:26 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:41.164 10:25:26 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:41.164 10:25:26 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:41.164 10:25:26 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:41.164 10:25:26 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:41.164 10:25:26 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:41.164 10:25:26 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:41.164 10:25:26 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:41.164 10:25:26 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:41.164 10:25:26 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:41.164 10:25:26 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:41.164 10:25:26 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 
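
The fuzz test sources nvmf/common.sh in the same way, and the host NQN and host ID seen above come from nvme-cli's gen-hostnqn, which prints an NQN of the form nqn.2014-08.org.nvmexpress:uuid:<uuid>. A small sketch of reproducing those two variables by hand (the parameter expansion is one plausible way to peel the UUID off, not necessarily what common.sh does):

# Sketch: derive a host NQN and a matching host ID with nvme-cli.
NVME_HOSTNQN=$(nvme gen-hostnqn)       # e.g. nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-...
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}    # keep only the UUID portion
echo "$NVME_HOSTNQN $NVME_HOSTID"
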
00:16:41.164 10:25:26 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:41.164 10:25:26 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:41.164 10:25:26 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:41.164 10:25:26 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:41.164 10:25:26 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:41.164 10:25:26 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.164 10:25:26 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.164 10:25:26 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.164 10:25:26 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:16:41.164 10:25:26 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.164 10:25:26 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:16:41.164 10:25:26 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:41.164 10:25:26 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:41.164 10:25:26 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:41.164 10:25:26 
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:41.164 10:25:26 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:41.164 10:25:26 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:41.164 10:25:26 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:41.164 10:25:26 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:41.164 10:25:26 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:41.164 10:25:26 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:41.164 10:25:26 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:41.164 10:25:26 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:16:41.164 10:25:26 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:41.164 10:25:26 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:41.164 10:25:26 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:16:41.164 10:25:26 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2355062 00:16:41.164 10:25:26 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2355062' 00:16:41.164 Process pid: 2355062 00:16:41.164 10:25:26 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:41.164 10:25:26 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:41.164 10:25:26 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2355062 00:16:41.164 10:25:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 2355062 ']' 00:16:41.164 10:25:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:41.164 10:25:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:41.164 10:25:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:41.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
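
As with the compliance run, the fuzz script starts its own nvmf_tgt (this time pinned to one core with -m 0x1) and sits in waitforlisten until the application answers on /var/tmp/spdk.sock. A standalone equivalent of that wait, assuming the stock scripts/rpc.py client, is simply to poll a cheap RPC:

# Sketch: block until the target's RPC socket is usable. The retry count and
# interval are arbitrary choices, not what waitforlisten itself uses.
sock=/var/tmp/spdk.sock
for _ in $(seq 1 100); do
    ./scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
done
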
00:16:41.164 10:25:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:41.164 10:25:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:41.423 10:25:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:41.423 10:25:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:16:41.423 10:25:26 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:16:42.801 10:25:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:42.801 10:25:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.801 10:25:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:42.801 10:25:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.801 10:25:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:16:42.801 10:25:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:42.801 10:25:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.801 10:25:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:42.801 malloc0 00:16:42.801 10:25:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.801 10:25:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:16:42.802 10:25:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.802 10:25:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:42.802 10:25:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.802 10:25:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:42.802 10:25:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.802 10:25:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:42.802 10:25:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.802 10:25:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:42.802 10:25:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.802 10:25:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:42.802 10:25:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.802 10:25:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:16:42.802 10:25:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:17:14.896 Fuzzing completed. 
Shutting down the fuzz application 00:17:14.896 00:17:14.896 Dumping successful admin opcodes: 00:17:14.896 8, 9, 10, 24, 00:17:14.896 Dumping successful io opcodes: 00:17:14.896 0, 00:17:14.896 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1038172, total successful commands: 4097, random_seed: 3320799552 00:17:14.896 NS: 0x200003a1ef00 admin qp, Total commands completed: 255542, total successful commands: 2063, random_seed: 975631680 00:17:14.896 10:25:57 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:17:14.896 10:25:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.896 10:25:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:14.896 10:25:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.896 10:25:57 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2355062 00:17:14.896 10:25:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 2355062 ']' 00:17:14.896 10:25:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 2355062 00:17:14.896 10:25:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:17:14.896 10:25:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:14.896 10:25:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2355062 00:17:14.896 10:25:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:14.896 10:25:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:14.896 10:25:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2355062' 00:17:14.896 killing process with pid 2355062 00:17:14.896 10:25:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 2355062 00:17:14.896 10:25:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 2355062 00:17:14.896 10:25:58 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:17:14.896 10:25:58 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:17:14.896 00:17:14.896 real 0m32.132s 00:17:14.896 user 0m30.176s 00:17:14.896 sys 0m31.315s 00:17:14.896 10:25:58 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:14.896 10:25:58 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:14.896 ************************************ 00:17:14.896 END TEST nvmf_vfio_user_fuzz 00:17:14.896 ************************************ 00:17:14.896 10:25:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:14.896 10:25:58 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:17:14.896 10:25:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:14.896 10:25:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:14.896 10:25:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:14.896 ************************************ 
00:17:14.896 START TEST nvmf_host_management 00:17:14.896 ************************************ 00:17:14.896 10:25:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:17:14.896 * Looking for test storage... 00:17:14.896 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:14.896 10:25:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:14.896 10:25:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:17:14.896 10:25:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:14.896 10:25:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:14.896 10:25:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:14.896 10:25:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:14.896 10:25:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:14.896 10:25:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:14.896 10:25:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:14.896 10:25:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:14.896 10:25:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:14.896 10:25:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:14.896 10:25:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:14.897 10:25:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:14.897 10:25:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:14.897 10:25:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:14.897 10:25:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:14.897 10:25:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:14.897 10:25:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:14.897 10:25:58 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:14.897 10:25:58 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:14.897 10:25:58 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:14.897 10:25:58 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.897 
10:25:58 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.897 10:25:58 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.897 10:25:58 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:17:14.897 10:25:58 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.897 10:25:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:17:14.897 10:25:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:14.897 10:25:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:14.897 10:25:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:14.897 10:25:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:14.897 10:25:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:14.897 10:25:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:14.897 10:25:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:14.897 10:25:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:14.897 10:25:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:14.897 10:25:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:14.897 10:25:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:17:14.897 10:25:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:14.897 10:25:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:14.897 10:25:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:14.897 10:25:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:14.897 10:25:58 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:14.897 10:25:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:14.897 10:25:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:14.897 10:25:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:14.897 10:25:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:14.897 10:25:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:14.897 10:25:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:17:14.897 10:25:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:19.122 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:19.122 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:17:19.122 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:19.122 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:19.122 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:19.122 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:19.122 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:19.122 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:17:19.122 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:19.122 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:17:19.122 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:17:19.122 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:17:19.122 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:17:19.122 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:17:19.122 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:17:19.122 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:19.122 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:19.122 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:19.122 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:19.122 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:19.122 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:19.122 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:19.122 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:19.122 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:19.122 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:19.122 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:19.122 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:19.122 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:19.122 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:19.122 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:19.122 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:19.122 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:19.122 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:19.122 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:19.122 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:19.122 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:19.122 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:19.122 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:19.122 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:19.122 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:19.122 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:19.122 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:19.122 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:19.122 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:19.122 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:19.122 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:19.122 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:19.122 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:19.122 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:19.122 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:19.122 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:19.122 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:19.122 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:19.122 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:19.122 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:19.122 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:19.122 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:19.122 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:19.122 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:19.122 Found net devices under 0000:86:00.0: cvl_0_0 00:17:19.122 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:17:19.122 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:19.122 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:19.122 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:19.122 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:19.123 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:19.123 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:19.123 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:19.123 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:19.123 Found net devices under 0000:86:00.1: cvl_0_1 00:17:19.123 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:19.123 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:19.123 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:17:19.123 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:19.123 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:19.123 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:19.123 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:19.123 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:19.123 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:19.123 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:19.123 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:19.123 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:19.123 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:19.123 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:19.123 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:19.123 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:19.123 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:19.123 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:19.123 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:19.123 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:19.123 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:19.123 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:19.123 10:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:19.123 10:26:04 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:19.123 10:26:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:19.123 10:26:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:19.123 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:19.123 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:17:19.123 00:17:19.123 --- 10.0.0.2 ping statistics --- 00:17:19.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:19.123 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:17:19.123 10:26:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:19.123 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:19.123 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:17:19.123 00:17:19.123 --- 10.0.0.1 ping statistics --- 00:17:19.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:19.123 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:17:19.123 10:26:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:19.123 10:26:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:17:19.123 10:26:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:19.123 10:26:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:19.123 10:26:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:19.123 10:26:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:19.123 10:26:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:19.123 10:26:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:19.123 10:26:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:19.123 10:26:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:17:19.123 10:26:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:17:19.383 10:26:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:17:19.383 10:26:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:19.383 10:26:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:19.383 10:26:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:19.383 10:26:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=2363359 00:17:19.383 10:26:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 2363359 00:17:19.383 10:26:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:17:19.383 10:26:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 2363359 ']' 00:17:19.383 10:26:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:19.383 10:26:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:19.383 10:26:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:17:19.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:19.383 10:26:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:19.383 10:26:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:19.383 [2024-07-14 10:26:04.152992] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:17:19.383 [2024-07-14 10:26:04.153033] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:19.383 EAL: No free 2048 kB hugepages reported on node 1 00:17:19.383 [2024-07-14 10:26:04.208677] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:19.383 [2024-07-14 10:26:04.251331] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:19.383 [2024-07-14 10:26:04.251369] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:19.383 [2024-07-14 10:26:04.251376] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:19.383 [2024-07-14 10:26:04.251383] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:19.383 [2024-07-14 10:26:04.251389] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:19.383 [2024-07-14 10:26:04.251445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:19.383 [2024-07-14 10:26:04.251553] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:19.383 [2024-07-14 10:26:04.251662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:19.383 [2024-07-14 10:26:04.251663] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:17:19.383 10:26:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:19.383 10:26:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:17:19.383 10:26:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:19.383 10:26:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:19.383 10:26:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:19.642 10:26:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:19.642 10:26:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:19.642 10:26:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.642 10:26:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:19.642 [2024-07-14 10:26:04.400365] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:19.642 10:26:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.642 10:26:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:17:19.642 10:26:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:19.642 10:26:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:19.642 10:26:04 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:19.642 10:26:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:17:19.642 10:26:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:17:19.642 10:26:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.642 10:26:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:19.642 Malloc0 00:17:19.642 [2024-07-14 10:26:04.460305] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:19.642 10:26:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.642 10:26:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:17:19.642 10:26:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:19.642 10:26:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:19.642 10:26:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2363478 00:17:19.642 10:26:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2363478 /var/tmp/bdevperf.sock 00:17:19.642 10:26:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 2363478 ']' 00:17:19.642 10:26:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:19.642 10:26:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:17:19.642 10:26:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:17:19.642 10:26:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:19.642 10:26:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:19.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
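Note: the rpc_cmd batch above provisions the target from rpcs.txt, whose individual RPCs are not echoed into this log; the transport itself was created a step earlier with nvmf_create_transport -t tcp -o -u 8192. A minimal sketch of an equivalent provisioning sequence, assuming the default rpc.py tool and the names that appear elsewhere in this run (Malloc0, listener 10.0.0.2:4420, nqn.2016-06.io.spdk:cnode0/host0, sizes assumed), would be:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc bdev_malloc_create -b Malloc0 64 512                            # 64 MiB backing bdev, 512-byte blocks (sizes assumed)
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

The actual rpcs.txt may differ in detail; the listener step corresponds to the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice above.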
00:17:19.642 10:26:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:17:19.642 10:26:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:19.642 10:26:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:17:19.642 10:26:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:19.642 10:26:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:19.642 10:26:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:19.642 { 00:17:19.642 "params": { 00:17:19.642 "name": "Nvme$subsystem", 00:17:19.642 "trtype": "$TEST_TRANSPORT", 00:17:19.642 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:19.642 "adrfam": "ipv4", 00:17:19.642 "trsvcid": "$NVMF_PORT", 00:17:19.642 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:19.642 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:19.642 "hdgst": ${hdgst:-false}, 00:17:19.642 "ddgst": ${ddgst:-false} 00:17:19.642 }, 00:17:19.642 "method": "bdev_nvme_attach_controller" 00:17:19.642 } 00:17:19.642 EOF 00:17:19.642 )") 00:17:19.643 10:26:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:17:19.643 10:26:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:17:19.643 10:26:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:17:19.643 10:26:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:19.643 "params": { 00:17:19.643 "name": "Nvme0", 00:17:19.643 "trtype": "tcp", 00:17:19.643 "traddr": "10.0.0.2", 00:17:19.643 "adrfam": "ipv4", 00:17:19.643 "trsvcid": "4420", 00:17:19.643 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:19.643 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:19.643 "hdgst": false, 00:17:19.643 "ddgst": false 00:17:19.643 }, 00:17:19.643 "method": "bdev_nvme_attach_controller" 00:17:19.643 }' 00:17:19.643 [2024-07-14 10:26:04.553484] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:17:19.643 [2024-07-14 10:26:04.553530] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2363478 ] 00:17:19.643 EAL: No free 2048 kB hugepages reported on node 1 00:17:19.643 [2024-07-14 10:26:04.608185] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:19.901 [2024-07-14 10:26:04.649000] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:19.901 Running I/O for 10 seconds... 
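Note: gen_nvmf_target_json only echoes the per-controller fragment above; bdevperf is handed a complete SPDK JSON config on /dev/fd/63. Reconstructed under that assumption (the wrapper and any additional bdev config entries are not shown in the log), the document it consumes looks roughly like this, creating controller Nvme0 and hence the bdev Nvme0n1 that the -q 64 -o 65536 -w verify workload drives:

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }

The waitforio loop in the lines that follow then polls rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1, extracting .bdevs[0].num_read_ops with jq, until at least 100 reads have completed.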
00:17:20.161 10:26:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:20.161 10:26:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:17:20.161 10:26:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:17:20.161 10:26:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.161 10:26:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:20.161 10:26:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.161 10:26:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:20.161 10:26:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:17:20.161 10:26:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:17:20.161 10:26:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:17:20.161 10:26:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:17:20.161 10:26:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:17:20.161 10:26:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:17:20.161 10:26:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:17:20.161 10:26:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:17:20.161 10:26:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:17:20.161 10:26:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.161 10:26:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:20.161 10:26:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.161 10:26:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=79 00:17:20.161 10:26:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 79 -ge 100 ']' 00:17:20.161 10:26:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:17:20.421 10:26:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:17:20.421 10:26:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:17:20.421 10:26:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:17:20.421 10:26:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:17:20.421 10:26:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.421 10:26:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:20.421 10:26:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.421 10:26:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:17:20.421 10:26:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:17:20.421 10:26:05 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@59 -- # ret=0 00:17:20.421 10:26:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:17:20.421 10:26:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:17:20.422 10:26:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:17:20.422 10:26:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.422 10:26:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:20.422 [2024-07-14 10:26:05.251080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.422 [2024-07-14 10:26:05.251119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.422 [2024-07-14 10:26:05.251134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.422 [2024-07-14 10:26:05.251142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.422 [2024-07-14 10:26:05.251156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.422 [2024-07-14 10:26:05.251163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.422 [2024-07-14 10:26:05.251171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.422 [2024-07-14 10:26:05.251178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.422 [2024-07-14 10:26:05.251186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.422 [2024-07-14 10:26:05.251193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.422 [2024-07-14 10:26:05.251201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.422 [2024-07-14 10:26:05.251208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.422 [2024-07-14 10:26:05.251217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.422 [2024-07-14 10:26:05.251229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.422 [2024-07-14 10:26:05.251238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.422 [2024-07-14 10:26:05.251244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.422 [2024-07-14 10:26:05.251252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.422 [2024-07-14 10:26:05.251259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.422 [2024-07-14 10:26:05.251266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.422 [2024-07-14 10:26:05.251273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.422 [2024-07-14 10:26:05.251281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.422 [2024-07-14 10:26:05.251289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.422 [2024-07-14 10:26:05.251297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.422 [2024-07-14 10:26:05.251304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.422 [2024-07-14 10:26:05.251313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.422 [2024-07-14 10:26:05.251319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.422 [2024-07-14 10:26:05.251328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.422 [2024-07-14 10:26:05.251336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.422 [2024-07-14 10:26:05.251345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.422 [2024-07-14 10:26:05.251354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.422 [2024-07-14 10:26:05.251363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.422 [2024-07-14 10:26:05.251370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.422 [2024-07-14 10:26:05.251378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.422 [2024-07-14 10:26:05.251386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.422 [2024-07-14 10:26:05.251395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.422 [2024-07-14 10:26:05.251401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.422 [2024-07-14 10:26:05.251410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.422 [2024-07-14 10:26:05.251417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.422 [2024-07-14 10:26:05.251426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.422 [2024-07-14 10:26:05.251434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.422 [2024-07-14 10:26:05.251442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.422 [2024-07-14 10:26:05.251450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.422 [2024-07-14 10:26:05.251459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.422 [2024-07-14 10:26:05.251467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.422 [2024-07-14 10:26:05.251475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.422 [2024-07-14 10:26:05.251483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.422 [2024-07-14 10:26:05.251492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.422 [2024-07-14 10:26:05.251500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.422 [2024-07-14 10:26:05.251509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.422 [2024-07-14 10:26:05.251517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.422 [2024-07-14 10:26:05.251527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.422 [2024-07-14 10:26:05.251535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.422 [2024-07-14 10:26:05.251546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.422 [2024-07-14 10:26:05.251554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.422 [2024-07-14 10:26:05.251566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.422 [2024-07-14 10:26:05.251574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.422 [2024-07-14 10:26:05.251583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.422 [2024-07-14 10:26:05.251591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.422 [2024-07-14 10:26:05.251600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.422 [2024-07-14 10:26:05.251608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.422 [2024-07-14 10:26:05.251617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.422 [2024-07-14 10:26:05.251625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.422 [2024-07-14 10:26:05.251634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.422 [2024-07-14 10:26:05.251642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.422 [2024-07-14 10:26:05.251651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.422 [2024-07-14 10:26:05.251659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.422 [2024-07-14 10:26:05.251668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.422 [2024-07-14 10:26:05.251675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.422 [2024-07-14 10:26:05.251684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.422 [2024-07-14 10:26:05.251692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.422 [2024-07-14 10:26:05.251701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.422 [2024-07-14 10:26:05.251709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.422 [2024-07-14 10:26:05.251718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.422 [2024-07-14 10:26:05.251726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.422 [2024-07-14 10:26:05.251735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.422 [2024-07-14 10:26:05.251743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.422 [2024-07-14 10:26:05.251752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.422 [2024-07-14 10:26:05.251760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.422 [2024-07-14 10:26:05.251770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.422 [2024-07-14 10:26:05.251778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.422 [2024-07-14 10:26:05.251787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.423 [2024-07-14 10:26:05.251795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.423 [2024-07-14 10:26:05.251804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.423 [2024-07-14 10:26:05.251813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.423 [2024-07-14 10:26:05.251822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.423 [2024-07-14 10:26:05.251829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.423 [2024-07-14 10:26:05.251839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.423 [2024-07-14 10:26:05.251847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.423 [2024-07-14 10:26:05.251856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.423 [2024-07-14 10:26:05.251863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.423 [2024-07-14 10:26:05.251872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.423 [2024-07-14 10:26:05.251879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.423 [2024-07-14 10:26:05.251888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.423 [2024-07-14 10:26:05.251895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.423 [2024-07-14 10:26:05.251904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.423 [2024-07-14 10:26:05.251911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.423 [2024-07-14 10:26:05.251920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 
nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.423 [2024-07-14 10:26:05.251928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.423 [2024-07-14 10:26:05.251936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.423 [2024-07-14 10:26:05.251944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.423 [2024-07-14 10:26:05.251952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.423 [2024-07-14 10:26:05.251960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.423 [2024-07-14 10:26:05.251969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.423 [2024-07-14 10:26:05.251976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.423 [2024-07-14 10:26:05.251987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.423 [2024-07-14 10:26:05.251994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.423 [2024-07-14 10:26:05.252002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.423 [2024-07-14 10:26:05.252010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.423 [2024-07-14 10:26:05.252018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.423 [2024-07-14 10:26:05.252027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.423 [2024-07-14 10:26:05.252036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.423 [2024-07-14 10:26:05.252044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.423 [2024-07-14 10:26:05.252052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.423 [2024-07-14 10:26:05.252059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.423 [2024-07-14 10:26:05.252068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.423 [2024-07-14 10:26:05.252076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.423 [2024-07-14 10:26:05.252085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:100352 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.423 [2024-07-14 10:26:05.252092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.423 [2024-07-14 10:26:05.252101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.423 [2024-07-14 10:26:05.252108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.423 [2024-07-14 10:26:05.252117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.423 [2024-07-14 10:26:05.252124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.423 [2024-07-14 10:26:05.252134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.423 [2024-07-14 10:26:05.252141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.423 [2024-07-14 10:26:05.252149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.423 [2024-07-14 10:26:05.252156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.423 [2024-07-14 10:26:05.252165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.423 [2024-07-14 10:26:05.252172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.423 [2024-07-14 10:26:05.252239] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1fab7f0 was disconnected and freed. reset controller. 
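Note: the burst of ABORTED - SQ DELETION completions above is the expected side effect of the nvmf_subsystem_remove_host call issued while bdevperf still had a full queue outstanding: the target tears down the host's queue pair, the in-flight commands complete with an abort status, and the initiator-side bdev_nvme layer frees the qpair (0x1fab7f0) and schedules a controller reset. The failover cycle the test exercises reduces to this pair of RPCs, sketched here as the rpc.py equivalents of the rpc_cmd calls in the log:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # revoke the host's access while I/O is in flight -> qpair is deleted, outstanding I/O aborts
    $rpc nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    # restore access -> bdev_nvme's reset/reconnect path brings Nvme0n1 back without restarting bdevperf
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0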
00:17:20.423 [2024-07-14 10:26:05.253152] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:20.423 task offset: 101120 on job bdev=Nvme0n1 fails 00:17:20.423 00:17:20.423 Latency(us) 00:17:20.423 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:20.423 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:20.423 Job: Nvme0n1 ended in about 0.41 seconds with error 00:17:20.423 Verification LBA range: start 0x0 length 0x400 00:17:20.423 Nvme0n1 : 0.41 1884.25 117.77 157.02 0.00 30511.00 1609.91 28151.99 00:17:20.423 =================================================================================================================== 00:17:20.423 Total : 1884.25 117.77 157.02 0.00 30511.00 1609.91 28151.99 00:17:20.423 [2024-07-14 10:26:05.254763] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:20.423 [2024-07-14 10:26:05.254781] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b9a2d0 (9): Bad file descriptor 00:17:20.423 10:26:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.423 10:26:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:17:20.423 10:26:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.423 10:26:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:20.423 10:26:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.423 10:26:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:17:20.423 [2024-07-14 10:26:05.306261] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
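Note: the numbers in the short run above are consistent with the forced disconnect: 157.02 Fail/s over the 0.41 s runtime is 157.02 * 0.41 ≈ 64 failed I/Os, i.e. roughly one full queue depth (-q 64) in flight when the qpair was deleted, and 1884.25 IOPS at the 64 KiB I/O size (-o 65536) works out to 1884.25 * 64 KiB / 1024 = 117.77 MiB/s, matching the MiB/s column.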
00:17:21.361 10:26:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2363478 00:17:21.361 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2363478) - No such process 00:17:21.361 10:26:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:17:21.361 10:26:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:17:21.361 10:26:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:17:21.361 10:26:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:17:21.361 10:26:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:17:21.361 10:26:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:17:21.361 10:26:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:21.361 10:26:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:21.361 { 00:17:21.361 "params": { 00:17:21.361 "name": "Nvme$subsystem", 00:17:21.361 "trtype": "$TEST_TRANSPORT", 00:17:21.361 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:21.361 "adrfam": "ipv4", 00:17:21.361 "trsvcid": "$NVMF_PORT", 00:17:21.361 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:21.361 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:21.361 "hdgst": ${hdgst:-false}, 00:17:21.361 "ddgst": ${ddgst:-false} 00:17:21.361 }, 00:17:21.361 "method": "bdev_nvme_attach_controller" 00:17:21.361 } 00:17:21.361 EOF 00:17:21.361 )") 00:17:21.361 10:26:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:17:21.361 10:26:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:17:21.361 10:26:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:17:21.361 10:26:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:21.361 "params": { 00:17:21.361 "name": "Nvme0", 00:17:21.361 "trtype": "tcp", 00:17:21.361 "traddr": "10.0.0.2", 00:17:21.361 "adrfam": "ipv4", 00:17:21.361 "trsvcid": "4420", 00:17:21.361 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:21.361 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:21.361 "hdgst": false, 00:17:21.361 "ddgst": false 00:17:21.361 }, 00:17:21.361 "method": "bdev_nvme_attach_controller" 00:17:21.361 }' 00:17:21.361 [2024-07-14 10:26:06.316900] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:17:21.361 [2024-07-14 10:26:06.316949] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2363865 ] 00:17:21.361 EAL: No free 2048 kB hugepages reported on node 1 00:17:21.620 [2024-07-14 10:26:06.382469] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:21.620 [2024-07-14 10:26:06.420430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:21.879 Running I/O for 1 seconds... 
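Note: the kill -9 of the first bdevperf fails with "No such process" because that process already exited after the failover exercise; the script tolerates this and then removes the /var/tmp/spdk_cpu_lock_001..004 files before starting the next run (the later "Failed to unlink lock fd for core 1, errno: 2" notice at target shutdown is consistent with those files having already been deleted here). Roughly, the cleanup step is:

    kill -9 "$perfpid" || true                   # ESRCH is fine, the workload already exited
    rm -f /var/tmp/spdk_cpu_lock_00{1..4}        # drop leftover CPU core lock files

The second bdevperf invocation above reuses the same generated JSON config but runs a clean 1-second verify pass (-t 1) with no host removal, confirming the target still serves I/O after the failover cycle, as the table that follows shows.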
00:17:22.816 00:17:22.816 Latency(us) 00:17:22.816 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:22.816 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:22.816 Verification LBA range: start 0x0 length 0x400 00:17:22.816 Nvme0n1 : 1.01 1986.40 124.15 0.00 0.00 31585.83 2849.39 27354.16 00:17:22.816 =================================================================================================================== 00:17:22.816 Total : 1986.40 124.15 0.00 0.00 31585.83 2849.39 27354.16 00:17:22.816 10:26:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:17:22.816 10:26:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:17:23.075 10:26:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:17:23.075 10:26:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:23.075 10:26:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:17:23.075 10:26:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:23.075 10:26:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:17:23.075 10:26:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:23.075 10:26:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:17:23.075 10:26:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:23.075 10:26:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:23.075 rmmod nvme_tcp 00:17:23.075 rmmod nvme_fabrics 00:17:23.075 rmmod nvme_keyring 00:17:23.075 10:26:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:23.075 10:26:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:17:23.075 10:26:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:17:23.075 10:26:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 2363359 ']' 00:17:23.075 10:26:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 2363359 00:17:23.075 10:26:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 2363359 ']' 00:17:23.075 10:26:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 2363359 00:17:23.075 10:26:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:17:23.075 10:26:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:23.075 10:26:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2363359 00:17:23.075 10:26:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:23.075 10:26:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:23.075 10:26:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2363359' 00:17:23.075 killing process with pid 2363359 00:17:23.075 10:26:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 2363359 00:17:23.075 10:26:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 2363359 00:17:23.334 [2024-07-14 10:26:08.076902] 
app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:17:23.334 10:26:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:23.334 10:26:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:23.334 10:26:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:23.334 10:26:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:23.334 10:26:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:23.334 10:26:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:23.334 10:26:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:23.334 10:26:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:25.237 10:26:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:25.237 10:26:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:17:25.237 00:17:25.237 real 0m11.965s 00:17:25.237 user 0m19.054s 00:17:25.237 sys 0m5.302s 00:17:25.237 10:26:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:25.237 10:26:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:25.237 ************************************ 00:17:25.237 END TEST nvmf_host_management 00:17:25.237 ************************************ 00:17:25.237 10:26:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:25.237 10:26:10 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:17:25.237 10:26:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:25.237 10:26:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:25.237 10:26:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:25.497 ************************************ 00:17:25.497 START TEST nvmf_lvol 00:17:25.497 ************************************ 00:17:25.497 10:26:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:17:25.497 * Looking for test storage... 
00:17:25.497 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:25.497 10:26:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:25.497 10:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:17:25.497 10:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:25.497 10:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:25.497 10:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:25.497 10:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:25.497 10:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:25.497 10:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:25.497 10:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:25.497 10:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:25.497 10:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:25.497 10:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:25.497 10:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:25.497 10:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:25.497 10:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:25.497 10:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:25.497 10:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:25.497 10:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:25.497 10:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:25.497 10:26:10 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:25.497 10:26:10 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:25.497 10:26:10 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:25.497 10:26:10 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.497 10:26:10 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.497 10:26:10 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.497 10:26:10 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:17:25.497 10:26:10 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.497 10:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:17:25.497 10:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:25.497 10:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:25.497 10:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:25.497 10:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:25.497 10:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:25.497 10:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:25.498 10:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:25.498 10:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:25.498 10:26:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:25.498 10:26:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:25.498 10:26:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:17:25.498 10:26:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:17:25.498 10:26:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:25.498 10:26:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:17:25.498 10:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:25.498 10:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:25.498 10:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:25.498 10:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:25.498 10:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:25.498 10:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:25.498 10:26:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:25.498 10:26:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:25.498 10:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:25.498 10:26:10 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:25.498 10:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:17:25.498 10:26:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:32.071 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:32.071 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:17:32.071 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:32.071 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:32.071 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:32.071 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:32.071 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:32.071 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:17:32.071 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:32.071 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:17:32.071 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:17:32.071 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:17:32.071 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:17:32.071 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:17:32.071 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:17:32.071 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:32.071 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:32.071 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:32.071 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:32.071 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:32.071 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:32.071 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:32.071 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:32.071 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:32.071 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:32.071 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:32.071 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:32.071 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:32.071 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:32.071 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:32.071 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:32.071 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:32.071 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:32.071 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:32.071 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:32.071 10:26:15 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:32.071 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:32.071 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:32.071 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:32.071 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:32.072 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:32.072 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:32.072 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:32.072 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:32.072 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:32.072 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:32.072 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:32.072 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:32.072 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:32.072 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:32.072 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:32.072 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:32.072 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:32.072 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:32.072 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:32.072 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:32.072 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:32.072 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:32.072 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:32.072 Found net devices under 0000:86:00.0: cvl_0_0 00:17:32.072 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:32.072 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:32.072 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:32.072 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:32.072 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:32.072 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:32.072 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:32.072 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:32.072 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:32.072 Found net devices under 0000:86:00.1: cvl_0_1 00:17:32.072 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:32.072 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:32.072 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:17:32.072 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:32.072 
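Note: with the two e810 ports detected again (0000:86:00.0 and 0000:86:00.1, driver ice, net devices cvl_0_0 and cvl_0_1), nvmf_tcp_init in the lines that follow rebuilds the same split topology used for the previous test: one port is moved into a network namespace for the target, the other stays in the default namespace for the initiator. Condensed from the commands below:

    ip netns add cvl_0_0_ns_spdk                                   # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port moves into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator port, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic back in
    ping -c 1 10.0.0.2                                             # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator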
10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:32.072 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:32.072 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:32.072 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:32.072 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:32.072 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:32.072 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:32.072 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:32.072 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:32.072 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:32.072 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:32.072 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:32.072 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:32.072 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:32.072 10:26:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:32.072 10:26:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:32.072 10:26:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:32.072 10:26:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:32.072 10:26:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:32.072 10:26:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:32.072 10:26:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:32.072 10:26:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:32.072 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:32.072 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:17:32.072 00:17:32.072 --- 10.0.0.2 ping statistics --- 00:17:32.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:32.072 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:17:32.072 10:26:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:32.072 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:32.072 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms 00:17:32.072 00:17:32.072 --- 10.0.0.1 ping statistics --- 00:17:32.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:32.072 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:17:32.072 10:26:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:32.072 10:26:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:17:32.072 10:26:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:32.072 10:26:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:32.072 10:26:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:32.072 10:26:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:32.072 10:26:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:32.072 10:26:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:32.072 10:26:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:32.072 10:26:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:17:32.072 10:26:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:32.072 10:26:16 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:32.072 10:26:16 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:32.072 10:26:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=2367608 00:17:32.072 10:26:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:17:32.072 10:26:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 2367608 00:17:32.072 10:26:16 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 2367608 ']' 00:17:32.072 10:26:16 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:32.072 10:26:16 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:32.072 10:26:16 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:32.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:32.072 10:26:16 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:32.072 10:26:16 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:32.072 [2024-07-14 10:26:16.218238] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:17:32.072 [2024-07-14 10:26:16.218282] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:32.072 EAL: No free 2048 kB hugepages reported on node 1 00:17:32.072 [2024-07-14 10:26:16.288190] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:32.072 [2024-07-14 10:26:16.328557] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:32.072 [2024-07-14 10:26:16.328594] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:32.072 [2024-07-14 10:26:16.328601] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:32.072 [2024-07-14 10:26:16.328607] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:32.072 [2024-07-14 10:26:16.328613] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:32.072 [2024-07-14 10:26:16.328672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:32.072 [2024-07-14 10:26:16.328779] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:32.072 [2024-07-14 10:26:16.328779] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:32.072 10:26:17 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:32.072 10:26:17 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:17:32.072 10:26:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:32.072 10:26:17 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:32.072 10:26:17 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:32.331 10:26:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:32.331 10:26:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:32.331 [2024-07-14 10:26:17.219993] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:32.331 10:26:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:32.590 10:26:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:17:32.590 10:26:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:32.850 10:26:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:17:32.850 10:26:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:17:33.109 10:26:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:17:33.109 10:26:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=f579ee2b-34f0-4773-b8f2-6ce99f619816 00:17:33.109 10:26:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f579ee2b-34f0-4773-b8f2-6ce99f619816 lvol 20 00:17:33.368 10:26:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=310d4240-cf38-4f49-aedc-c7df304ddb6a 00:17:33.368 10:26:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:33.627 10:26:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 310d4240-cf38-4f49-aedc-c7df304ddb6a 00:17:33.627 10:26:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
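Before the listener notice below, the trace has built the whole nvmf_lvol test topology over JSON-RPC. A condensed sketch of those target/nvmf_lvol.sh setup steps follows, with rpc.py standing in for the full scripts/rpc.py path and the running nvmf_tgt assumed to be listening on the default /var/tmp/spdk.sock; names and sizes mirror the log.

#!/usr/bin/env bash
# Sketch of the setup RPCs traced above. Assumes nvmf_tgt is already
# running inside the target namespace.
rpc=rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192              # TCP transport
$rpc bdev_malloc_create 64 512                            # -> Malloc0
$rpc bdev_malloc_create 64 512                            # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)            # prints lvstore UUID
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)           # 20 MiB volume
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420

Once the discovery listener is added as well, the trace below launches spdk_nvme_perf against 10.0.0.2:4420 and exercises bdev_lvol_snapshot, bdev_lvol_resize, bdev_lvol_clone and bdev_lvol_inflate while that I/O is running.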
00:17:33.885 [2024-07-14 10:26:18.761187] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:33.885 10:26:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:34.144 10:26:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2368106 00:17:34.144 10:26:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:17:34.144 10:26:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:17:34.144 EAL: No free 2048 kB hugepages reported on node 1 00:17:35.079 10:26:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 310d4240-cf38-4f49-aedc-c7df304ddb6a MY_SNAPSHOT 00:17:35.338 10:26:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=ca0d0e37-ca9b-4665-b619-fa3c711f7550 00:17:35.338 10:26:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 310d4240-cf38-4f49-aedc-c7df304ddb6a 30 00:17:35.597 10:26:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone ca0d0e37-ca9b-4665-b619-fa3c711f7550 MY_CLONE 00:17:35.856 10:26:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=8ef7a0a9-8a9b-4135-8fe7-9d9f08e8406a 00:17:35.856 10:26:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 8ef7a0a9-8a9b-4135-8fe7-9d9f08e8406a 00:17:36.427 10:26:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2368106 00:17:44.617 Initializing NVMe Controllers 00:17:44.617 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:17:44.617 Controller IO queue size 128, less than required. 00:17:44.617 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:44.617 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:17:44.617 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:17:44.617 Initialization complete. Launching workers. 
00:17:44.617 ======================================================== 00:17:44.617 Latency(us) 00:17:44.617 Device Information : IOPS MiB/s Average min max 00:17:44.617 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12180.60 47.58 10510.16 1487.35 49954.13 00:17:44.617 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11980.60 46.80 10683.94 3628.45 44801.61 00:17:44.617 ======================================================== 00:17:44.617 Total : 24161.20 94.38 10596.33 1487.35 49954.13 00:17:44.617 00:17:44.617 10:26:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:44.617 10:26:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 310d4240-cf38-4f49-aedc-c7df304ddb6a 00:17:44.876 10:26:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f579ee2b-34f0-4773-b8f2-6ce99f619816 00:17:45.135 10:26:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:17:45.135 10:26:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:17:45.135 10:26:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:17:45.135 10:26:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:45.135 10:26:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:17:45.135 10:26:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:45.135 10:26:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:17:45.135 10:26:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:45.135 10:26:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:45.135 rmmod nvme_tcp 00:17:45.135 rmmod nvme_fabrics 00:17:45.135 rmmod nvme_keyring 00:17:45.135 10:26:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:45.135 10:26:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:17:45.135 10:26:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:17:45.135 10:26:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 2367608 ']' 00:17:45.135 10:26:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 2367608 00:17:45.135 10:26:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 2367608 ']' 00:17:45.135 10:26:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 2367608 00:17:45.135 10:26:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:17:45.135 10:26:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:45.135 10:26:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2367608 00:17:45.135 10:26:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:45.135 10:26:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:45.135 10:26:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2367608' 00:17:45.135 killing process with pid 2367608 00:17:45.135 10:26:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 2367608 00:17:45.135 10:26:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 2367608 00:17:45.395 10:26:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:45.395 
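The rmmod output and killprocess above, together with the namespace flush that follows in the trace, form the standard cleanup path for the test. A hedged sketch of that order is below; lvol, lvs and nvmfpid are assumed to still hold the values captured during setup, and the kill loop stands in for the killprocess helper used by the harness.

#!/usr/bin/env bash
# Sketch of the teardown order seen in the trace: stop exporting, delete
# the volumes, unload the initiator-side modules, stop the target, then
# remove the test namespace. $lvol, $lvs and $nvmfpid come from setup.
rpc=rpc.py

$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
$rpc bdev_lvol_delete "$lvol"
$rpc bdev_lvol_delete_lvstore -u "$lvs"

sync                                     # flush before pulling the modules
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics

kill "$nvmfpid"                          # stand-in for killprocess()
while kill -0 "$nvmfpid" 2>/dev/null; do sleep 0.5; done

ip netns delete cvl_0_0_ns_spdk          # roughly what _remove_spdk_ns does
ip -4 addr flush cvl_0_1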
10:26:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:45.395 10:26:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:45.395 10:26:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:45.395 10:26:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:45.395 10:26:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:45.395 10:26:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:45.395 10:26:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:47.929 10:26:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:47.929 00:17:47.929 real 0m22.058s 00:17:47.929 user 1m4.393s 00:17:47.929 sys 0m7.118s 00:17:47.929 10:26:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:47.929 10:26:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:47.929 ************************************ 00:17:47.929 END TEST nvmf_lvol 00:17:47.929 ************************************ 00:17:47.929 10:26:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:47.929 10:26:32 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:47.929 10:26:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:47.929 10:26:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:47.929 10:26:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:47.929 ************************************ 00:17:47.929 START TEST nvmf_lvs_grow 00:17:47.929 ************************************ 00:17:47.929 10:26:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:47.929 * Looking for test storage... 
00:17:47.929 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:47.929 10:26:32 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:47.929 10:26:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:17:47.929 10:26:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:47.929 10:26:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:47.929 10:26:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:47.929 10:26:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:47.929 10:26:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:47.929 10:26:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:47.929 10:26:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:47.929 10:26:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:47.929 10:26:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:47.929 10:26:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:47.929 10:26:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:47.929 10:26:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:47.929 10:26:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:47.929 10:26:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:47.929 10:26:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:47.929 10:26:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:47.929 10:26:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:47.929 10:26:32 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:47.929 10:26:32 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:47.929 10:26:32 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:47.929 10:26:32 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.929 10:26:32 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.929 10:26:32 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.929 10:26:32 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:17:47.929 10:26:32 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.929 10:26:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:17:47.929 10:26:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:47.929 10:26:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:47.929 10:26:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:47.929 10:26:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:47.929 10:26:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:47.929 10:26:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:47.929 10:26:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:47.929 10:26:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:47.929 10:26:32 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:47.929 10:26:32 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:47.929 10:26:32 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:17:47.929 10:26:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:47.929 10:26:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:47.929 10:26:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:47.929 10:26:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:47.929 10:26:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:47.929 10:26:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:17:47.929 10:26:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:47.929 10:26:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:47.929 10:26:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:47.929 10:26:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:47.929 10:26:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:17:47.929 10:26:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:53.208 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:53.208 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:53.208 Found net devices under 0000:86:00.0: cvl_0_0 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:53.208 Found net devices under 0000:86:00.1: cvl_0_1 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:53.208 10:26:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:53.208 10:26:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:53.208 10:26:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:53.208 10:26:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:53.208 10:26:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:53.208 10:26:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:53.208 10:26:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:53.208 10:26:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:53.208 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:53.208 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:17:53.208 00:17:53.208 --- 10.0.0.2 ping statistics --- 00:17:53.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:53.208 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:17:53.208 10:26:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:53.208 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:53.208 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:17:53.208 00:17:53.208 --- 10.0.0.1 ping statistics --- 00:17:53.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:53.208 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:17:53.208 10:26:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:53.208 10:26:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:17:53.208 10:26:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:53.208 10:26:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:53.208 10:26:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:53.208 10:26:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:53.208 10:26:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:53.208 10:26:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:53.209 10:26:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:53.468 10:26:38 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:17:53.468 10:26:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:53.468 10:26:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:53.468 10:26:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:53.468 10:26:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=2373245 00:17:53.468 10:26:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:53.468 10:26:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 2373245 00:17:53.468 10:26:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 2373245 ']' 00:17:53.468 10:26:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:53.468 10:26:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:53.468 10:26:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:53.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:53.468 10:26:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:53.469 10:26:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:53.469 [2024-07-14 10:26:38.259425] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:17:53.469 [2024-07-14 10:26:38.259471] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:53.469 EAL: No free 2048 kB hugepages reported on node 1 00:17:53.469 [2024-07-14 10:26:38.331889] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:53.469 [2024-07-14 10:26:38.372170] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:53.469 [2024-07-14 10:26:38.372211] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:53.469 [2024-07-14 10:26:38.372222] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:53.469 [2024-07-14 10:26:38.372233] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:53.469 [2024-07-14 10:26:38.372238] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:53.469 [2024-07-14 10:26:38.372256] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:53.727 10:26:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:53.727 10:26:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:17:53.727 10:26:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:53.727 10:26:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:53.727 10:26:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:53.727 10:26:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:53.727 10:26:38 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:53.727 [2024-07-14 10:26:38.649481] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:53.727 10:26:38 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:17:53.727 10:26:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:17:53.727 10:26:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:53.727 10:26:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:53.727 ************************************ 00:17:53.727 START TEST lvs_grow_clean 00:17:53.727 ************************************ 00:17:53.727 10:26:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:17:53.727 10:26:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:53.727 10:26:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:53.727 10:26:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:53.727 10:26:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:53.727 10:26:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:53.727 10:26:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:53.727 10:26:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:53.727 10:26:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:53.727 10:26:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:53.987 10:26:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:17:53.987 10:26:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:54.246 10:26:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=a20f847a-12ec-4539-b418-bbf8061f50b1 00:17:54.246 10:26:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a20f847a-12ec-4539-b418-bbf8061f50b1 00:17:54.246 10:26:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:54.505 10:26:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:54.505 10:26:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:54.505 10:26:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a20f847a-12ec-4539-b418-bbf8061f50b1 lvol 150 00:17:54.506 10:26:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=d2a0cf26-b939-4a0a-80f1-130a86f5db6c 00:17:54.506 10:26:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:54.506 10:26:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:54.765 [2024-07-14 10:26:39.568935] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:54.765 [2024-07-14 10:26:39.568981] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:54.765 true 00:17:54.765 10:26:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a20f847a-12ec-4539-b418-bbf8061f50b1 00:17:54.765 10:26:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:55.024 10:26:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:55.024 10:26:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:55.024 10:26:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d2a0cf26-b939-4a0a-80f1-130a86f5db6c 00:17:55.284 10:26:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:55.544 [2024-07-14 10:26:40.267052] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:55.544 10:26:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:55.544 10:26:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:55.544 10:26:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2373735 00:17:55.544 10:26:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:55.544 10:26:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2373735 /var/tmp/bdevperf.sock 00:17:55.544 10:26:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 2373735 ']' 00:17:55.544 10:26:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:55.544 10:26:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:55.544 10:26:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:55.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:55.544 10:26:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:55.544 10:26:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:17:55.544 [2024-07-14 10:26:40.480068] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:17:55.544 [2024-07-14 10:26:40.480119] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2373735 ] 00:17:55.544 EAL: No free 2048 kB hugepages reported on node 1 00:17:55.804 [2024-07-14 10:26:40.547332] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.804 [2024-07-14 10:26:40.587422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:55.804 10:26:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:55.804 10:26:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:17:55.804 10:26:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:56.064 Nvme0n1 00:17:56.064 10:26:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:56.324 [ 00:17:56.324 { 00:17:56.324 "name": "Nvme0n1", 00:17:56.324 "aliases": [ 00:17:56.324 "d2a0cf26-b939-4a0a-80f1-130a86f5db6c" 00:17:56.324 ], 00:17:56.324 "product_name": "NVMe disk", 00:17:56.324 "block_size": 4096, 00:17:56.324 "num_blocks": 38912, 00:17:56.324 "uuid": "d2a0cf26-b939-4a0a-80f1-130a86f5db6c", 00:17:56.324 "assigned_rate_limits": { 00:17:56.324 "rw_ios_per_sec": 0, 00:17:56.324 "rw_mbytes_per_sec": 0, 00:17:56.324 "r_mbytes_per_sec": 0, 00:17:56.324 "w_mbytes_per_sec": 0 00:17:56.324 }, 00:17:56.324 "claimed": false, 00:17:56.324 "zoned": false, 00:17:56.324 "supported_io_types": { 00:17:56.324 "read": true, 00:17:56.324 "write": true, 00:17:56.324 "unmap": true, 00:17:56.324 "flush": true, 00:17:56.324 "reset": true, 00:17:56.324 "nvme_admin": true, 00:17:56.324 "nvme_io": true, 00:17:56.324 "nvme_io_md": false, 00:17:56.324 "write_zeroes": true, 00:17:56.324 "zcopy": false, 00:17:56.324 "get_zone_info": false, 00:17:56.324 "zone_management": false, 00:17:56.324 "zone_append": false, 00:17:56.324 "compare": true, 00:17:56.324 "compare_and_write": true, 00:17:56.324 "abort": true, 00:17:56.324 "seek_hole": false, 00:17:56.324 "seek_data": false, 00:17:56.324 "copy": true, 00:17:56.324 "nvme_iov_md": false 00:17:56.324 }, 00:17:56.324 "memory_domains": [ 00:17:56.324 { 00:17:56.324 "dma_device_id": "system", 00:17:56.324 "dma_device_type": 1 00:17:56.324 } 00:17:56.324 ], 00:17:56.324 "driver_specific": { 00:17:56.324 "nvme": [ 00:17:56.324 { 00:17:56.324 "trid": { 00:17:56.324 "trtype": "TCP", 00:17:56.324 "adrfam": "IPv4", 00:17:56.324 "traddr": "10.0.0.2", 00:17:56.324 "trsvcid": "4420", 00:17:56.324 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:56.324 }, 00:17:56.324 "ctrlr_data": { 00:17:56.324 "cntlid": 1, 00:17:56.324 "vendor_id": "0x8086", 00:17:56.324 "model_number": "SPDK bdev Controller", 00:17:56.324 "serial_number": "SPDK0", 00:17:56.324 "firmware_revision": "24.09", 00:17:56.324 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:56.324 "oacs": { 00:17:56.324 "security": 0, 00:17:56.324 "format": 0, 00:17:56.324 "firmware": 0, 00:17:56.324 "ns_manage": 0 00:17:56.324 }, 00:17:56.324 "multi_ctrlr": true, 00:17:56.324 "ana_reporting": false 00:17:56.324 }, 
00:17:56.324 "vs": { 00:17:56.324 "nvme_version": "1.3" 00:17:56.324 }, 00:17:56.324 "ns_data": { 00:17:56.324 "id": 1, 00:17:56.324 "can_share": true 00:17:56.324 } 00:17:56.324 } 00:17:56.324 ], 00:17:56.324 "mp_policy": "active_passive" 00:17:56.324 } 00:17:56.324 } 00:17:56.324 ] 00:17:56.324 10:26:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2373756 00:17:56.324 10:26:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:56.324 10:26:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:56.324 Running I/O for 10 seconds... 00:17:57.264 Latency(us) 00:17:57.264 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:57.264 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:57.264 Nvme0n1 : 1.00 22988.00 89.80 0.00 0.00 0.00 0.00 0.00 00:17:57.264 =================================================================================================================== 00:17:57.264 Total : 22988.00 89.80 0.00 0.00 0.00 0.00 0.00 00:17:57.264 00:17:58.200 10:26:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a20f847a-12ec-4539-b418-bbf8061f50b1 00:17:58.459 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:58.459 Nvme0n1 : 2.00 23159.50 90.47 0.00 0.00 0.00 0.00 0.00 00:17:58.459 =================================================================================================================== 00:17:58.459 Total : 23159.50 90.47 0.00 0.00 0.00 0.00 0.00 00:17:58.459 00:17:58.459 true 00:17:58.459 10:26:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a20f847a-12ec-4539-b418-bbf8061f50b1 00:17:58.459 10:26:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:58.718 10:26:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:58.718 10:26:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:58.718 10:26:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2373756 00:17:59.313 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:59.313 Nvme0n1 : 3.00 23229.00 90.74 0.00 0.00 0.00 0.00 0.00 00:17:59.313 =================================================================================================================== 00:17:59.313 Total : 23229.00 90.74 0.00 0.00 0.00 0.00 0.00 00:17:59.313 00:18:00.251 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:00.251 Nvme0n1 : 4.00 23315.50 91.08 0.00 0.00 0.00 0.00 0.00 00:18:00.251 =================================================================================================================== 00:18:00.251 Total : 23315.50 91.08 0.00 0.00 0.00 0.00 0.00 00:18:00.251 00:18:01.628 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:01.628 Nvme0n1 : 5.00 23349.20 91.21 0.00 0.00 0.00 0.00 0.00 00:18:01.628 =================================================================================================================== 00:18:01.628 
Total : 23349.20 91.21 0.00 0.00 0.00 0.00 0.00 00:18:01.628 00:18:02.654 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:02.654 Nvme0n1 : 6.00 23385.00 91.35 0.00 0.00 0.00 0.00 0.00 00:18:02.654 =================================================================================================================== 00:18:02.654 Total : 23385.00 91.35 0.00 0.00 0.00 0.00 0.00 00:18:02.654 00:18:03.222 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:03.222 Nvme0n1 : 7.00 23412.57 91.46 0.00 0.00 0.00 0.00 0.00 00:18:03.222 =================================================================================================================== 00:18:03.222 Total : 23412.57 91.46 0.00 0.00 0.00 0.00 0.00 00:18:03.222 00:18:04.600 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:04.600 Nvme0n1 : 8.00 23441.12 91.57 0.00 0.00 0.00 0.00 0.00 00:18:04.600 =================================================================================================================== 00:18:04.600 Total : 23441.12 91.57 0.00 0.00 0.00 0.00 0.00 00:18:04.600 00:18:05.537 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:05.537 Nvme0n1 : 9.00 23443.56 91.58 0.00 0.00 0.00 0.00 0.00 00:18:05.537 =================================================================================================================== 00:18:05.537 Total : 23443.56 91.58 0.00 0.00 0.00 0.00 0.00 00:18:05.537 00:18:06.474 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:06.474 Nvme0n1 : 10.00 23461.80 91.65 0.00 0.00 0.00 0.00 0.00 00:18:06.474 =================================================================================================================== 00:18:06.474 Total : 23461.80 91.65 0.00 0.00 0.00 0.00 0.00 00:18:06.474 00:18:06.474 00:18:06.474 Latency(us) 00:18:06.474 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:06.474 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:06.474 Nvme0n1 : 10.00 23462.36 91.65 0.00 0.00 5452.42 1909.09 11910.46 00:18:06.474 =================================================================================================================== 00:18:06.474 Total : 23462.36 91.65 0.00 0.00 5452.42 1909.09 11910.46 00:18:06.474 0 00:18:06.474 10:26:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2373735 00:18:06.474 10:26:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 2373735 ']' 00:18:06.474 10:26:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 2373735 00:18:06.474 10:26:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:18:06.474 10:26:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:06.474 10:26:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2373735 00:18:06.474 10:26:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:06.474 10:26:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:06.474 10:26:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2373735' 00:18:06.474 killing process with pid 2373735 00:18:06.474 10:26:51 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 2373735 00:18:06.474 Received shutdown signal, test time was about 10.000000 seconds 00:18:06.474 00:18:06.474 Latency(us) 00:18:06.474 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:06.474 =================================================================================================================== 00:18:06.474 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:06.474 10:26:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 2373735 00:18:06.474 10:26:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:06.733 10:26:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:06.991 10:26:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a20f847a-12ec-4539-b418-bbf8061f50b1 00:18:06.991 10:26:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:18:07.250 10:26:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:18:07.250 10:26:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:18:07.250 10:26:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:07.250 [2024-07-14 10:26:52.168826] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:18:07.250 10:26:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a20f847a-12ec-4539-b418-bbf8061f50b1 00:18:07.250 10:26:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:18:07.250 10:26:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a20f847a-12ec-4539-b418-bbf8061f50b1 00:18:07.250 10:26:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:07.250 10:26:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:07.250 10:26:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:07.250 10:26:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:07.250 10:26:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:07.250 10:26:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:07.250 10:26:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:07.250 10:26:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:07.250 10:26:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a20f847a-12ec-4539-b418-bbf8061f50b1 00:18:07.509 request: 00:18:07.509 { 00:18:07.509 "uuid": "a20f847a-12ec-4539-b418-bbf8061f50b1", 00:18:07.509 "method": "bdev_lvol_get_lvstores", 00:18:07.509 "req_id": 1 00:18:07.509 } 00:18:07.509 Got JSON-RPC error response 00:18:07.509 response: 00:18:07.509 { 00:18:07.509 "code": -19, 00:18:07.509 "message": "No such device" 00:18:07.509 } 00:18:07.509 10:26:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:18:07.509 10:26:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:07.509 10:26:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:07.509 10:26:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:07.509 10:26:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:07.767 aio_bdev 00:18:07.767 10:26:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev d2a0cf26-b939-4a0a-80f1-130a86f5db6c 00:18:07.767 10:26:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=d2a0cf26-b939-4a0a-80f1-130a86f5db6c 00:18:07.767 10:26:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:07.767 10:26:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:18:07.767 10:26:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:07.767 10:26:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:07.767 10:26:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:07.768 10:26:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d2a0cf26-b939-4a0a-80f1-130a86f5db6c -t 2000 00:18:08.026 [ 00:18:08.026 { 00:18:08.026 "name": "d2a0cf26-b939-4a0a-80f1-130a86f5db6c", 00:18:08.026 "aliases": [ 00:18:08.026 "lvs/lvol" 00:18:08.026 ], 00:18:08.026 "product_name": "Logical Volume", 00:18:08.026 "block_size": 4096, 00:18:08.026 "num_blocks": 38912, 00:18:08.026 "uuid": "d2a0cf26-b939-4a0a-80f1-130a86f5db6c", 00:18:08.026 "assigned_rate_limits": { 00:18:08.026 "rw_ios_per_sec": 0, 00:18:08.026 "rw_mbytes_per_sec": 0, 00:18:08.026 "r_mbytes_per_sec": 0, 00:18:08.026 "w_mbytes_per_sec": 0 00:18:08.026 }, 00:18:08.026 "claimed": false, 00:18:08.026 "zoned": false, 00:18:08.026 "supported_io_types": { 00:18:08.026 "read": true, 00:18:08.026 "write": true, 00:18:08.026 "unmap": true, 00:18:08.026 "flush": false, 00:18:08.026 "reset": true, 00:18:08.026 "nvme_admin": false, 00:18:08.026 "nvme_io": false, 00:18:08.026 
"nvme_io_md": false, 00:18:08.026 "write_zeroes": true, 00:18:08.026 "zcopy": false, 00:18:08.026 "get_zone_info": false, 00:18:08.026 "zone_management": false, 00:18:08.026 "zone_append": false, 00:18:08.026 "compare": false, 00:18:08.026 "compare_and_write": false, 00:18:08.026 "abort": false, 00:18:08.026 "seek_hole": true, 00:18:08.026 "seek_data": true, 00:18:08.026 "copy": false, 00:18:08.026 "nvme_iov_md": false 00:18:08.026 }, 00:18:08.026 "driver_specific": { 00:18:08.026 "lvol": { 00:18:08.026 "lvol_store_uuid": "a20f847a-12ec-4539-b418-bbf8061f50b1", 00:18:08.026 "base_bdev": "aio_bdev", 00:18:08.026 "thin_provision": false, 00:18:08.026 "num_allocated_clusters": 38, 00:18:08.026 "snapshot": false, 00:18:08.026 "clone": false, 00:18:08.026 "esnap_clone": false 00:18:08.026 } 00:18:08.026 } 00:18:08.026 } 00:18:08.026 ] 00:18:08.026 10:26:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:18:08.026 10:26:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a20f847a-12ec-4539-b418-bbf8061f50b1 00:18:08.026 10:26:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:18:08.285 10:26:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:18:08.285 10:26:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a20f847a-12ec-4539-b418-bbf8061f50b1 00:18:08.285 10:26:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:18:08.285 10:26:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:18:08.285 10:26:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d2a0cf26-b939-4a0a-80f1-130a86f5db6c 00:18:08.543 10:26:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a20f847a-12ec-4539-b418-bbf8061f50b1 00:18:08.801 10:26:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:08.801 10:26:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:09.060 00:18:09.060 real 0m15.108s 00:18:09.060 user 0m14.615s 00:18:09.060 sys 0m1.464s 00:18:09.060 10:26:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:09.060 10:26:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:18:09.060 ************************************ 00:18:09.060 END TEST lvs_grow_clean 00:18:09.060 ************************************ 00:18:09.060 10:26:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:18:09.060 10:26:53 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:18:09.060 10:26:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:09.060 10:26:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:18:09.060 10:26:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:18:09.060 ************************************ 00:18:09.060 START TEST lvs_grow_dirty 00:18:09.060 ************************************ 00:18:09.060 10:26:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:18:09.060 10:26:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:18:09.060 10:26:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:18:09.060 10:26:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:18:09.060 10:26:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:18:09.060 10:26:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:18:09.060 10:26:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:18:09.060 10:26:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:09.060 10:26:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:09.060 10:26:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:09.319 10:26:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:18:09.319 10:26:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:18:09.319 10:26:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=d0f2139d-20c7-4569-b7e8-0f5e9ecbf3b4 00:18:09.319 10:26:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d0f2139d-20c7-4569-b7e8-0f5e9ecbf3b4 00:18:09.319 10:26:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:18:09.578 10:26:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:18:09.578 10:26:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:18:09.578 10:26:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d0f2139d-20c7-4569-b7e8-0f5e9ecbf3b4 lvol 150 00:18:09.837 10:26:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=47477f46-e568-4bd4-921b-c3c65f953acf 00:18:09.837 10:26:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:09.837 10:26:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:18:09.837 
[2024-07-14 10:26:54.758872] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:18:09.837 [2024-07-14 10:26:54.758920] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:18:09.837 true 00:18:09.837 10:26:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d0f2139d-20c7-4569-b7e8-0f5e9ecbf3b4 00:18:09.837 10:26:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:18:10.096 10:26:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:18:10.096 10:26:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:18:10.355 10:26:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 47477f46-e568-4bd4-921b-c3c65f953acf 00:18:10.355 10:26:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:10.614 [2024-07-14 10:26:55.424844] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:10.614 10:26:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:10.874 10:26:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:18:10.874 10:26:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2376320 00:18:10.874 10:26:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:10.874 10:26:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2376320 /var/tmp/bdevperf.sock 00:18:10.874 10:26:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 2376320 ']' 00:18:10.874 10:26:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:10.874 10:26:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:10.874 10:26:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:10.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
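bdevperf is launched here with -z, so it idles and exposes its own RPC socket instead of running a job config; the test then attaches the exported namespace and starts I/O over that socket. A minimal sketch of the same pattern, condensed from the surrounding entries (paths shortened to a local checkout, not a verbatim excerpt of this run):
# start bdevperf as an RPC server and drive it remotely
build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0   # connect to the target set up above
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests  # kick off the 10s randwrite run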
00:18:10.874 10:26:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:10.874 10:26:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:10.874 [2024-07-14 10:26:55.627219] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:18:10.874 [2024-07-14 10:26:55.627269] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2376320 ] 00:18:10.874 EAL: No free 2048 kB hugepages reported on node 1 00:18:10.875 [2024-07-14 10:26:55.692239] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.875 [2024-07-14 10:26:55.733010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:10.875 10:26:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:10.875 10:26:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:18:10.875 10:26:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:18:11.442 Nvme0n1 00:18:11.442 10:26:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:18:11.442 [ 00:18:11.442 { 00:18:11.442 "name": "Nvme0n1", 00:18:11.442 "aliases": [ 00:18:11.442 "47477f46-e568-4bd4-921b-c3c65f953acf" 00:18:11.442 ], 00:18:11.442 "product_name": "NVMe disk", 00:18:11.442 "block_size": 4096, 00:18:11.442 "num_blocks": 38912, 00:18:11.442 "uuid": "47477f46-e568-4bd4-921b-c3c65f953acf", 00:18:11.442 "assigned_rate_limits": { 00:18:11.442 "rw_ios_per_sec": 0, 00:18:11.442 "rw_mbytes_per_sec": 0, 00:18:11.442 "r_mbytes_per_sec": 0, 00:18:11.442 "w_mbytes_per_sec": 0 00:18:11.442 }, 00:18:11.442 "claimed": false, 00:18:11.442 "zoned": false, 00:18:11.442 "supported_io_types": { 00:18:11.442 "read": true, 00:18:11.442 "write": true, 00:18:11.442 "unmap": true, 00:18:11.442 "flush": true, 00:18:11.442 "reset": true, 00:18:11.442 "nvme_admin": true, 00:18:11.442 "nvme_io": true, 00:18:11.442 "nvme_io_md": false, 00:18:11.442 "write_zeroes": true, 00:18:11.442 "zcopy": false, 00:18:11.442 "get_zone_info": false, 00:18:11.442 "zone_management": false, 00:18:11.442 "zone_append": false, 00:18:11.442 "compare": true, 00:18:11.442 "compare_and_write": true, 00:18:11.442 "abort": true, 00:18:11.442 "seek_hole": false, 00:18:11.442 "seek_data": false, 00:18:11.442 "copy": true, 00:18:11.442 "nvme_iov_md": false 00:18:11.442 }, 00:18:11.442 "memory_domains": [ 00:18:11.442 { 00:18:11.442 "dma_device_id": "system", 00:18:11.442 "dma_device_type": 1 00:18:11.442 } 00:18:11.442 ], 00:18:11.442 "driver_specific": { 00:18:11.442 "nvme": [ 00:18:11.442 { 00:18:11.442 "trid": { 00:18:11.442 "trtype": "TCP", 00:18:11.442 "adrfam": "IPv4", 00:18:11.442 "traddr": "10.0.0.2", 00:18:11.442 "trsvcid": "4420", 00:18:11.442 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:18:11.442 }, 00:18:11.442 "ctrlr_data": { 00:18:11.442 "cntlid": 1, 00:18:11.442 "vendor_id": "0x8086", 00:18:11.442 "model_number": "SPDK bdev Controller", 00:18:11.442 "serial_number": "SPDK0", 
00:18:11.442 "firmware_revision": "24.09", 00:18:11.442 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:11.442 "oacs": { 00:18:11.442 "security": 0, 00:18:11.442 "format": 0, 00:18:11.442 "firmware": 0, 00:18:11.442 "ns_manage": 0 00:18:11.442 }, 00:18:11.442 "multi_ctrlr": true, 00:18:11.442 "ana_reporting": false 00:18:11.442 }, 00:18:11.442 "vs": { 00:18:11.442 "nvme_version": "1.3" 00:18:11.442 }, 00:18:11.442 "ns_data": { 00:18:11.442 "id": 1, 00:18:11.442 "can_share": true 00:18:11.442 } 00:18:11.442 } 00:18:11.442 ], 00:18:11.442 "mp_policy": "active_passive" 00:18:11.442 } 00:18:11.442 } 00:18:11.442 ] 00:18:11.442 10:26:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:11.442 10:26:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2376334 00:18:11.442 10:26:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:18:11.701 Running I/O for 10 seconds... 00:18:12.638 Latency(us) 00:18:12.638 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:12.638 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:12.638 Nvme0n1 : 1.00 23183.00 90.56 0.00 0.00 0.00 0.00 0.00 00:18:12.638 =================================================================================================================== 00:18:12.638 Total : 23183.00 90.56 0.00 0.00 0.00 0.00 0.00 00:18:12.638 00:18:13.576 10:26:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u d0f2139d-20c7-4569-b7e8-0f5e9ecbf3b4 00:18:13.576 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:13.576 Nvme0n1 : 2.00 23341.00 91.18 0.00 0.00 0.00 0.00 0.00 00:18:13.576 =================================================================================================================== 00:18:13.576 Total : 23341.00 91.18 0.00 0.00 0.00 0.00 0.00 00:18:13.576 00:18:13.576 true 00:18:13.835 10:26:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d0f2139d-20c7-4569-b7e8-0f5e9ecbf3b4 00:18:13.835 10:26:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:18:13.835 10:26:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:18:13.835 10:26:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:18:13.835 10:26:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2376334 00:18:14.772 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:14.772 Nvme0n1 : 3.00 23351.00 91.21 0.00 0.00 0.00 0.00 0.00 00:18:14.772 =================================================================================================================== 00:18:14.772 Total : 23351.00 91.21 0.00 0.00 0.00 0.00 0.00 00:18:14.772 00:18:15.712 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:15.712 Nvme0n1 : 4.00 23323.50 91.11 0.00 0.00 0.00 0.00 0.00 00:18:15.712 =================================================================================================================== 00:18:15.712 Total : 23323.50 91.11 0.00 
0.00 0.00 0.00 0.00 00:18:15.712 00:18:16.650 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:16.650 Nvme0n1 : 5.00 23284.20 90.95 0.00 0.00 0.00 0.00 0.00 00:18:16.650 =================================================================================================================== 00:18:16.650 Total : 23284.20 90.95 0.00 0.00 0.00 0.00 0.00 00:18:16.650 00:18:17.584 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:17.584 Nvme0n1 : 6.00 23291.00 90.98 0.00 0.00 0.00 0.00 0.00 00:18:17.584 =================================================================================================================== 00:18:17.584 Total : 23291.00 90.98 0.00 0.00 0.00 0.00 0.00 00:18:17.584 00:18:18.571 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:18.571 Nvme0n1 : 7.00 23266.00 90.88 0.00 0.00 0.00 0.00 0.00 00:18:18.571 =================================================================================================================== 00:18:18.571 Total : 23266.00 90.88 0.00 0.00 0.00 0.00 0.00 00:18:18.571 00:18:19.504 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:19.504 Nvme0n1 : 8.00 23264.25 90.88 0.00 0.00 0.00 0.00 0.00 00:18:19.504 =================================================================================================================== 00:18:19.504 Total : 23264.25 90.88 0.00 0.00 0.00 0.00 0.00 00:18:19.504 00:18:20.881 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:20.881 Nvme0n1 : 9.00 23261.67 90.87 0.00 0.00 0.00 0.00 0.00 00:18:20.881 =================================================================================================================== 00:18:20.881 Total : 23261.67 90.87 0.00 0.00 0.00 0.00 0.00 00:18:20.881 00:18:21.819 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:21.819 Nvme0n1 : 10.00 23272.60 90.91 0.00 0.00 0.00 0.00 0.00 00:18:21.819 =================================================================================================================== 00:18:21.819 Total : 23272.60 90.91 0.00 0.00 0.00 0.00 0.00 00:18:21.819 00:18:21.819 00:18:21.819 Latency(us) 00:18:21.819 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:21.819 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:21.819 Nvme0n1 : 10.00 23274.14 90.91 0.00 0.00 5496.50 3276.80 10770.70 00:18:21.819 =================================================================================================================== 00:18:21.819 Total : 23274.14 90.91 0.00 0.00 5496.50 3276.80 10770.70 00:18:21.819 0 00:18:21.819 10:27:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2376320 00:18:21.819 10:27:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 2376320 ']' 00:18:21.819 10:27:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 2376320 00:18:21.819 10:27:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:18:21.819 10:27:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:21.819 10:27:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2376320 00:18:21.819 10:27:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:21.819 10:27:06 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:21.819 10:27:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2376320' 00:18:21.819 killing process with pid 2376320 00:18:21.819 10:27:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 2376320 00:18:21.819 Received shutdown signal, test time was about 10.000000 seconds 00:18:21.819 00:18:21.819 Latency(us) 00:18:21.819 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:21.819 =================================================================================================================== 00:18:21.819 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:21.819 10:27:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 2376320 00:18:21.819 10:27:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:22.078 10:27:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:22.338 10:27:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d0f2139d-20c7-4569-b7e8-0f5e9ecbf3b4 00:18:22.338 10:27:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:18:22.338 10:27:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:18:22.338 10:27:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:18:22.338 10:27:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2373245 00:18:22.338 10:27:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2373245 00:18:22.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2373245 Killed "${NVMF_APP[@]}" "$@" 00:18:22.338 10:27:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:18:22.338 10:27:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:18:22.338 10:27:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:22.338 10:27:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:22.338 10:27:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:22.338 10:27:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=2378298 00:18:22.338 10:27:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 2378298 00:18:22.338 10:27:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:22.338 10:27:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 2378298 ']' 00:18:22.338 10:27:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:22.338 10:27:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:18:22.338 10:27:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:22.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:22.338 10:27:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:22.338 10:27:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:22.597 [2024-07-14 10:27:07.346135] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:18:22.597 [2024-07-14 10:27:07.346183] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:22.597 EAL: No free 2048 kB hugepages reported on node 1 00:18:22.597 [2024-07-14 10:27:07.419028] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.597 [2024-07-14 10:27:07.459001] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:22.597 [2024-07-14 10:27:07.459040] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:22.597 [2024-07-14 10:27:07.459047] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:22.597 [2024-07-14 10:27:07.459053] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:22.597 [2024-07-14 10:27:07.459059] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
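The restarted target advertises tracepoint group mask 0xFFFF above and names its shared-memory trace file. If the events from this window are needed, a snapshot can be taken while the app is still up; a minimal sketch, assuming the spdk_trace tool from the same build tree (the binary path is an assumption, the invocation is the one the NOTICE suggests):
# capture the tracepoints advertised by 'nvmf_tgt -i 0 -e 0xFFFF'
build/bin/spdk_trace -s nvmf -i 0 > nvmf_trace.txt   # live snapshot while the target runs
cp /dev/shm/nvmf_trace.0 /tmp/                       # or keep the raw shm file for offline analysis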
00:18:22.597 [2024-07-14 10:27:07.459075] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:22.597 10:27:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:22.597 10:27:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:18:22.597 10:27:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:22.597 10:27:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:22.597 10:27:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:22.597 10:27:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:22.597 10:27:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:22.856 [2024-07-14 10:27:07.734460] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:18:22.856 [2024-07-14 10:27:07.734539] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:18:22.856 [2024-07-14 10:27:07.734563] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:18:22.856 10:27:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:18:22.856 10:27:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 47477f46-e568-4bd4-921b-c3c65f953acf 00:18:22.856 10:27:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=47477f46-e568-4bd4-921b-c3c65f953acf 00:18:22.856 10:27:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:22.856 10:27:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:18:22.856 10:27:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:22.856 10:27:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:22.856 10:27:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:23.115 10:27:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 47477f46-e568-4bd4-921b-c3c65f953acf -t 2000 00:18:23.374 [ 00:18:23.374 { 00:18:23.374 "name": "47477f46-e568-4bd4-921b-c3c65f953acf", 00:18:23.374 "aliases": [ 00:18:23.374 "lvs/lvol" 00:18:23.374 ], 00:18:23.374 "product_name": "Logical Volume", 00:18:23.374 "block_size": 4096, 00:18:23.374 "num_blocks": 38912, 00:18:23.374 "uuid": "47477f46-e568-4bd4-921b-c3c65f953acf", 00:18:23.374 "assigned_rate_limits": { 00:18:23.374 "rw_ios_per_sec": 0, 00:18:23.374 "rw_mbytes_per_sec": 0, 00:18:23.374 "r_mbytes_per_sec": 0, 00:18:23.374 "w_mbytes_per_sec": 0 00:18:23.374 }, 00:18:23.374 "claimed": false, 00:18:23.374 "zoned": false, 00:18:23.374 "supported_io_types": { 00:18:23.374 "read": true, 00:18:23.374 "write": true, 00:18:23.374 "unmap": true, 00:18:23.374 "flush": false, 00:18:23.374 "reset": true, 00:18:23.374 "nvme_admin": false, 00:18:23.374 "nvme_io": false, 00:18:23.374 "nvme_io_md": 
false, 00:18:23.374 "write_zeroes": true, 00:18:23.374 "zcopy": false, 00:18:23.374 "get_zone_info": false, 00:18:23.374 "zone_management": false, 00:18:23.374 "zone_append": false, 00:18:23.374 "compare": false, 00:18:23.374 "compare_and_write": false, 00:18:23.374 "abort": false, 00:18:23.374 "seek_hole": true, 00:18:23.374 "seek_data": true, 00:18:23.374 "copy": false, 00:18:23.374 "nvme_iov_md": false 00:18:23.374 }, 00:18:23.374 "driver_specific": { 00:18:23.374 "lvol": { 00:18:23.374 "lvol_store_uuid": "d0f2139d-20c7-4569-b7e8-0f5e9ecbf3b4", 00:18:23.374 "base_bdev": "aio_bdev", 00:18:23.374 "thin_provision": false, 00:18:23.374 "num_allocated_clusters": 38, 00:18:23.374 "snapshot": false, 00:18:23.374 "clone": false, 00:18:23.374 "esnap_clone": false 00:18:23.374 } 00:18:23.374 } 00:18:23.374 } 00:18:23.374 ] 00:18:23.374 10:27:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:18:23.374 10:27:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d0f2139d-20c7-4569-b7e8-0f5e9ecbf3b4 00:18:23.374 10:27:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:18:23.374 10:27:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:18:23.374 10:27:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d0f2139d-20c7-4569-b7e8-0f5e9ecbf3b4 00:18:23.374 10:27:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:18:23.634 10:27:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:18:23.634 10:27:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:23.634 [2024-07-14 10:27:08.614943] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:18:23.893 10:27:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d0f2139d-20c7-4569-b7e8-0f5e9ecbf3b4 00:18:23.893 10:27:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:18:23.893 10:27:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d0f2139d-20c7-4569-b7e8-0f5e9ecbf3b4 00:18:23.893 10:27:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:23.893 10:27:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:23.893 10:27:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:23.893 10:27:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:23.893 10:27:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
00:18:23.893 10:27:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:23.893 10:27:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:23.893 10:27:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:23.893 10:27:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d0f2139d-20c7-4569-b7e8-0f5e9ecbf3b4 00:18:23.893 request: 00:18:23.893 { 00:18:23.893 "uuid": "d0f2139d-20c7-4569-b7e8-0f5e9ecbf3b4", 00:18:23.893 "method": "bdev_lvol_get_lvstores", 00:18:23.893 "req_id": 1 00:18:23.893 } 00:18:23.893 Got JSON-RPC error response 00:18:23.893 response: 00:18:23.893 { 00:18:23.893 "code": -19, 00:18:23.893 "message": "No such device" 00:18:23.893 } 00:18:23.893 10:27:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:18:23.893 10:27:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:23.893 10:27:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:23.893 10:27:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:23.893 10:27:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:24.152 aio_bdev 00:18:24.152 10:27:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 47477f46-e568-4bd4-921b-c3c65f953acf 00:18:24.152 10:27:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=47477f46-e568-4bd4-921b-c3c65f953acf 00:18:24.152 10:27:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:24.152 10:27:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:18:24.152 10:27:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:24.152 10:27:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:24.152 10:27:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:24.411 10:27:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 47477f46-e568-4bd4-921b-c3c65f953acf -t 2000 00:18:24.411 [ 00:18:24.411 { 00:18:24.411 "name": "47477f46-e568-4bd4-921b-c3c65f953acf", 00:18:24.411 "aliases": [ 00:18:24.411 "lvs/lvol" 00:18:24.411 ], 00:18:24.411 "product_name": "Logical Volume", 00:18:24.411 "block_size": 4096, 00:18:24.411 "num_blocks": 38912, 00:18:24.411 "uuid": "47477f46-e568-4bd4-921b-c3c65f953acf", 00:18:24.411 "assigned_rate_limits": { 00:18:24.411 "rw_ios_per_sec": 0, 00:18:24.411 "rw_mbytes_per_sec": 0, 00:18:24.411 "r_mbytes_per_sec": 0, 00:18:24.411 "w_mbytes_per_sec": 0 00:18:24.411 }, 00:18:24.411 "claimed": false, 00:18:24.411 "zoned": false, 00:18:24.411 "supported_io_types": { 
00:18:24.411 "read": true, 00:18:24.411 "write": true, 00:18:24.411 "unmap": true, 00:18:24.411 "flush": false, 00:18:24.411 "reset": true, 00:18:24.411 "nvme_admin": false, 00:18:24.411 "nvme_io": false, 00:18:24.411 "nvme_io_md": false, 00:18:24.411 "write_zeroes": true, 00:18:24.411 "zcopy": false, 00:18:24.411 "get_zone_info": false, 00:18:24.411 "zone_management": false, 00:18:24.411 "zone_append": false, 00:18:24.411 "compare": false, 00:18:24.411 "compare_and_write": false, 00:18:24.411 "abort": false, 00:18:24.411 "seek_hole": true, 00:18:24.411 "seek_data": true, 00:18:24.411 "copy": false, 00:18:24.411 "nvme_iov_md": false 00:18:24.411 }, 00:18:24.411 "driver_specific": { 00:18:24.411 "lvol": { 00:18:24.411 "lvol_store_uuid": "d0f2139d-20c7-4569-b7e8-0f5e9ecbf3b4", 00:18:24.411 "base_bdev": "aio_bdev", 00:18:24.411 "thin_provision": false, 00:18:24.411 "num_allocated_clusters": 38, 00:18:24.411 "snapshot": false, 00:18:24.411 "clone": false, 00:18:24.411 "esnap_clone": false 00:18:24.411 } 00:18:24.411 } 00:18:24.411 } 00:18:24.411 ] 00:18:24.411 10:27:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:18:24.411 10:27:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d0f2139d-20c7-4569-b7e8-0f5e9ecbf3b4 00:18:24.411 10:27:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:18:24.670 10:27:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:18:24.670 10:27:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d0f2139d-20c7-4569-b7e8-0f5e9ecbf3b4 00:18:24.670 10:27:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:18:24.929 10:27:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:18:24.929 10:27:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 47477f46-e568-4bd4-921b-c3c65f953acf 00:18:24.929 10:27:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d0f2139d-20c7-4569-b7e8-0f5e9ecbf3b4 00:18:25.188 10:27:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:25.447 10:27:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:25.447 00:18:25.447 real 0m16.372s 00:18:25.447 user 0m42.414s 00:18:25.447 sys 0m3.656s 00:18:25.447 10:27:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:25.447 10:27:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:25.447 ************************************ 00:18:25.447 END TEST lvs_grow_dirty 00:18:25.447 ************************************ 00:18:25.447 10:27:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:18:25.447 10:27:10 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
00:18:25.447 10:27:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:18:25.447 10:27:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:18:25.447 10:27:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:18:25.447 10:27:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:25.447 10:27:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:18:25.447 10:27:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:18:25.447 10:27:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:18:25.447 10:27:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:25.447 nvmf_trace.0 00:18:25.447 10:27:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:18:25.447 10:27:10 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:18:25.447 10:27:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:25.447 10:27:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:18:25.447 10:27:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:25.447 10:27:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:18:25.447 10:27:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:25.447 10:27:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:25.447 rmmod nvme_tcp 00:18:25.447 rmmod nvme_fabrics 00:18:25.447 rmmod nvme_keyring 00:18:25.447 10:27:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:25.447 10:27:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:18:25.447 10:27:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:18:25.447 10:27:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 2378298 ']' 00:18:25.447 10:27:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 2378298 00:18:25.447 10:27:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 2378298 ']' 00:18:25.447 10:27:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 2378298 00:18:25.447 10:27:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:18:25.447 10:27:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:25.447 10:27:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2378298 00:18:25.707 10:27:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:25.707 10:27:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:25.707 10:27:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2378298' 00:18:25.707 killing process with pid 2378298 00:18:25.707 10:27:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 2378298 00:18:25.707 10:27:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 2378298 00:18:25.707 10:27:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:25.707 10:27:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:25.707 10:27:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:25.707 
10:27:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:25.707 10:27:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:25.707 10:27:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:25.707 10:27:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:25.707 10:27:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:28.241 10:27:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:28.241 00:18:28.241 real 0m40.305s 00:18:28.241 user 1m2.124s 00:18:28.241 sys 0m9.858s 00:18:28.241 10:27:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:28.241 10:27:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:18:28.241 ************************************ 00:18:28.241 END TEST nvmf_lvs_grow 00:18:28.241 ************************************ 00:18:28.241 10:27:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:28.241 10:27:12 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:18:28.241 10:27:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:28.241 10:27:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:28.241 10:27:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:28.241 ************************************ 00:18:28.241 START TEST nvmf_bdev_io_wait 00:18:28.241 ************************************ 00:18:28.241 10:27:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:18:28.241 * Looking for test storage... 
00:18:28.241 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:28.241 10:27:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:28.241 10:27:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:18:28.241 10:27:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:28.241 10:27:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:28.242 10:27:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:28.242 10:27:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:28.242 10:27:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:28.242 10:27:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:28.242 10:27:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:28.242 10:27:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:28.242 10:27:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:28.242 10:27:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:28.242 10:27:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:28.242 10:27:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:28.242 10:27:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:28.242 10:27:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:28.242 10:27:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:28.242 10:27:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:28.242 10:27:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:28.242 10:27:12 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:28.242 10:27:12 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:28.242 10:27:12 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:28.242 10:27:12 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:28.242 10:27:12 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:28.242 10:27:12 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:28.242 10:27:12 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:18:28.242 10:27:12 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:28.242 10:27:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:18:28.242 10:27:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:28.242 10:27:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:28.242 10:27:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:28.242 10:27:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:28.242 10:27:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:28.242 10:27:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:28.242 10:27:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:28.242 10:27:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:28.242 10:27:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:28.242 10:27:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:28.242 10:27:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:18:28.242 10:27:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:28.242 10:27:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:28.242 10:27:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:28.242 10:27:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:28.242 10:27:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:28.242 10:27:12 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:28.242 10:27:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:28.242 10:27:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:28.242 10:27:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:28.242 10:27:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:28.242 10:27:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:18:28.242 10:27:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:33.525 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:33.525 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:18:33.525 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:33.525 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:33.525 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:33.525 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:33.525 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:33.525 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:18:33.525 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:33.525 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:18:33.525 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:18:33.525 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:18:33.525 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:18:33.525 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:18:33.525 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:18:33.525 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:33.525 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:33.525 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:33.525 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:33.525 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:33.525 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:33.525 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:33.525 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:33.525 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:33.525 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:33.525 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:33.525 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:33.525 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:18:33.525 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:33.525 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:33.525 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:33.525 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:33.525 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:33.525 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:33.525 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:33.525 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:33.525 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:33.525 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:33.525 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:33.525 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:33.525 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:33.525 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:33.525 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:33.525 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:33.525 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:33.525 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:33.525 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:33.525 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:33.525 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:33.525 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:33.525 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:33.525 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:33.525 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:33.525 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:33.525 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:33.525 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:33.525 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:33.525 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:33.525 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:33.526 Found net devices under 0000:86:00.0: cvl_0_0 00:18:33.526 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:33.526 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:33.526 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:33.526 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:18:33.526 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:33.526 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:33.526 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:33.526 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:33.526 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:33.526 Found net devices under 0000:86:00.1: cvl_0_1 00:18:33.526 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:33.526 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:33.526 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:18:33.526 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:33.526 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:33.526 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:33.526 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:33.526 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:33.526 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:33.526 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:33.526 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:33.526 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:33.526 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:33.526 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:33.526 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:33.526 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:33.526 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:33.526 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:33.526 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:33.526 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:33.526 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:33.785 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:33.785 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:33.785 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:33.785 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:33.785 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:33.785 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:33.785 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:18:33.785 00:18:33.785 --- 10.0.0.2 ping statistics --- 00:18:33.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:33.785 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:18:33.785 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:33.785 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:33.785 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:18:33.785 00:18:33.785 --- 10.0.0.1 ping statistics --- 00:18:33.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:33.785 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:18:33.785 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:33.785 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:18:33.785 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:33.785 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:33.785 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:33.785 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:33.785 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:33.785 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:33.785 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:33.785 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:18:33.785 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:33.785 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:33.785 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:33.785 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=2382737 00:18:33.785 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 2382737 00:18:33.785 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:18:33.785 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 2382737 ']' 00:18:33.785 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:33.785 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:33.785 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:33.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:33.785 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:33.785 10:27:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:33.785 [2024-07-14 10:27:18.712439] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
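The trace above shows nvmftestinit wiring the two E810 ports into a point-to-point test topology: the target port cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace with 10.0.0.2/24, the initiator port cvl_0_1 stays in the root namespace with 10.0.0.1/24, an iptables rule admits NVMe/TCP traffic on port 4420, and one ping in each direction confirms the link. A minimal standalone sketch of the same setup, using the interface names from this run:

    # move the target-side port into its own namespace so both ports of one host can talk over the wire
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side (namespace)
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # allow NVMe/TCP into the initiator side
    ping -c 1 10.0.0.2                                                    # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                      # target -> initiator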
00:18:33.785 [2024-07-14 10:27:18.712481] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:33.785 EAL: No free 2048 kB hugepages reported on node 1 00:18:34.044 [2024-07-14 10:27:18.783790] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:34.044 [2024-07-14 10:27:18.826474] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:34.044 [2024-07-14 10:27:18.826513] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:34.044 [2024-07-14 10:27:18.826520] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:34.045 [2024-07-14 10:27:18.826525] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:34.045 [2024-07-14 10:27:18.826530] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:34.045 [2024-07-14 10:27:18.826588] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:34.045 [2024-07-14 10:27:18.826695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:34.045 [2024-07-14 10:27:18.826804] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:34.045 [2024-07-14 10:27:18.826805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:34.614 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:34.614 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:18:34.614 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:34.614 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:34.614 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:34.614 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:34.614 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:18:34.614 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.614 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:34.614 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.614 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:18:34.614 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.614 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:34.873 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.874 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:34.874 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.874 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:34.874 [2024-07-14 10:27:19.636974] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:34.874 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
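Because nvmf_tgt is launched with --wait-for-rpc, nothing is initialized until the test drives it over the default /var/tmp/spdk.sock RPC socket: bdev_set_options -p 5 -c 1 first (a deliberately small bdev_io pool and cache, which is what lets bdevperf exhaust the pool and exercise the io-wait path), then framework_start_init, then nvmf_create_transport -t tcp -o -u 8192. rpc_cmd wraps SPDK's scripts/rpc.py, so a rough by-hand equivalent of this bring-up, sketched with relative paths, would be:

    # start the target on cores 0-3 inside the namespace, deferring init until RPCs arrive
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
    ./scripts/rpc.py bdev_set_options -p 5 -c 1                 # small bdev_io pool/cache (values from this test)
    ./scripts/rpc.py framework_start_init                       # finish the deferred subsystem initialization
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192    # TCP transport with the options traced above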
00:18:34.874 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:34.874 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.874 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:34.874 Malloc0 00:18:34.874 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.874 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:34.874 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.874 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:34.874 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.874 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:34.874 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.874 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:34.874 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.874 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:34.874 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.874 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:34.874 [2024-07-14 10:27:19.692988] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:34.874 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.874 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2382982 00:18:34.874 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:18:34.874 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:18:34.874 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2382984 00:18:34.874 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:18:34.874 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:34.874 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:34.874 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:34.874 { 00:18:34.874 "params": { 00:18:34.874 "name": "Nvme$subsystem", 00:18:34.874 "trtype": "$TEST_TRANSPORT", 00:18:34.874 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:34.874 "adrfam": "ipv4", 00:18:34.874 "trsvcid": "$NVMF_PORT", 00:18:34.874 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:34.874 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:34.874 "hdgst": ${hdgst:-false}, 00:18:34.874 "ddgst": ${ddgst:-false} 00:18:34.874 }, 00:18:34.874 "method": "bdev_nvme_attach_controller" 00:18:34.874 } 00:18:34.874 EOF 00:18:34.874 )") 00:18:34.874 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:18:34.874 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:18:34.874 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2382986 00:18:34.874 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:18:34.874 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:34.874 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:34.874 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:34.874 { 00:18:34.874 "params": { 00:18:34.874 "name": "Nvme$subsystem", 00:18:34.874 "trtype": "$TEST_TRANSPORT", 00:18:34.874 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:34.874 "adrfam": "ipv4", 00:18:34.874 "trsvcid": "$NVMF_PORT", 00:18:34.874 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:34.874 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:34.874 "hdgst": ${hdgst:-false}, 00:18:34.874 "ddgst": ${ddgst:-false} 00:18:34.874 }, 00:18:34.874 "method": "bdev_nvme_attach_controller" 00:18:34.874 } 00:18:34.874 EOF 00:18:34.874 )") 00:18:34.874 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:18:34.874 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2382989 00:18:34.874 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:18:34.874 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:34.874 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:18:34.874 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:18:34.874 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:34.874 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:34.874 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:18:34.874 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:18:34.874 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:34.874 { 00:18:34.874 "params": { 00:18:34.874 "name": "Nvme$subsystem", 00:18:34.874 "trtype": "$TEST_TRANSPORT", 00:18:34.874 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:34.874 "adrfam": "ipv4", 00:18:34.874 "trsvcid": "$NVMF_PORT", 00:18:34.874 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:34.874 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:34.874 "hdgst": ${hdgst:-false}, 00:18:34.874 "ddgst": ${ddgst:-false} 00:18:34.874 }, 00:18:34.874 "method": "bdev_nvme_attach_controller" 00:18:34.874 } 00:18:34.874 EOF 00:18:34.874 )") 00:18:34.874 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:18:34.874 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:34.874 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:34.874 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:34.874 10:27:19 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:34.874 { 00:18:34.874 "params": { 00:18:34.874 "name": "Nvme$subsystem", 00:18:34.874 "trtype": "$TEST_TRANSPORT", 00:18:34.874 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:34.874 "adrfam": "ipv4", 00:18:34.874 "trsvcid": "$NVMF_PORT", 00:18:34.874 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:34.874 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:34.874 "hdgst": ${hdgst:-false}, 00:18:34.874 "ddgst": ${ddgst:-false} 00:18:34.874 }, 00:18:34.874 "method": "bdev_nvme_attach_controller" 00:18:34.874 } 00:18:34.874 EOF 00:18:34.874 )") 00:18:34.874 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:34.874 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2382982 00:18:34.874 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:34.874 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:18:34.874 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:18:34.874 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:18:34.874 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:18:34.874 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:34.874 "params": { 00:18:34.874 "name": "Nvme1", 00:18:34.874 "trtype": "tcp", 00:18:34.874 "traddr": "10.0.0.2", 00:18:34.874 "adrfam": "ipv4", 00:18:34.874 "trsvcid": "4420", 00:18:34.874 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:34.874 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:34.874 "hdgst": false, 00:18:34.874 "ddgst": false 00:18:34.874 }, 00:18:34.874 "method": "bdev_nvme_attach_controller" 00:18:34.874 }' 00:18:34.874 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:18:34.874 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:18:34.874 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:34.874 "params": { 00:18:34.874 "name": "Nvme1", 00:18:34.874 "trtype": "tcp", 00:18:34.874 "traddr": "10.0.0.2", 00:18:34.874 "adrfam": "ipv4", 00:18:34.874 "trsvcid": "4420", 00:18:34.874 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:34.874 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:34.874 "hdgst": false, 00:18:34.874 "ddgst": false 00:18:34.874 }, 00:18:34.874 "method": "bdev_nvme_attach_controller" 00:18:34.874 }' 00:18:34.874 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:18:34.874 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:34.874 "params": { 00:18:34.874 "name": "Nvme1", 00:18:34.874 "trtype": "tcp", 00:18:34.874 "traddr": "10.0.0.2", 00:18:34.874 "adrfam": "ipv4", 00:18:34.874 "trsvcid": "4420", 00:18:34.874 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:34.874 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:34.874 "hdgst": false, 00:18:34.874 "ddgst": false 00:18:34.874 }, 00:18:34.874 "method": "bdev_nvme_attach_controller" 00:18:34.874 }' 00:18:34.874 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:18:34.874 10:27:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:34.874 "params": { 00:18:34.874 "name": "Nvme1", 00:18:34.874 "trtype": "tcp", 00:18:34.874 "traddr": "10.0.0.2", 00:18:34.874 "adrfam": "ipv4", 00:18:34.874 "trsvcid": "4420", 00:18:34.874 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:34.874 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:34.875 "hdgst": false, 00:18:34.875 "ddgst": false 00:18:34.875 }, 00:18:34.875 "method": "bdev_nvme_attach_controller" 00:18:34.875 }' 00:18:34.875 [2024-07-14 10:27:19.743794] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:18:34.875 [2024-07-14 10:27:19.743844] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:18:34.875 [2024-07-14 10:27:19.745280] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:18:34.875 [2024-07-14 10:27:19.745323] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:18:34.875 [2024-07-14 10:27:19.746183] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:18:34.875 [2024-07-14 10:27:19.746239] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:18:34.875 [2024-07-14 10:27:19.746606] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
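By this point the target has been populated entirely over RPC: a 64 MB malloc bdev with 512-byte blocks (Malloc0) is exposed as a namespace of nqn.2016-06.io.spdk:cnode1, which listens on 10.0.0.2:4420; the four bdevperf initiators (write/read/flush/unmap on core masks 0x10/0x20/0x40/0x80) are each handed one of the JSON documents printed above on /dev/fd/63, whose single bdev_nvme_attach_controller entry points them at that subsystem. A condensed sketch of the target-side RPCs plus one of the four launches, using the values from this run:

    # target side: back the subsystem with a malloc bdev and listen on the namespaced address
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # initiator side: 128 outstanding 4 KiB writes for 1 second on core 4, configured from the generated JSON
    ./build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256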
00:18:34.875 [2024-07-14 10:27:19.746644] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:18:34.875 EAL: No free 2048 kB hugepages reported on node 1 00:18:35.133 EAL: No free 2048 kB hugepages reported on node 1 00:18:35.133 [2024-07-14 10:27:19.924169] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:35.133 [2024-07-14 10:27:19.951870] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:18:35.133 EAL: No free 2048 kB hugepages reported on node 1 00:18:35.134 [2024-07-14 10:27:20.016324] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:35.134 [2024-07-14 10:27:20.043383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:18:35.134 EAL: No free 2048 kB hugepages reported on node 1 00:18:35.134 [2024-07-14 10:27:20.110280] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:35.392 [2024-07-14 10:27:20.138003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:18:35.392 [2024-07-14 10:27:20.214826] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:35.392 [2024-07-14 10:27:20.247149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:18:35.651 Running I/O for 1 seconds... 00:18:35.651 Running I/O for 1 seconds... 00:18:35.651 Running I/O for 1 seconds... 00:18:35.651 Running I/O for 1 seconds... 00:18:36.646 00:18:36.647 Latency(us) 00:18:36.647 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:36.647 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:18:36.647 Nvme1n1 : 1.01 12445.53 48.62 0.00 0.00 10246.15 6268.66 17780.20 00:18:36.647 =================================================================================================================== 00:18:36.647 Total : 12445.53 48.62 0.00 0.00 10246.15 6268.66 17780.20 00:18:36.647 00:18:36.647 Latency(us) 00:18:36.647 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:36.647 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:18:36.647 Nvme1n1 : 1.00 246401.66 962.51 0.00 0.00 517.09 211.92 648.24 00:18:36.647 =================================================================================================================== 00:18:36.647 Total : 246401.66 962.51 0.00 0.00 517.09 211.92 648.24 00:18:36.647 00:18:36.647 Latency(us) 00:18:36.647 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:36.647 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:18:36.647 Nvme1n1 : 1.01 11251.43 43.95 0.00 0.00 11341.00 1759.50 14360.93 00:18:36.647 =================================================================================================================== 00:18:36.647 Total : 11251.43 43.95 0.00 0.00 11341.00 1759.50 14360.93 00:18:36.647 00:18:36.647 Latency(us) 00:18:36.647 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:36.647 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:18:36.647 Nvme1n1 : 1.01 9939.98 38.83 0.00 0.00 12839.63 5128.90 25644.52 00:18:36.647 =================================================================================================================== 00:18:36.647 Total : 9939.98 38.83 0.00 0.00 12839.63 5128.90 25644.52 00:18:36.906 10:27:21 nvmf_tcp.nvmf_bdev_io_wait -- 
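The four result tables are internally consistent and can be read off directly: MiB/s is IOPS times the 4 KiB I/O size, and with 128 I/Os kept outstanding the average latency follows Little's law (latency ≈ queue depth / IOPS). A quick check against the numbers above:

    # throughput: IOPS x 4096 B / 2^20 = MiB/s
    #   read : 12445.53  x 4096 / 1048576 ≈ 48.62  MiB/s   (matches the MiB/s column)
    #   flush: 246401.66 x 4096 / 1048576 ≈ 962.51 MiB/s   (flushes on a malloc bdev complete almost immediately)
    # latency: queue depth / IOPS ≈ average latency
    #   read : 128 / 12445.53  ≈ 10.3 ms   vs the reported 10246.15 us
    #   write: 128 / 9939.98   ≈ 12.9 ms   vs the reported 12839.63 us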
target/bdev_io_wait.sh@38 -- # wait 2382984 00:18:36.906 10:27:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2382986 00:18:36.906 10:27:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2382989 00:18:36.906 10:27:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:36.906 10:27:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.906 10:27:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:36.906 10:27:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.906 10:27:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:18:36.906 10:27:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:18:36.906 10:27:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:36.906 10:27:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:18:36.906 10:27:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:36.906 10:27:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:18:36.906 10:27:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:36.906 10:27:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:36.906 rmmod nvme_tcp 00:18:36.906 rmmod nvme_fabrics 00:18:36.906 rmmod nvme_keyring 00:18:36.906 10:27:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:36.906 10:27:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:18:36.906 10:27:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:18:36.906 10:27:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 2382737 ']' 00:18:36.906 10:27:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 2382737 00:18:36.906 10:27:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 2382737 ']' 00:18:36.906 10:27:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 2382737 00:18:36.906 10:27:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:18:36.906 10:27:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:36.906 10:27:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2382737 00:18:37.164 10:27:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:37.164 10:27:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:37.164 10:27:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2382737' 00:18:37.164 killing process with pid 2382737 00:18:37.164 10:27:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 2382737 00:18:37.164 10:27:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 2382737 00:18:37.164 10:27:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:37.164 10:27:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:37.164 10:27:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:37.164 10:27:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:37.164 10:27:22 nvmf_tcp.nvmf_bdev_io_wait -- 
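Teardown then runs in roughly the reverse order of setup: the bdevperf PIDs are reaped, the subsystem is deleted over RPC, and nvmftestfini unloads the NVMe/TCP host modules, kills the target process (reactor_0, pid 2382737), and strips the test addresses; the namespace itself is removed inside _remove_spdk_ns. A short sketch mirroring the trace here and just below:

    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    modprobe -v -r nvme-tcp            # also drops nvme_fabrics and nvme_keyring, as logged above
    kill 2382737                       # stop the nvmf_tgt started for this test
    ip -4 addr flush cvl_0_1           # strip the initiator-side test address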
nvmf/common.sh@278 -- # remove_spdk_ns 00:18:37.164 10:27:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:37.164 10:27:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:37.164 10:27:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:39.698 10:27:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:39.698 00:18:39.698 real 0m11.402s 00:18:39.698 user 0m19.866s 00:18:39.698 sys 0m6.162s 00:18:39.698 10:27:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:39.698 10:27:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:39.698 ************************************ 00:18:39.698 END TEST nvmf_bdev_io_wait 00:18:39.698 ************************************ 00:18:39.698 10:27:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:39.698 10:27:24 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:18:39.698 10:27:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:39.698 10:27:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:39.698 10:27:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:39.698 ************************************ 00:18:39.698 START TEST nvmf_queue_depth 00:18:39.698 ************************************ 00:18:39.698 10:27:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:18:39.698 * Looking for test storage... 
00:18:39.698 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:39.698 10:27:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:39.698 10:27:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:18:39.698 10:27:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:39.698 10:27:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:39.698 10:27:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:39.698 10:27:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:39.698 10:27:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:39.698 10:27:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:39.698 10:27:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:39.698 10:27:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:39.698 10:27:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:39.698 10:27:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:39.698 10:27:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:39.698 10:27:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:39.698 10:27:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:39.698 10:27:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:39.698 10:27:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:39.698 10:27:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:39.698 10:27:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:39.698 10:27:24 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:39.698 10:27:24 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:39.698 10:27:24 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:39.698 10:27:24 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.698 10:27:24 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.698 10:27:24 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.698 10:27:24 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:18:39.698 10:27:24 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.698 10:27:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:18:39.698 10:27:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:39.698 10:27:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:39.698 10:27:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:39.698 10:27:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:39.698 10:27:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:39.698 10:27:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:39.698 10:27:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:39.698 10:27:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:39.698 10:27:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:18:39.698 10:27:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:18:39.698 10:27:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:39.698 10:27:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:18:39.698 10:27:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:39.698 10:27:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:39.698 10:27:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:39.698 10:27:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:39.698 10:27:24 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:18:39.698 10:27:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:39.698 10:27:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:39.698 10:27:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:39.698 10:27:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:39.698 10:27:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:39.699 10:27:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:18:39.699 10:27:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:44.975 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:44.975 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:18:44.975 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:44.975 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:44.975 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:44.975 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:44.975 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:44.975 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:18:44.975 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:44.975 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:18:44.975 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:18:44.975 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:18:44.975 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:18:44.975 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:18:44.975 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:18:44.975 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:44.975 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:44.975 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:44.975 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:44.975 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:44.975 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:44.975 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:44.975 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:44.975 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:44.975 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:44.975 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:44.975 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:44.975 
10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:44.975 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:44.975 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:44.975 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:44.975 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:44.975 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:44.975 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:44.975 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:44.975 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:44.975 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:44.975 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:44.975 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:44.975 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:44.975 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:44.975 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:44.975 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:44.975 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:44.975 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:44.975 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:44.975 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:44.975 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:44.975 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:44.975 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:44.975 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:44.975 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:44.975 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:44.975 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:44.975 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:44.975 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:44.975 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:44.975 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:44.975 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:44.975 Found net devices under 0000:86:00.0: cvl_0_0 00:18:44.975 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:44.975 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:44.976 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:44.976 10:27:29 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:44.976 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:44.976 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:44.976 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:44.976 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:44.976 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:44.976 Found net devices under 0000:86:00.1: cvl_0_1 00:18:44.976 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:44.976 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:44.976 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:18:44.976 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:44.976 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:44.976 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:44.976 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:44.976 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:44.976 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:44.976 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:44.976 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:44.976 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:44.976 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:44.976 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:44.976 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:44.976 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:44.976 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:44.976 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:44.976 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:44.976 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:44.976 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:44.976 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:44.976 10:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:45.235 10:27:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:45.235 10:27:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:45.235 10:27:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:45.235 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:45.235 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:18:45.235 00:18:45.235 --- 10.0.0.2 ping statistics --- 00:18:45.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.235 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:18:45.235 10:27:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:45.235 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:45.235 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:18:45.235 00:18:45.235 --- 10.0.0.1 ping statistics --- 00:18:45.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.235 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:18:45.235 10:27:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:45.235 10:27:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:18:45.235 10:27:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:45.235 10:27:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:45.235 10:27:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:45.235 10:27:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:45.235 10:27:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:45.235 10:27:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:45.235 10:27:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:45.235 10:27:30 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:18:45.235 10:27:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:45.235 10:27:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:45.235 10:27:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:45.235 10:27:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=2386763 00:18:45.235 10:27:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 2386763 00:18:45.235 10:27:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:45.235 10:27:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 2386763 ']' 00:18:45.235 10:27:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:45.235 10:27:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:45.235 10:27:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:45.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:45.235 10:27:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:45.235 10:27:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:45.235 [2024-07-14 10:27:30.153572] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
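The -m arguments are plain CPU bitmasks, and they line up with the "Reactor started on core N" notices in this log: the bdev_io_wait target ran with -m 0xF (cores 0-3) and its four bdevperf instances with 0x10/0x20/0x40/0x80 (cores 4-7), while this queue_depth target is pinned to a single core with -m 0x2. As a worked mapping:

    # SPDK -m masks: bit i set  =>  a reactor on core i
    #   0x2  -> core 1              (this nvmf_tgt)
    #   0xF  -> cores 0,1,2,3       (the previous test's nvmf_tgt)
    #   0x10 -> core 4   0x20 -> core 5   0x40 -> core 6   0x80 -> core 7   (the four bdevperf runs)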
00:18:45.235 [2024-07-14 10:27:30.153615] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:45.235 EAL: No free 2048 kB hugepages reported on node 1 00:18:45.494 [2024-07-14 10:27:30.222380] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.494 [2024-07-14 10:27:30.262197] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:45.494 [2024-07-14 10:27:30.262239] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:45.494 [2024-07-14 10:27:30.262246] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:45.495 [2024-07-14 10:27:30.262253] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:45.495 [2024-07-14 10:27:30.262262] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:45.495 [2024-07-14 10:27:30.262282] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:46.063 10:27:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:46.063 10:27:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:18:46.063 10:27:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:46.063 10:27:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:46.063 10:27:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:46.063 10:27:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:46.063 10:27:30 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:46.063 10:27:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.063 10:27:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:46.063 [2024-07-14 10:27:30.988785] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:46.063 10:27:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.063 10:27:30 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:46.063 10:27:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.063 10:27:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:46.063 Malloc0 00:18:46.063 10:27:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.063 10:27:31 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:46.063 10:27:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.063 10:27:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:46.063 10:27:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.063 10:27:31 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:46.063 10:27:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.063 
10:27:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:46.321 10:27:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.321 10:27:31 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:46.321 10:27:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.321 10:27:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:46.321 [2024-07-14 10:27:31.052161] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:46.321 10:27:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.321 10:27:31 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2387008 00:18:46.321 10:27:31 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:46.321 10:27:31 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:18:46.321 10:27:31 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2387008 /var/tmp/bdevperf.sock 00:18:46.321 10:27:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 2387008 ']' 00:18:46.321 10:27:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:46.321 10:27:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:46.321 10:27:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:46.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:46.321 10:27:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:46.322 10:27:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:46.322 [2024-07-14 10:27:31.101854] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
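The target configuration issued by queue_depth.sh above reduces to five RPCs (a TCP transport, a 64 MiB Malloc bdev with 512-byte blocks, one subsystem carrying that bdev as a namespace, and a listener on the target address), after which bdevperf is started in waiting mode. A minimal restatement, using scripts/rpc.py as a stand-in for the harness's rpc_cmd wrapper and with paths shortened:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # bdevperf idles (-z) on its own RPC socket until a controller is attached and tests are triggered
    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
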
00:18:46.322 [2024-07-14 10:27:31.101894] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2387008 ] 00:18:46.322 EAL: No free 2048 kB hugepages reported on node 1 00:18:46.322 [2024-07-14 10:27:31.170797] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.322 [2024-07-14 10:27:31.211706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:46.322 10:27:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:46.322 10:27:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:18:46.322 10:27:31 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:46.322 10:27:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.322 10:27:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:46.580 NVMe0n1 00:18:46.580 10:27:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.580 10:27:31 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:46.839 Running I/O for 10 seconds... 00:18:56.819 00:18:56.819 Latency(us) 00:18:56.819 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:56.819 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:18:56.819 Verification LBA range: start 0x0 length 0x4000 00:18:56.819 NVMe0n1 : 10.05 12317.92 48.12 0.00 0.00 82877.96 13050.21 54708.31 00:18:56.819 =================================================================================================================== 00:18:56.819 Total : 12317.92 48.12 0.00 0.00 82877.96 13050.21 54708.31 00:18:56.819 0 00:18:56.819 10:27:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2387008 00:18:56.819 10:27:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 2387008 ']' 00:18:56.819 10:27:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 2387008 00:18:56.819 10:27:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:18:56.819 10:27:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:56.819 10:27:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2387008 00:18:56.819 10:27:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:56.819 10:27:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:56.819 10:27:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2387008' 00:18:56.819 killing process with pid 2387008 00:18:56.819 10:27:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 2387008 00:18:56.819 Received shutdown signal, test time was about 10.000000 seconds 00:18:56.819 00:18:56.819 Latency(us) 00:18:56.819 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:56.819 
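With the target listening, the initiator side of the queue-depth test is driven entirely over bdevperf's RPC socket: a controller is attached across the namespace boundary, then perform_tests starts the queued I/O. Sketched below with the arguments taken from the trace and paths shortened:

    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

In this run the 4096-byte verify workload at queue depth 1024 sustained roughly 12,318 IOPS (about 48 MiB/s) against the Malloc-backed namespace over the 10-second window.
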
=================================================================================================================== 00:18:56.819 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:56.819 10:27:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 2387008 00:18:57.079 10:27:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:57.080 10:27:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:18:57.080 10:27:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:57.080 10:27:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:18:57.080 10:27:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:57.080 10:27:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:18:57.080 10:27:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:57.080 10:27:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:57.080 rmmod nvme_tcp 00:18:57.080 rmmod nvme_fabrics 00:18:57.080 rmmod nvme_keyring 00:18:57.080 10:27:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:57.080 10:27:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:18:57.080 10:27:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:18:57.080 10:27:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 2386763 ']' 00:18:57.080 10:27:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 2386763 00:18:57.080 10:27:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 2386763 ']' 00:18:57.080 10:27:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 2386763 00:18:57.080 10:27:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:18:57.080 10:27:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:57.080 10:27:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2386763 00:18:57.080 10:27:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:57.080 10:27:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:57.080 10:27:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2386763' 00:18:57.080 killing process with pid 2386763 00:18:57.080 10:27:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 2386763 00:18:57.080 10:27:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 2386763 00:18:57.339 10:27:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:57.339 10:27:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:57.339 10:27:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:57.339 10:27:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:57.339 10:27:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:57.339 10:27:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:57.339 10:27:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:57.339 10:27:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:59.877 10:27:44 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:59.877 00:18:59.877 real 0m20.065s 00:18:59.877 user 0m23.739s 00:18:59.877 sys 0m5.861s 00:18:59.877 10:27:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:59.877 10:27:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:59.877 ************************************ 00:18:59.877 END TEST nvmf_queue_depth 00:18:59.877 ************************************ 00:18:59.877 10:27:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:59.877 10:27:44 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:59.877 10:27:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:59.877 10:27:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:59.877 10:27:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:59.877 ************************************ 00:18:59.877 START TEST nvmf_target_multipath 00:18:59.877 ************************************ 00:18:59.877 10:27:44 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:59.877 * Looking for test storage... 00:18:59.877 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:59.877 10:27:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:59.877 10:27:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:18:59.877 10:27:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:59.877 10:27:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:59.877 10:27:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:59.877 10:27:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:59.877 10:27:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:59.877 10:27:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:59.877 10:27:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:59.877 10:27:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:59.877 10:27:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:59.877 10:27:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:59.877 10:27:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:59.877 10:27:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:59.877 10:27:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:59.877 10:27:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:59.877 10:27:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:59.877 10:27:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:59.877 10:27:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:59.877 10:27:44 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:59.877 10:27:44 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:59.877 10:27:44 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:59.877 10:27:44 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.877 10:27:44 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.877 10:27:44 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.877 10:27:44 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:18:59.877 10:27:44 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.877 10:27:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:18:59.877 10:27:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:59.877 10:27:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:59.877 10:27:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:59.877 10:27:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:59.877 10:27:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:18:59.878 10:27:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:59.878 10:27:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:59.878 10:27:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:59.878 10:27:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:59.878 10:27:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:59.878 10:27:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:18:59.878 10:27:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:59.878 10:27:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:18:59.878 10:27:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:59.878 10:27:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:59.878 10:27:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:59.878 10:27:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:59.878 10:27:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:59.878 10:27:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:59.878 10:27:44 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:59.878 10:27:44 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:59.878 10:27:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:59.878 10:27:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:59.878 10:27:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:18:59.878 10:27:44 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:05.151 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:05.151 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:19:05.151 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:05.151 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:05.151 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:05.151 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:05.151 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:05.151 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:19:05.151 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:05.151 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:19:05.151 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:19:05.151 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:19:05.151 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:19:05.151 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:19:05.151 10:27:49 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@298 -- # local -ga mlx 00:19:05.151 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:05.151 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:05.151 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:05.151 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:05.151 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:05.151 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:05.151 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:05.151 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:05.151 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:05.151 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:05.151 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:05.151 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:05.151 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:05.151 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:05.151 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:05.151 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:05.151 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:05.151 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:05.151 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:05.151 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:05.151 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:05.151 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:05.151 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:05.151 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:05.151 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:05.151 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:05.151 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:05.151 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:05.151 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:05.151 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:05.151 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:05.151 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:05.151 10:27:49 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:05.151 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:05.152 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:05.152 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:05.152 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:05.152 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:05.152 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:05.152 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:05.152 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:05.152 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:05.152 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:05.152 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:05.152 Found net devices under 0000:86:00.0: cvl_0_0 00:19:05.152 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:05.152 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:05.152 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:05.152 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:05.152 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:05.152 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:05.152 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:05.152 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:05.152 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:05.152 Found net devices under 0000:86:00.1: cvl_0_1 00:19:05.152 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:05.152 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:05.152 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:19:05.152 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:05.152 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:05.152 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:05.152 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:05.152 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:05.152 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:05.152 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:05.152 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:05.152 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:05.152 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:05.152 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:05.152 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:05.152 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:05.152 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:05.152 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:05.152 10:27:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:05.152 10:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:05.152 10:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:05.152 10:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:05.152 10:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:05.463 10:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:05.463 10:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:05.463 10:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:05.463 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:05.463 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms 00:19:05.463 00:19:05.463 --- 10.0.0.2 ping statistics --- 00:19:05.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:05.463 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:19:05.463 10:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:05.463 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:05.463 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:19:05.463 00:19:05.463 --- 10.0.0.1 ping statistics --- 00:19:05.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:05.463 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:19:05.463 10:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:05.463 10:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:19:05.463 10:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:05.463 10:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:05.463 10:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:05.463 10:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:05.463 10:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:05.463 10:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:05.463 10:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:05.463 10:27:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:19:05.463 10:27:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:19:05.463 only one NIC for nvmf test 00:19:05.463 10:27:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:19:05.463 10:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:05.463 10:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:19:05.463 10:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:05.463 10:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:19:05.463 10:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:05.463 10:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:05.463 rmmod nvme_tcp 00:19:05.463 rmmod nvme_fabrics 00:19:05.463 rmmod nvme_keyring 00:19:05.463 10:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:05.463 10:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:19:05.463 10:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:19:05.463 10:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:19:05.463 10:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:05.463 10:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:05.463 10:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:05.463 10:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:05.463 10:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:05.463 10:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:05.463 10:27:50 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:05.463 10:27:50 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:07.368 10:27:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:19:07.368 10:27:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:19:07.368 10:27:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:19:07.368 10:27:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:07.368 10:27:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:19:07.628 10:27:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:07.628 10:27:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:19:07.628 10:27:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:07.628 10:27:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:07.628 10:27:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:07.628 10:27:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:19:07.628 10:27:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:19:07.628 10:27:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:19:07.628 10:27:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:07.628 10:27:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:07.628 10:27:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:07.628 10:27:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:07.628 10:27:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:07.628 10:27:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:07.628 10:27:52 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:07.628 10:27:52 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:07.628 10:27:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:07.628 00:19:07.628 real 0m8.030s 00:19:07.628 user 0m1.625s 00:19:07.628 sys 0m4.401s 00:19:07.628 10:27:52 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:07.628 10:27:52 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:07.628 ************************************ 00:19:07.628 END TEST nvmf_target_multipath 00:19:07.628 ************************************ 00:19:07.628 10:27:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:07.628 10:27:52 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:19:07.628 10:27:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:07.628 10:27:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:07.628 10:27:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:07.628 ************************************ 00:19:07.628 START TEST nvmf_zcopy 00:19:07.628 ************************************ 00:19:07.628 10:27:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:19:07.628 * Looking for test storage... 
00:19:07.628 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:07.628 10:27:52 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:07.628 10:27:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:19:07.628 10:27:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:07.628 10:27:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:07.628 10:27:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:07.628 10:27:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:07.628 10:27:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:07.628 10:27:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:07.628 10:27:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:07.628 10:27:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:07.628 10:27:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:07.628 10:27:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:07.628 10:27:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:07.628 10:27:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:07.628 10:27:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:07.628 10:27:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:07.628 10:27:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:07.628 10:27:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:07.628 10:27:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:07.628 10:27:52 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:07.628 10:27:52 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:07.629 10:27:52 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:07.629 10:27:52 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.629 10:27:52 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:19:07.629 10:27:52 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.629 10:27:52 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:19:07.629 10:27:52 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.629 10:27:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:19:07.629 10:27:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:07.629 10:27:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:07.629 10:27:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:07.629 10:27:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:07.629 10:27:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:07.629 10:27:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:07.629 10:27:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:07.629 10:27:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:07.629 10:27:52 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:19:07.629 10:27:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:07.629 10:27:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:07.629 10:27:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:07.629 10:27:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:07.629 10:27:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:07.629 10:27:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:07.629 10:27:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:07.629 10:27:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:07.629 10:27:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:07.629 10:27:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:07.629 10:27:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:19:07.629 10:27:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:14.200 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:14.200 
10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:14.200 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:14.200 Found net devices under 0000:86:00.0: cvl_0_0 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:14.200 Found net devices under 0000:86:00.1: cvl_0_1 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:14.200 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:14.200 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:19:14.200 00:19:14.200 --- 10.0.0.2 ping statistics --- 00:19:14.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:14.200 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:14.200 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:14.200 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:19:14.200 00:19:14.200 --- 10.0.0.1 ping statistics --- 00:19:14.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:14.200 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=2395644 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 2395644 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 2395644 ']' 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:14.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:14.200 10:27:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:14.200 [2024-07-14 10:27:58.341749] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:19:14.201 [2024-07-14 10:27:58.341790] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:14.201 EAL: No free 2048 kB hugepages reported on node 1 00:19:14.201 [2024-07-14 10:27:58.412715] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.201 [2024-07-14 10:27:58.452284] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:14.201 [2024-07-14 10:27:58.452325] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
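As in the queue-depth test, the target application is started by nvmfappstart inside the target-side namespace so that its listener binds the namespaced interface, and the harness blocks until the RPC socket answers. A rough sketch of that pattern; the binary path is shortened, and the backgrounding detail is an assumption inferred from the pid the trace records:

    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!                  # pid captured here is assumed; the trace only shows its value
    waitforlisten "$nvmfpid"    # waits for /var/tmp/spdk.sock before any rpc_cmd is issued
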
00:19:14.201 [2024-07-14 10:27:58.452333] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:14.201 [2024-07-14 10:27:58.452341] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:14.201 [2024-07-14 10:27:58.452347] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:14.201 [2024-07-14 10:27:58.452364] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:14.201 10:27:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:14.201 10:27:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:19:14.201 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:14.201 10:27:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:14.201 10:27:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:14.201 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:14.201 10:27:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:19:14.201 10:27:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:19:14.201 10:27:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.201 10:27:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:14.201 [2024-07-14 10:27:58.588935] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:14.201 10:27:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.201 10:27:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:19:14.201 10:27:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.201 10:27:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:14.201 10:27:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.201 10:27:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:14.201 10:27:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.201 10:27:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:14.201 [2024-07-14 10:27:58.609107] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:14.201 10:27:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.201 10:27:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:14.201 10:27:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.201 10:27:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:14.201 10:27:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.201 10:27:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:19:14.201 10:27:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.201 10:27:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:14.201 malloc0 00:19:14.201 10:27:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.201 
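The target is now fully provisioned over RPC: a TCP transport with zero-copy enabled, subsystem nqn.2016-06.io.spdk:cnode1 with data and discovery listeners on 10.0.0.2:4420, and a 32 MiB malloc bdev (4096-byte blocks) that backs the namespace attached in the next step. For reference, the same bring-up replayed with scripts/rpc.py, the standalone equivalent of the rpc_cmd wrapper used here (default RPC socket /var/tmp/spdk.sock, as in the waitforlisten call above), would look roughly like this:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# transport flags exactly as in zcopy.sh@22 above; --zcopy enables zero-copy
$rpc nvmf_create_transport -t tcp -o -c 0 --zcopy

# subsystem with any-host access (-a), a serial number (-s) and a 10-namespace cap (-m),
# plus the data and discovery listeners on the namespaced target address
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# 32 MiB malloc bdev with a 4096-byte block size to back namespace 1
$rpc bdev_malloc_create 32 4096 -b malloc0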
10:27:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:14.201 10:27:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.201 10:27:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:14.201 10:27:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.201 10:27:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:19:14.201 10:27:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:19:14.201 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:19:14.201 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:19:14.201 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:14.201 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:14.201 { 00:19:14.201 "params": { 00:19:14.201 "name": "Nvme$subsystem", 00:19:14.201 "trtype": "$TEST_TRANSPORT", 00:19:14.201 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:14.201 "adrfam": "ipv4", 00:19:14.201 "trsvcid": "$NVMF_PORT", 00:19:14.201 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:14.201 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:14.201 "hdgst": ${hdgst:-false}, 00:19:14.201 "ddgst": ${ddgst:-false} 00:19:14.201 }, 00:19:14.201 "method": "bdev_nvme_attach_controller" 00:19:14.201 } 00:19:14.201 EOF 00:19:14.201 )") 00:19:14.201 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:19:14.201 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:19:14.201 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:19:14.201 10:27:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:14.201 "params": { 00:19:14.201 "name": "Nvme1", 00:19:14.201 "trtype": "tcp", 00:19:14.201 "traddr": "10.0.0.2", 00:19:14.201 "adrfam": "ipv4", 00:19:14.201 "trsvcid": "4420", 00:19:14.201 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:14.201 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:14.201 "hdgst": false, 00:19:14.201 "ddgst": false 00:19:14.201 }, 00:19:14.201 "method": "bdev_nvme_attach_controller" 00:19:14.201 }' 00:19:14.201 [2024-07-14 10:27:58.689017] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:19:14.201 [2024-07-14 10:27:58.689061] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2395671 ] 00:19:14.201 EAL: No free 2048 kB hugepages reported on node 1 00:19:14.201 [2024-07-14 10:27:58.758422] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.201 [2024-07-14 10:27:58.800014] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:14.201 Running I/O for 10 seconds... 
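The first bdevperf job attaches to the subsystem as an NVMe-oF/TCP initiator using the bdev_nvme_attach_controller parameters that gen_nvmf_target_json printed just above and fed to bdevperf over /dev/fd/62. A standalone reproduction would put the same parameters in an ordinary SPDK JSON config file; the "subsystems"/"bdev" wrapper below is an assumption based on SPDK's standard JSON config layout, since the log only shows the inner fragment:

cat > /tmp/bdevperf_nvmf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# same workload as zcopy.sh@33: 10 seconds of queue-depth-128 verify I/O with 8 KiB requests
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    --json /tmp/bdevperf_nvmf.json -t 10 -q 128 -w verify -o 8192

The verify results for this run follow directly below. The second bdevperf job (zcopy.sh@37, 5 seconds of 50/50 randrw) is launched the same way, and while it runs the test repeatedly issues nvmf_subsystem_add_ns for NSID 1, which is already attached; each attempt is rejected, which is what the long run of paired 'Requested NSID 1 already in use' / 'Unable to add namespace' ERROR lines in the remainder of this section records.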
00:19:24.184 00:19:24.184 Latency(us) 00:19:24.184 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:24.184 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:19:24.184 Verification LBA range: start 0x0 length 0x1000 00:19:24.184 Nvme1n1 : 10.01 8711.91 68.06 0.00 0.00 14649.82 1816.49 25188.62 00:19:24.184 =================================================================================================================== 00:19:24.184 Total : 8711.91 68.06 0.00 0.00 14649.82 1816.49 25188.62 00:19:24.444 10:28:09 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2397419 00:19:24.445 10:28:09 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:19:24.445 10:28:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:24.445 10:28:09 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:19:24.445 10:28:09 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:19:24.445 10:28:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:19:24.445 10:28:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:19:24.445 10:28:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:24.445 10:28:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:24.445 { 00:19:24.445 "params": { 00:19:24.445 "name": "Nvme$subsystem", 00:19:24.445 "trtype": "$TEST_TRANSPORT", 00:19:24.445 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:24.445 "adrfam": "ipv4", 00:19:24.445 "trsvcid": "$NVMF_PORT", 00:19:24.445 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:24.445 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:24.445 "hdgst": ${hdgst:-false}, 00:19:24.445 "ddgst": ${ddgst:-false} 00:19:24.445 }, 00:19:24.445 "method": "bdev_nvme_attach_controller" 00:19:24.445 } 00:19:24.445 EOF 00:19:24.445 )") 00:19:24.445 10:28:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:19:24.445 [2024-07-14 10:28:09.253457] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.445 [2024-07-14 10:28:09.253492] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.445 10:28:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:19:24.445 10:28:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:19:24.445 10:28:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:24.445 "params": { 00:19:24.445 "name": "Nvme1", 00:19:24.445 "trtype": "tcp", 00:19:24.445 "traddr": "10.0.0.2", 00:19:24.445 "adrfam": "ipv4", 00:19:24.445 "trsvcid": "4420", 00:19:24.445 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:24.445 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:24.445 "hdgst": false, 00:19:24.445 "ddgst": false 00:19:24.445 }, 00:19:24.445 "method": "bdev_nvme_attach_controller" 00:19:24.445 }' 00:19:24.445 [2024-07-14 10:28:09.265453] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.445 [2024-07-14 10:28:09.265466] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.445 [2024-07-14 10:28:09.273466] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.445 [2024-07-14 10:28:09.273477] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.445 [2024-07-14 10:28:09.285500] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.445 [2024-07-14 10:28:09.285515] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.445 [2024-07-14 10:28:09.290622] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:19:24.445 [2024-07-14 10:28:09.290664] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2397419 ] 00:19:24.445 [2024-07-14 10:28:09.297534] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.445 [2024-07-14 10:28:09.297545] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.445 [2024-07-14 10:28:09.309564] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.445 [2024-07-14 10:28:09.309575] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.445 EAL: No free 2048 kB hugepages reported on node 1 00:19:24.445 [2024-07-14 10:28:09.321597] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.445 [2024-07-14 10:28:09.321607] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.445 [2024-07-14 10:28:09.333631] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.445 [2024-07-14 10:28:09.333643] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.445 [2024-07-14 10:28:09.345662] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.445 [2024-07-14 10:28:09.345674] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.445 [2024-07-14 10:28:09.357696] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.445 [2024-07-14 10:28:09.357707] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.445 [2024-07-14 10:28:09.358700] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.445 [2024-07-14 10:28:09.369732] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.445 [2024-07-14 10:28:09.369745] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.445 [2024-07-14 10:28:09.381780] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.445 [2024-07-14 10:28:09.381809] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.445 [2024-07-14 10:28:09.393800] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.445 [2024-07-14 10:28:09.393813] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.445 [2024-07-14 10:28:09.398984] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:24.445 [2024-07-14 10:28:09.405828] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.445 [2024-07-14 10:28:09.405841] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.445 [2024-07-14 10:28:09.417872] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.445 [2024-07-14 10:28:09.417892] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.704 [2024-07-14 10:28:09.429900] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.704 [2024-07-14 10:28:09.429915] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.704 [2024-07-14 10:28:09.441924] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.704 [2024-07-14 10:28:09.441937] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.704 [2024-07-14 10:28:09.453956] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.704 [2024-07-14 10:28:09.453968] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.704 [2024-07-14 10:28:09.465999] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.704 [2024-07-14 10:28:09.466009] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.704 [2024-07-14 10:28:09.478015] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.704 [2024-07-14 10:28:09.478029] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.704 [2024-07-14 10:28:09.490060] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.704 [2024-07-14 10:28:09.490079] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.704 [2024-07-14 10:28:09.502083] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.704 [2024-07-14 10:28:09.502096] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.705 [2024-07-14 10:28:09.514115] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.705 [2024-07-14 10:28:09.514128] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.705 [2024-07-14 10:28:09.526144] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.705 [2024-07-14 10:28:09.526153] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.705 [2024-07-14 10:28:09.538178] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.705 [2024-07-14 10:28:09.538188] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:19:24.705 [2024-07-14 10:28:09.550212] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.705 [2024-07-14 10:28:09.550229] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.705 [2024-07-14 10:28:09.562251] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.705 [2024-07-14 10:28:09.562266] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.705 [2024-07-14 10:28:09.574285] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.705 [2024-07-14 10:28:09.574298] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.705 [2024-07-14 10:28:09.622516] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.705 [2024-07-14 10:28:09.622533] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.705 Running I/O for 5 seconds... 00:19:24.705 [2024-07-14 10:28:09.634456] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.705 [2024-07-14 10:28:09.634468] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.705 [2024-07-14 10:28:09.646963] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.705 [2024-07-14 10:28:09.646983] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.705 [2024-07-14 10:28:09.657777] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.705 [2024-07-14 10:28:09.657798] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.705 [2024-07-14 10:28:09.672201] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.705 [2024-07-14 10:28:09.672221] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.705 [2024-07-14 10:28:09.686191] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.705 [2024-07-14 10:28:09.686212] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.964 [2024-07-14 10:28:09.697003] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.964 [2024-07-14 10:28:09.697022] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.964 [2024-07-14 10:28:09.711368] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.964 [2024-07-14 10:28:09.711386] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.964 [2024-07-14 10:28:09.725121] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.964 [2024-07-14 10:28:09.725139] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.964 [2024-07-14 10:28:09.734154] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.964 [2024-07-14 10:28:09.734173] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.964 [2024-07-14 10:28:09.748614] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.964 [2024-07-14 10:28:09.748636] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.964 [2024-07-14 10:28:09.757895] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.964 [2024-07-14 10:28:09.757913] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.964 [2024-07-14 10:28:09.772613] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.964 [2024-07-14 10:28:09.772633] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.964 [2024-07-14 10:28:09.786722] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.964 [2024-07-14 10:28:09.786740] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.964 [2024-07-14 10:28:09.795744] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.964 [2024-07-14 10:28:09.795762] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.964 [2024-07-14 10:28:09.804755] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.964 [2024-07-14 10:28:09.804774] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.964 [2024-07-14 10:28:09.819498] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.964 [2024-07-14 10:28:09.819517] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.964 [2024-07-14 10:28:09.830296] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.964 [2024-07-14 10:28:09.830314] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.964 [2024-07-14 10:28:09.844403] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.964 [2024-07-14 10:28:09.844422] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.964 [2024-07-14 10:28:09.857777] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.964 [2024-07-14 10:28:09.857795] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.964 [2024-07-14 10:28:09.871956] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.964 [2024-07-14 10:28:09.871975] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.964 [2024-07-14 10:28:09.885875] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.964 [2024-07-14 10:28:09.885894] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.964 [2024-07-14 10:28:09.900006] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.964 [2024-07-14 10:28:09.900025] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.964 [2024-07-14 10:28:09.913837] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.964 [2024-07-14 10:28:09.913856] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.964 [2024-07-14 10:28:09.927810] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.964 [2024-07-14 10:28:09.927829] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.964 [2024-07-14 10:28:09.941423] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.964 [2024-07-14 10:28:09.941442] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.224 [2024-07-14 10:28:09.950552] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.224 [2024-07-14 10:28:09.950572] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.224 [2024-07-14 10:28:09.965170] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.224 [2024-07-14 10:28:09.965189] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.224 [2024-07-14 10:28:09.978917] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.224 [2024-07-14 10:28:09.978936] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.224 [2024-07-14 10:28:09.992895] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.224 [2024-07-14 10:28:09.992913] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.224 [2024-07-14 10:28:10.002124] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.224 [2024-07-14 10:28:10.002143] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.224 [2024-07-14 10:28:10.011138] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.224 [2024-07-14 10:28:10.011157] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.224 [2024-07-14 10:28:10.025926] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.224 [2024-07-14 10:28:10.025946] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.224 [2024-07-14 10:28:10.040345] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.224 [2024-07-14 10:28:10.040366] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.224 [2024-07-14 10:28:10.054950] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.224 [2024-07-14 10:28:10.054969] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.224 [2024-07-14 10:28:10.066314] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.224 [2024-07-14 10:28:10.066333] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.224 [2024-07-14 10:28:10.075168] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.224 [2024-07-14 10:28:10.075187] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.224 [2024-07-14 10:28:10.083916] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.224 [2024-07-14 10:28:10.083935] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.224 [2024-07-14 10:28:10.098172] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.224 [2024-07-14 10:28:10.098191] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.224 [2024-07-14 10:28:10.107034] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.224 [2024-07-14 10:28:10.107052] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.224 [2024-07-14 10:28:10.116501] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.224 [2024-07-14 10:28:10.116521] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.224 [2024-07-14 10:28:10.130608] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.224 [2024-07-14 10:28:10.130627] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.224 [2024-07-14 10:28:10.139750] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.224 [2024-07-14 10:28:10.139769] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.224 [2024-07-14 10:28:10.153553] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.224 [2024-07-14 10:28:10.153572] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.224 [2024-07-14 10:28:10.167154] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.224 [2024-07-14 10:28:10.167173] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.224 [2024-07-14 10:28:10.175814] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.224 [2024-07-14 10:28:10.175832] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.224 [2024-07-14 10:28:10.184940] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.224 [2024-07-14 10:28:10.184958] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.224 [2024-07-14 10:28:10.194065] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.224 [2024-07-14 10:28:10.194083] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.483 [2024-07-14 10:28:10.208353] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.483 [2024-07-14 10:28:10.208373] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.483 [2024-07-14 10:28:10.217267] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.483 [2024-07-14 10:28:10.217286] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.483 [2024-07-14 10:28:10.225876] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.483 [2024-07-14 10:28:10.225895] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.483 [2024-07-14 10:28:10.234903] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.483 [2024-07-14 10:28:10.234921] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.483 [2024-07-14 10:28:10.244151] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.483 [2024-07-14 10:28:10.244169] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.483 [2024-07-14 10:28:10.258402] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.483 [2024-07-14 10:28:10.258421] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.483 [2024-07-14 10:28:10.271422] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.483 [2024-07-14 10:28:10.271442] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.483 [2024-07-14 10:28:10.285406] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.483 [2024-07-14 10:28:10.285424] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.483 [2024-07-14 10:28:10.294312] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.483 [2024-07-14 10:28:10.294331] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.483 [2024-07-14 10:28:10.303022] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.483 [2024-07-14 10:28:10.303040] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.483 [2024-07-14 10:28:10.317509] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.483 [2024-07-14 10:28:10.317528] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.483 [2024-07-14 10:28:10.331684] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.483 [2024-07-14 10:28:10.331702] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.483 [2024-07-14 10:28:10.345363] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.483 [2024-07-14 10:28:10.345381] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.483 [2024-07-14 10:28:10.359484] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.483 [2024-07-14 10:28:10.359505] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.483 [2024-07-14 10:28:10.373133] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.483 [2024-07-14 10:28:10.373153] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.483 [2024-07-14 10:28:10.387426] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.483 [2024-07-14 10:28:10.387446] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.483 [2024-07-14 10:28:10.401413] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.483 [2024-07-14 10:28:10.401434] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.483 [2024-07-14 10:28:10.415449] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.483 [2024-07-14 10:28:10.415470] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.483 [2024-07-14 10:28:10.424376] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.483 [2024-07-14 10:28:10.424396] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.483 [2024-07-14 10:28:10.434348] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.483 [2024-07-14 10:28:10.434368] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.483 [2024-07-14 10:28:10.448113] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.483 [2024-07-14 10:28:10.448133] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.483 [2024-07-14 10:28:10.456927] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.483 [2024-07-14 10:28:10.456947] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.743 [2024-07-14 10:28:10.465726] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.743 [2024-07-14 10:28:10.465746] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.743 [2024-07-14 10:28:10.475160] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.743 [2024-07-14 10:28:10.475179] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.743 [2024-07-14 10:28:10.489557] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.743 [2024-07-14 10:28:10.489577] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.743 [2024-07-14 10:28:10.503694] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.743 [2024-07-14 10:28:10.503714] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.743 [2024-07-14 10:28:10.512526] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.743 [2024-07-14 10:28:10.512544] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.743 [2024-07-14 10:28:10.521275] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.743 [2024-07-14 10:28:10.521295] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.743 [2024-07-14 10:28:10.530467] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.743 [2024-07-14 10:28:10.530487] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.743 [2024-07-14 10:28:10.540183] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.743 [2024-07-14 10:28:10.540201] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.743 [2024-07-14 10:28:10.554397] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.743 [2024-07-14 10:28:10.554416] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.743 [2024-07-14 10:28:10.567468] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.743 [2024-07-14 10:28:10.567488] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.743 [2024-07-14 10:28:10.576170] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.743 [2024-07-14 10:28:10.576189] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.743 [2024-07-14 10:28:10.585322] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.743 [2024-07-14 10:28:10.585341] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.743 [2024-07-14 10:28:10.599458] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.743 [2024-07-14 10:28:10.599477] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.743 [2024-07-14 10:28:10.613455] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.743 [2024-07-14 10:28:10.613474] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.743 [2024-07-14 10:28:10.627914] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.743 [2024-07-14 10:28:10.627933] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.743 [2024-07-14 10:28:10.643215] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.743 [2024-07-14 10:28:10.643241] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.743 [2024-07-14 10:28:10.657597] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.743 [2024-07-14 10:28:10.657617] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.743 [2024-07-14 10:28:10.670855] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.743 [2024-07-14 10:28:10.670874] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.743 [2024-07-14 10:28:10.684685] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.743 [2024-07-14 10:28:10.684704] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.743 [2024-07-14 10:28:10.693786] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.743 [2024-07-14 10:28:10.693804] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.743 [2024-07-14 10:28:10.708426] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.743 [2024-07-14 10:28:10.708445] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.743 [2024-07-14 10:28:10.717400] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.743 [2024-07-14 10:28:10.717418] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.003 [2024-07-14 10:28:10.731635] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.003 [2024-07-14 10:28:10.731654] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.003 [2024-07-14 10:28:10.745502] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.003 [2024-07-14 10:28:10.745524] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.003 [2024-07-14 10:28:10.754381] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.003 [2024-07-14 10:28:10.754400] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.003 [2024-07-14 10:28:10.763107] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.003 [2024-07-14 10:28:10.763125] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.003 [2024-07-14 10:28:10.772385] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.003 [2024-07-14 10:28:10.772405] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.003 [2024-07-14 10:28:10.787210] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.003 [2024-07-14 10:28:10.787234] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.003 [2024-07-14 10:28:10.802581] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.003 [2024-07-14 10:28:10.802599] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.003 [2024-07-14 10:28:10.816589] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.003 [2024-07-14 10:28:10.816608] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.003 [2024-07-14 10:28:10.830079] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.003 [2024-07-14 10:28:10.830097] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.003 [2024-07-14 10:28:10.844353] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.003 [2024-07-14 10:28:10.844372] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.003 [2024-07-14 10:28:10.858360] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.003 [2024-07-14 10:28:10.858379] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.003 [2024-07-14 10:28:10.872212] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.003 [2024-07-14 10:28:10.872238] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.003 [2024-07-14 10:28:10.885709] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.003 [2024-07-14 10:28:10.885733] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.003 [2024-07-14 10:28:10.894419] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.003 [2024-07-14 10:28:10.894437] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.003 [2024-07-14 10:28:10.903435] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.003 [2024-07-14 10:28:10.903453] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.003 [2024-07-14 10:28:10.912603] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.003 [2024-07-14 10:28:10.912621] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.003 [2024-07-14 10:28:10.927021] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.003 [2024-07-14 10:28:10.927039] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.003 [2024-07-14 10:28:10.941231] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.003 [2024-07-14 10:28:10.941249] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.003 [2024-07-14 10:28:10.952055] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.003 [2024-07-14 10:28:10.952074] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.003 [2024-07-14 10:28:10.966765] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.003 [2024-07-14 10:28:10.966784] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.003 [2024-07-14 10:28:10.977524] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.003 [2024-07-14 10:28:10.977542] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.262 [2024-07-14 10:28:10.991840] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.262 [2024-07-14 10:28:10.991859] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.262 [2024-07-14 10:28:11.005997] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.262 [2024-07-14 10:28:11.006015] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.262 [2024-07-14 10:28:11.016995] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.262 [2024-07-14 10:28:11.017014] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.262 [2024-07-14 10:28:11.025927] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.262 [2024-07-14 10:28:11.025946] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.262 [2024-07-14 10:28:11.034983] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.262 [2024-07-14 10:28:11.035002] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.262 [2024-07-14 10:28:11.049079] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.262 [2024-07-14 10:28:11.049097] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.262 [2024-07-14 10:28:11.062520] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.262 [2024-07-14 10:28:11.062539] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.262 [2024-07-14 10:28:11.077004] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.262 [2024-07-14 10:28:11.077023] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.262 [2024-07-14 10:28:11.091299] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.262 [2024-07-14 10:28:11.091317] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.262 [2024-07-14 10:28:11.102435] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.262 [2024-07-14 10:28:11.102454] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.262 [2024-07-14 10:28:11.116678] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.262 [2024-07-14 10:28:11.116702] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.262 [2024-07-14 10:28:11.130487] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.262 [2024-07-14 10:28:11.130507] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.262 [2024-07-14 10:28:11.144484] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.262 [2024-07-14 10:28:11.144502] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.262 [2024-07-14 10:28:11.158372] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.262 [2024-07-14 10:28:11.158391] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.262 [2024-07-14 10:28:11.172826] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.262 [2024-07-14 10:28:11.172844] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.263 [2024-07-14 10:28:11.187988] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.263 [2024-07-14 10:28:11.188007] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.263 [2024-07-14 10:28:11.201937] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.263 [2024-07-14 10:28:11.201955] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.263 [2024-07-14 10:28:11.215841] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.263 [2024-07-14 10:28:11.215859] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.263 [2024-07-14 10:28:11.230255] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.263 [2024-07-14 10:28:11.230275] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.263 [2024-07-14 10:28:11.241067] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.263 [2024-07-14 10:28:11.241087] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.521 [2024-07-14 10:28:11.255382] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.521 [2024-07-14 10:28:11.255401] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.521 [2024-07-14 10:28:11.268928] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.521 [2024-07-14 10:28:11.268947] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.521 [2024-07-14 10:28:11.282858] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.521 [2024-07-14 10:28:11.282877] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.521 [2024-07-14 10:28:11.291805] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.521 [2024-07-14 10:28:11.291823] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.521 [2024-07-14 10:28:11.301202] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.521 [2024-07-14 10:28:11.301220] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.521 [2024-07-14 10:28:11.315763] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.521 [2024-07-14 10:28:11.315781] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.521 [2024-07-14 10:28:11.329155] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.521 [2024-07-14 10:28:11.329174] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.521 [2024-07-14 10:28:11.338014] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.521 [2024-07-14 10:28:11.338032] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.521 [2024-07-14 10:28:11.352028] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.521 [2024-07-14 10:28:11.352047] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.521 [2024-07-14 10:28:11.365575] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.521 [2024-07-14 10:28:11.365600] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.521 [2024-07-14 10:28:11.379420] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.521 [2024-07-14 10:28:11.379440] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.521 [2024-07-14 10:28:11.393453] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.521 [2024-07-14 10:28:11.393472] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.521 [2024-07-14 10:28:11.407204] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.521 [2024-07-14 10:28:11.407223] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.521 [2024-07-14 10:28:11.416058] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.521 [2024-07-14 10:28:11.416077] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.521 [2024-07-14 10:28:11.424788] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.521 [2024-07-14 10:28:11.424806] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.521 [2024-07-14 10:28:11.439532] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.521 [2024-07-14 10:28:11.439550] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.521 [2024-07-14 10:28:11.448520] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.521 [2024-07-14 10:28:11.448538] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.522 [2024-07-14 10:28:11.462916] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.522 [2024-07-14 10:28:11.462934] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.522 [2024-07-14 10:28:11.471763] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.522 [2024-07-14 10:28:11.471781] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.522 [2024-07-14 10:28:11.481248] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.522 [2024-07-14 10:28:11.481266] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.522 [2024-07-14 10:28:11.495711] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.522 [2024-07-14 10:28:11.495729] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.781 [2024-07-14 10:28:11.509572] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.781 [2024-07-14 10:28:11.509591] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.781 [2024-07-14 10:28:11.523482] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.781 [2024-07-14 10:28:11.523502] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.781 [2024-07-14 10:28:11.537517] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.781 [2024-07-14 10:28:11.537537] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.781 [2024-07-14 10:28:11.551232] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.781 [2024-07-14 10:28:11.551251] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.781 [2024-07-14 10:28:11.565318] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.781 [2024-07-14 10:28:11.565338] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.781 [2024-07-14 10:28:11.579347] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.781 [2024-07-14 10:28:11.579368] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.781 [2024-07-14 10:28:11.590375] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.781 [2024-07-14 10:28:11.590394] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.781 [2024-07-14 10:28:11.604413] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.781 [2024-07-14 10:28:11.604437] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.781 [2024-07-14 10:28:11.613264] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.781 [2024-07-14 10:28:11.613282] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.781 [2024-07-14 10:28:11.627616] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.781 [2024-07-14 10:28:11.627635] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.781 [2024-07-14 10:28:11.641181] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.781 [2024-07-14 10:28:11.641200] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.781 [2024-07-14 10:28:11.650219] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.781 [2024-07-14 10:28:11.650241] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.781 [2024-07-14 10:28:11.664593] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.781 [2024-07-14 10:28:11.664612] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.781 [2024-07-14 10:28:11.673362] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.781 [2024-07-14 10:28:11.673381] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.781 [2024-07-14 10:28:11.687738] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.781 [2024-07-14 10:28:11.687757] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.781 [2024-07-14 10:28:11.696793] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.781 [2024-07-14 10:28:11.696812] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.781 [2024-07-14 10:28:11.710827] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.781 [2024-07-14 10:28:11.710846] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.781 [2024-07-14 10:28:11.719779] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.781 [2024-07-14 10:28:11.719797] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.781 [2024-07-14 10:28:11.729036] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.781 [2024-07-14 10:28:11.729055] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.781 [2024-07-14 10:28:11.743337] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.781 [2024-07-14 10:28:11.743356] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.781 [2024-07-14 10:28:11.752218] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.781 [2024-07-14 10:28:11.752242] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.040 [2024-07-14 10:28:11.766678] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.040 [2024-07-14 10:28:11.766698] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.040 [2024-07-14 10:28:11.780459] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.040 [2024-07-14 10:28:11.780480] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.040 [2024-07-14 10:28:11.789373] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.040 [2024-07-14 10:28:11.789393] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.040 [2024-07-14 10:28:11.803613] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.040 [2024-07-14 10:28:11.803633] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.040 [2024-07-14 10:28:11.817365] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.040 [2024-07-14 10:28:11.817387] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.040 [2024-07-14 10:28:11.830989] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.040 [2024-07-14 10:28:11.831010] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.040 [2024-07-14 10:28:11.844917] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.040 [2024-07-14 10:28:11.844936] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.040 [2024-07-14 10:28:11.858709] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.040 [2024-07-14 10:28:11.858728] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.040 [2024-07-14 10:28:11.872786] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.040 [2024-07-14 10:28:11.872805] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.040 [2024-07-14 10:28:11.886434] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.040 [2024-07-14 10:28:11.886454] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.040 [2024-07-14 10:28:11.900250] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.040 [2024-07-14 10:28:11.900269] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.040 [2024-07-14 10:28:11.913884] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.040 [2024-07-14 10:28:11.913903] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.040 [2024-07-14 10:28:11.922875] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.040 [2024-07-14 10:28:11.922894] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.040 [2024-07-14 10:28:11.937484] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.040 [2024-07-14 10:28:11.937502] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.040 [2024-07-14 10:28:11.951123] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.040 [2024-07-14 10:28:11.951142] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.040 [2024-07-14 10:28:11.960119] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.040 [2024-07-14 10:28:11.960137] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.040 [2024-07-14 10:28:11.974438] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.040 [2024-07-14 10:28:11.974458] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.040 [2024-07-14 10:28:11.987891] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.040 [2024-07-14 10:28:11.987910] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.040 [2024-07-14 10:28:12.002108] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.040 [2024-07-14 10:28:12.002127] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.040 [2024-07-14 10:28:12.015656] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.040 [2024-07-14 10:28:12.015676] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.299 [2024-07-14 10:28:12.024702] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.300 [2024-07-14 10:28:12.024722] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.300 [2024-07-14 10:28:12.033715] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.300 [2024-07-14 10:28:12.033734] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.300 [2024-07-14 10:28:12.043552] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.300 [2024-07-14 10:28:12.043572] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.300 [2024-07-14 10:28:12.057289] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.300 [2024-07-14 10:28:12.057307] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.300 [2024-07-14 10:28:12.066571] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.300 [2024-07-14 10:28:12.066589] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.300 [2024-07-14 10:28:12.075462] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.300 [2024-07-14 10:28:12.075481] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.300 [2024-07-14 10:28:12.084764] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.300 [2024-07-14 10:28:12.084782] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.300 [2024-07-14 10:28:12.094060] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.300 [2024-07-14 10:28:12.094078] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.300 [2024-07-14 10:28:12.108490] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.300 [2024-07-14 10:28:12.108509] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.300 [2024-07-14 10:28:12.122530] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.300 [2024-07-14 10:28:12.122549] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.300 [2024-07-14 10:28:12.131359] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.300 [2024-07-14 10:28:12.131378] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.300 [2024-07-14 10:28:12.140178] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.300 [2024-07-14 10:28:12.140197] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.300 [2024-07-14 10:28:12.149479] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.300 [2024-07-14 10:28:12.149499] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.300 [2024-07-14 10:28:12.163844] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.300 [2024-07-14 10:28:12.163864] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.300 [2024-07-14 10:28:12.177390] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.300 [2024-07-14 10:28:12.177409] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.300 [2024-07-14 10:28:12.190839] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.300 [2024-07-14 10:28:12.190857] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.300 [2024-07-14 10:28:12.204537] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.300 [2024-07-14 10:28:12.204555] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.300 [2024-07-14 10:28:12.218395] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.300 [2024-07-14 10:28:12.218414] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.300 [2024-07-14 10:28:12.232078] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.300 [2024-07-14 10:28:12.232096] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.300 [2024-07-14 10:28:12.240889] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.300 [2024-07-14 10:28:12.240908] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.300 [2024-07-14 10:28:12.249957] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.300 [2024-07-14 10:28:12.249976] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.300 [2024-07-14 10:28:12.259063] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.300 [2024-07-14 10:28:12.259081] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.300 [2024-07-14 10:28:12.268419] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.300 [2024-07-14 10:28:12.268437] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.560 [2024-07-14 10:28:12.283099] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.560 [2024-07-14 10:28:12.283119] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.560 [2024-07-14 10:28:12.296846] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.560 [2024-07-14 10:28:12.296865] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.560 [2024-07-14 10:28:12.306242] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.560 [2024-07-14 10:28:12.306260] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.560 [2024-07-14 10:28:12.320634] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.560 [2024-07-14 10:28:12.320652] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.560 [2024-07-14 10:28:12.329237] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.560 [2024-07-14 10:28:12.329255] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.560 [2024-07-14 10:28:12.343328] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.560 [2024-07-14 10:28:12.343347] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.560 [2024-07-14 10:28:12.357391] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.560 [2024-07-14 10:28:12.357409] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.560 [2024-07-14 10:28:12.371036] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.560 [2024-07-14 10:28:12.371058] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.560 [2024-07-14 10:28:12.380071] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.560 [2024-07-14 10:28:12.380089] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.560 [2024-07-14 10:28:12.388928] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.560 [2024-07-14 10:28:12.388948] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.560 [2024-07-14 10:28:12.403467] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.560 [2024-07-14 10:28:12.403487] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.560 [2024-07-14 10:28:12.417060] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.560 [2024-07-14 10:28:12.417079] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.560 [2024-07-14 10:28:12.425727] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.560 [2024-07-14 10:28:12.425746] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.560 [2024-07-14 10:28:12.440135] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.560 [2024-07-14 10:28:12.440153] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.560 [2024-07-14 10:28:12.449070] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.560 [2024-07-14 10:28:12.449089] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.560 [2024-07-14 10:28:12.463405] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.560 [2024-07-14 10:28:12.463424] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.560 [2024-07-14 10:28:12.477510] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.560 [2024-07-14 10:28:12.477528] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.560 [2024-07-14 10:28:12.488352] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.560 [2024-07-14 10:28:12.488371] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.560 [2024-07-14 10:28:12.502458] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.560 [2024-07-14 10:28:12.502477] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.560 [2024-07-14 10:28:12.511614] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.560 [2024-07-14 10:28:12.511632] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.560 [2024-07-14 10:28:12.525734] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.560 [2024-07-14 10:28:12.525753] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.560 [2024-07-14 10:28:12.534533] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.560 [2024-07-14 10:28:12.534552] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.818 [2024-07-14 10:28:12.548887] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.819 [2024-07-14 10:28:12.548906] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.819 [2024-07-14 10:28:12.562486] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.819 [2024-07-14 10:28:12.562505] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.819 [2024-07-14 10:28:12.571433] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.819 [2024-07-14 10:28:12.571452] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.819 [2024-07-14 10:28:12.586098] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.819 [2024-07-14 10:28:12.586116] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.819 [2024-07-14 10:28:12.596638] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.819 [2024-07-14 10:28:12.596657] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.819 [2024-07-14 10:28:12.605993] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.819 [2024-07-14 10:28:12.606012] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.819 [2024-07-14 10:28:12.614732] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.819 [2024-07-14 10:28:12.614752] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.819 [2024-07-14 10:28:12.624432] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.819 [2024-07-14 10:28:12.624451] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.819 [2024-07-14 10:28:12.638350] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.819 [2024-07-14 10:28:12.638368] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.819 [2024-07-14 10:28:12.647285] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.819 [2024-07-14 10:28:12.647303] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.819 [2024-07-14 10:28:12.656104] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.819 [2024-07-14 10:28:12.656123] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.819 [2024-07-14 10:28:12.671064] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.819 [2024-07-14 10:28:12.671083] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.819 [2024-07-14 10:28:12.686845] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.819 [2024-07-14 10:28:12.686864] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.819 [2024-07-14 10:28:12.700639] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.819 [2024-07-14 10:28:12.700658] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.819 [2024-07-14 10:28:12.714254] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.819 [2024-07-14 10:28:12.714273] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.819 [2024-07-14 10:28:12.723207] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.819 [2024-07-14 10:28:12.723237] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.819 [2024-07-14 10:28:12.737434] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.819 [2024-07-14 10:28:12.737453] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.819 [2024-07-14 10:28:12.746305] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.819 [2024-07-14 10:28:12.746323] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.819 [2024-07-14 10:28:12.760550] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.819 [2024-07-14 10:28:12.760569] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.819 [2024-07-14 10:28:12.769457] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.819 [2024-07-14 10:28:12.769476] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.819 [2024-07-14 10:28:12.778368] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.819 [2024-07-14 10:28:12.778386] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.819 [2024-07-14 10:28:12.792905] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.819 [2024-07-14 10:28:12.792924] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.078 [2024-07-14 10:28:12.806920] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.078 [2024-07-14 10:28:12.806939] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.078 [2024-07-14 10:28:12.820474] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.078 [2024-07-14 10:28:12.820493] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.078 [2024-07-14 10:28:12.834337] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.078 [2024-07-14 10:28:12.834355] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.079 [2024-07-14 10:28:12.843269] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.079 [2024-07-14 10:28:12.843287] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.079 [2024-07-14 10:28:12.857070] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.079 [2024-07-14 10:28:12.857089] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.079 [2024-07-14 10:28:12.871041] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.079 [2024-07-14 10:28:12.871060] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.079 [2024-07-14 10:28:12.882081] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.079 [2024-07-14 10:28:12.882100] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.079 [2024-07-14 10:28:12.896398] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.079 [2024-07-14 10:28:12.896425] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.079 [2024-07-14 10:28:12.905376] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.079 [2024-07-14 10:28:12.905395] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.079 [2024-07-14 10:28:12.919457] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.079 [2024-07-14 10:28:12.919476] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.079 [2024-07-14 10:28:12.928275] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.079 [2024-07-14 10:28:12.928293] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.079 [2024-07-14 10:28:12.942235] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.079 [2024-07-14 10:28:12.942254] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.079 [2024-07-14 10:28:12.950949] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.079 [2024-07-14 10:28:12.950975] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.079 [2024-07-14 10:28:12.965208] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.079 [2024-07-14 10:28:12.965232] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.079 [2024-07-14 10:28:12.974175] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.079 [2024-07-14 10:28:12.974193] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.079 [2024-07-14 10:28:12.983174] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.079 [2024-07-14 10:28:12.983193] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.079 [2024-07-14 10:28:12.997446] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.079 [2024-07-14 10:28:12.997465] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.079 [2024-07-14 10:28:13.011215] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.079 [2024-07-14 10:28:13.011239] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.079 [2024-07-14 10:28:13.019884] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.079 [2024-07-14 10:28:13.019903] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.079 [2024-07-14 10:28:13.029040] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.079 [2024-07-14 10:28:13.029058] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.079 [2024-07-14 10:28:13.038283] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.079 [2024-07-14 10:28:13.038302] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.079 [2024-07-14 10:28:13.052995] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.079 [2024-07-14 10:28:13.053014] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.338 [2024-07-14 10:28:13.067051] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.338 [2024-07-14 10:28:13.067070] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.338 [2024-07-14 10:28:13.080766] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.338 [2024-07-14 10:28:13.080784] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.338 [2024-07-14 10:28:13.089697] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.338 [2024-07-14 10:28:13.089715] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.338 [2024-07-14 10:28:13.104071] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.338 [2024-07-14 10:28:13.104089] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.338 [2024-07-14 10:28:13.117778] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.338 [2024-07-14 10:28:13.117796] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.338 [2024-07-14 10:28:13.126724] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.338 [2024-07-14 10:28:13.126742] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.338 [2024-07-14 10:28:13.135637] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.338 [2024-07-14 10:28:13.135655] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.338 [2024-07-14 10:28:13.144762] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.338 [2024-07-14 10:28:13.144780] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.338 [2024-07-14 10:28:13.153843] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.338 [2024-07-14 10:28:13.153861] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.338 [2024-07-14 10:28:13.168130] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.338 [2024-07-14 10:28:13.168152] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.338 [2024-07-14 10:28:13.181694] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.338 [2024-07-14 10:28:13.181713] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.338 [2024-07-14 10:28:13.195699] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.338 [2024-07-14 10:28:13.195719] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.338 [2024-07-14 10:28:13.204739] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.338 [2024-07-14 10:28:13.204759] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.338 [2024-07-14 10:28:13.213538] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.338 [2024-07-14 10:28:13.213557] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.338 [2024-07-14 10:28:13.228077] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.338 [2024-07-14 10:28:13.228098] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.338 [2024-07-14 10:28:13.242030] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.338 [2024-07-14 10:28:13.242050] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.338 [2024-07-14 10:28:13.251012] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.338 [2024-07-14 10:28:13.251031] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.338 [2024-07-14 10:28:13.259694] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.338 [2024-07-14 10:28:13.259713] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.338 [2024-07-14 10:28:13.274608] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.338 [2024-07-14 10:28:13.274627] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.338 [2024-07-14 10:28:13.290321] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.338 [2024-07-14 10:28:13.290341] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.338 [2024-07-14 10:28:13.304645] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.338 [2024-07-14 10:28:13.304664] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.338 [2024-07-14 10:28:13.315472] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.338 [2024-07-14 10:28:13.315491] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.597 [2024-07-14 10:28:13.329642] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.597 [2024-07-14 10:28:13.329662] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.597 [2024-07-14 10:28:13.338453] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.597 [2024-07-14 10:28:13.338472] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.597 [2024-07-14 10:28:13.352744] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.597 [2024-07-14 10:28:13.352763] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.597 [2024-07-14 10:28:13.361896] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.597 [2024-07-14 10:28:13.361915] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.597 [2024-07-14 10:28:13.376445] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.597 [2024-07-14 10:28:13.376464] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.597 [2024-07-14 10:28:13.385304] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.597 [2024-07-14 10:28:13.385323] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.597 [2024-07-14 10:28:13.394125] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.597 [2024-07-14 10:28:13.394149] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.597 [2024-07-14 10:28:13.408554] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.597 [2024-07-14 10:28:13.408574] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.597 [2024-07-14 10:28:13.422203] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.597 [2024-07-14 10:28:13.422229] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.597 [2024-07-14 10:28:13.431299] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.597 [2024-07-14 10:28:13.431318] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.597 [2024-07-14 10:28:13.439982] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.597 [2024-07-14 10:28:13.440001] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.597 [2024-07-14 10:28:13.449316] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.597 [2024-07-14 10:28:13.449335] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.597 [2024-07-14 10:28:13.463568] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.597 [2024-07-14 10:28:13.463586] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.597 [2024-07-14 10:28:13.472636] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.597 [2024-07-14 10:28:13.472654] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.597 [2024-07-14 10:28:13.486921] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.597 [2024-07-14 10:28:13.486940] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.597 [2024-07-14 10:28:13.495677] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.597 [2024-07-14 10:28:13.495696] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.597 [2024-07-14 10:28:13.504197] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.597 [2024-07-14 10:28:13.504216] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.597 [2024-07-14 10:28:13.518842] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.597 [2024-07-14 10:28:13.518861] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.597 [2024-07-14 10:28:13.532794] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.597 [2024-07-14 10:28:13.532813] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.597 [2024-07-14 10:28:13.546572] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.597 [2024-07-14 10:28:13.546591] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.597 [2024-07-14 10:28:13.555304] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.597 [2024-07-14 10:28:13.555323] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.597 [2024-07-14 10:28:13.569338] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.597 [2024-07-14 10:28:13.569357] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.856 [2024-07-14 10:28:13.582820] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.856 [2024-07-14 10:28:13.582840] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.856 [2024-07-14 10:28:13.596447] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.856 [2024-07-14 10:28:13.596466] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.856 [2024-07-14 10:28:13.609933] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.856 [2024-07-14 10:28:13.609952] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.856 [2024-07-14 10:28:13.623794] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.856 [2024-07-14 10:28:13.623817] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.856 [2024-07-14 10:28:13.632705] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.856 [2024-07-14 10:28:13.632723] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.856 [2024-07-14 10:28:13.646997] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.856 [2024-07-14 10:28:13.647016] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.856 [2024-07-14 10:28:13.656089] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.856 [2024-07-14 10:28:13.656107] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.856 [2024-07-14 10:28:13.670449] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.856 [2024-07-14 10:28:13.670467] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.856 [2024-07-14 10:28:13.684219] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.856 [2024-07-14 10:28:13.684245] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.856 [2024-07-14 10:28:13.697763] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.856 [2024-07-14 10:28:13.697783] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.856 [2024-07-14 10:28:13.711701] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.856 [2024-07-14 10:28:13.711720] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.856 [2024-07-14 10:28:13.724985] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.856 [2024-07-14 10:28:13.725004] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.856 [2024-07-14 10:28:13.733947] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.856 [2024-07-14 10:28:13.733966] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.856 [2024-07-14 10:28:13.743024] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.856 [2024-07-14 10:28:13.743043] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.856 [2024-07-14 10:28:13.757621] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.856 [2024-07-14 10:28:13.757640] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.856 [2024-07-14 10:28:13.768410] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.856 [2024-07-14 10:28:13.768429] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.856 [2024-07-14 10:28:13.777129] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.856 [2024-07-14 10:28:13.777148] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.856 [2024-07-14 10:28:13.785680] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.856 [2024-07-14 10:28:13.785698] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.856 [2024-07-14 10:28:13.795640] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.856 [2024-07-14 10:28:13.795659] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.856 [2024-07-14 10:28:13.809974] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.856 [2024-07-14 10:28:13.809993] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.856 [2024-07-14 10:28:13.823840] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.856 [2024-07-14 10:28:13.823858] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.856 [2024-07-14 10:28:13.837510] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.856 [2024-07-14 10:28:13.837529] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.116 [2024-07-14 10:28:13.851290] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.116 [2024-07-14 10:28:13.851310] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.116 [2024-07-14 10:28:13.860199] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.116 [2024-07-14 10:28:13.860218] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.116 [2024-07-14 10:28:13.874913] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.116 [2024-07-14 10:28:13.874931] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.116 [2024-07-14 10:28:13.890639] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.116 [2024-07-14 10:28:13.890657] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.116 [2024-07-14 10:28:13.904795] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.116 [2024-07-14 10:28:13.904813] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.116 [2024-07-14 10:28:13.913584] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.116 [2024-07-14 10:28:13.913602] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.116 [2024-07-14 10:28:13.927976] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.116 [2024-07-14 10:28:13.927995] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.116 [2024-07-14 10:28:13.936782] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.116 [2024-07-14 10:28:13.936801] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.116 [2024-07-14 10:28:13.951159] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.116 [2024-07-14 10:28:13.951177] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.116 [2024-07-14 10:28:13.964895] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.116 [2024-07-14 10:28:13.964914] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.116 [2024-07-14 10:28:13.978697] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.116 [2024-07-14 10:28:13.978716] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.116 [2024-07-14 10:28:13.992534] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.116 [2024-07-14 10:28:13.992551] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.116 [2024-07-14 10:28:14.006491] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.116 [2024-07-14 10:28:14.006509] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.116 [2024-07-14 10:28:14.020154] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.116 [2024-07-14 10:28:14.020173] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.116 [2024-07-14 10:28:14.034014] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.116 [2024-07-14 10:28:14.034033] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.116 [2024-07-14 10:28:14.042960] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.116 [2024-07-14 10:28:14.042978] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.116 [2024-07-14 10:28:14.052069] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.116 [2024-07-14 10:28:14.052087] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.116 [2024-07-14 10:28:14.066384] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.116 [2024-07-14 10:28:14.066402] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.116 [2024-07-14 10:28:14.080182] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.116 [2024-07-14 10:28:14.080201] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.116 [2024-07-14 10:28:14.094495] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.116 [2024-07-14 10:28:14.094514] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.376 [2024-07-14 10:28:14.101905] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.376 [2024-07-14 10:28:14.101924] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.376 [2024-07-14 10:28:14.115623] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.376 [2024-07-14 10:28:14.115642] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.376 [2024-07-14 10:28:14.129691] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.376 [2024-07-14 10:28:14.129710] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.376 [2024-07-14 10:28:14.143426] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.376 [2024-07-14 10:28:14.143445] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.376 [2024-07-14 10:28:14.157560] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.376 [2024-07-14 10:28:14.157579] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.376 [2024-07-14 10:28:14.166328] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.376 [2024-07-14 10:28:14.166347] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.376 [2024-07-14 10:28:14.175766] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.376 [2024-07-14 10:28:14.175784] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.376 [2024-07-14 10:28:14.184942] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.376 [2024-07-14 10:28:14.184961] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.376 [2024-07-14 10:28:14.199271] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.376 [2024-07-14 10:28:14.199289] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.376 [2024-07-14 10:28:14.208137] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.376 [2024-07-14 10:28:14.208155] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.376 [2024-07-14 10:28:14.222207] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.376 [2024-07-14 10:28:14.222233] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.376 [2024-07-14 10:28:14.236142] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.376 [2024-07-14 10:28:14.236161] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.376 [2024-07-14 10:28:14.250219] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.376 [2024-07-14 10:28:14.250243] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.376 [2024-07-14 10:28:14.261271] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.376 [2024-07-14 10:28:14.261290] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.376 [2024-07-14 10:28:14.275502] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.376 [2024-07-14 10:28:14.275520] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.376 [2024-07-14 10:28:14.289125] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.376 [2024-07-14 10:28:14.289143] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.376 [2024-07-14 10:28:14.297887] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.376 [2024-07-14 10:28:14.297905] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.376 [2024-07-14 10:28:14.312155] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.376 [2024-07-14 10:28:14.312174] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.376 [2024-07-14 10:28:14.325866] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.376 [2024-07-14 10:28:14.325886] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.376 [2024-07-14 10:28:14.339862] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.376 [2024-07-14 10:28:14.339881] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.376 [2024-07-14 10:28:14.348607] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.376 [2024-07-14 10:28:14.348626] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.376 [2024-07-14 10:28:14.357366] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.376 [2024-07-14 10:28:14.357386] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.637 [2024-07-14 10:28:14.366844] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.637 [2024-07-14 10:28:14.366863] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.637 [2024-07-14 10:28:14.381394] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.637 [2024-07-14 10:28:14.381414] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.637 [2024-07-14 10:28:14.390455] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.637 [2024-07-14 10:28:14.390474] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.637 [2024-07-14 10:28:14.404661] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.637 [2024-07-14 10:28:14.404680] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.637 [2024-07-14 10:28:14.418242] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.637 [2024-07-14 10:28:14.418262] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.637 [2024-07-14 10:28:14.427328] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.637 [2024-07-14 10:28:14.427350] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.637 [2024-07-14 10:28:14.442337] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.637 [2024-07-14 10:28:14.442356] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.637 [2024-07-14 10:28:14.450120] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.637 [2024-07-14 10:28:14.450137] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.637 [2024-07-14 10:28:14.463594] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.637 [2024-07-14 10:28:14.463613] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.637 [2024-07-14 10:28:14.478095] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.637 [2024-07-14 10:28:14.478114] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.637 [2024-07-14 10:28:14.489129] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.637 [2024-07-14 10:28:14.489148] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.637 [2024-07-14 10:28:14.503264] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.637 [2024-07-14 10:28:14.503282] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.637 [2024-07-14 10:28:14.511992] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.637 [2024-07-14 10:28:14.512010] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.637 [2024-07-14 10:28:14.526344] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.637 [2024-07-14 10:28:14.526363] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.637 [2024-07-14 10:28:14.539711] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.637 [2024-07-14 10:28:14.539734] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.637 [2024-07-14 10:28:14.553578] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.637 [2024-07-14 10:28:14.553597] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.637 [2024-07-14 10:28:14.567850] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.637 [2024-07-14 10:28:14.567869] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.637 [2024-07-14 10:28:14.581834] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.637 [2024-07-14 10:28:14.581853] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.637 [2024-07-14 10:28:14.590582] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.637 [2024-07-14 10:28:14.590600] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.637 [2024-07-14 10:28:14.599415] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.637 [2024-07-14 10:28:14.599435] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.637 [2024-07-14 10:28:14.608108] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.637 [2024-07-14 10:28:14.608127] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.896 [2024-07-14 10:28:14.622803] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.896 [2024-07-14 10:28:14.622825] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.896 [2024-07-14 10:28:14.631671] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.896 [2024-07-14 10:28:14.631691] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.896 [2024-07-14 10:28:14.640925] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.896 [2024-07-14 10:28:14.640945] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.896 [2024-07-14 10:28:14.649152] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.896 [2024-07-14 10:28:14.649171] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.896 00:19:29.896 Latency(us) 00:19:29.896 Device Information : runtime(s) 
IOPS MiB/s Fail/s TO/s Average min max 00:19:29.896 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:19:29.896 Nvme1n1 : 5.01 16814.63 131.36 0.00 0.00 7604.60 3390.78 19375.86 00:19:29.896 =================================================================================================================== 00:19:29.896 Total : 16814.63 131.36 0.00 0.00 7604.60 3390.78 19375.86 00:19:29.896 [2024-07-14 10:28:14.659714] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.896 [2024-07-14 10:28:14.659733] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.896 [2024-07-14 10:28:14.671739] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.896 [2024-07-14 10:28:14.671753] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.896 [2024-07-14 10:28:14.683787] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.896 [2024-07-14 10:28:14.683807] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.896 [2024-07-14 10:28:14.695812] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.897 [2024-07-14 10:28:14.695828] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.897 [2024-07-14 10:28:14.707843] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.897 [2024-07-14 10:28:14.707858] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.897 [2024-07-14 10:28:14.719866] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.897 [2024-07-14 10:28:14.719887] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.897 [2024-07-14 10:28:14.739926] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.897 [2024-07-14 10:28:14.739947] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.897 [2024-07-14 10:28:14.751960] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.897 [2024-07-14 10:28:14.751976] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.897 [2024-07-14 10:28:14.763986] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.897 [2024-07-14 10:28:14.763997] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.897 [2024-07-14 10:28:14.776018] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.897 [2024-07-14 10:28:14.776031] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.897 [2024-07-14 10:28:14.788049] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.897 [2024-07-14 10:28:14.788060] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.897 [2024-07-14 10:28:14.800084] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.897 [2024-07-14 10:28:14.800095] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.897 [2024-07-14 10:28:14.812119] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.897 [2024-07-14 10:28:14.812132] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:19:29.897 [2024-07-14 10:28:14.824152] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:29.897 [2024-07-14 10:28:14.824166] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.897 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2397419) - No such process 00:19:29.897 10:28:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2397419 00:19:29.897 10:28:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:29.897 10:28:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.897 10:28:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:29.897 10:28:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.897 10:28:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:19:29.897 10:28:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.897 10:28:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:29.897 delay0 00:19:29.897 10:28:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.897 10:28:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:19:29.897 10:28:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.897 10:28:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:29.897 10:28:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.897 10:28:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:19:30.156 EAL: No free 2048 kB hugepages reported on node 1 00:19:30.156 [2024-07-14 10:28:14.999438] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:19:36.768 Initializing NVMe Controllers 00:19:36.768 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:36.768 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:36.768 Initialization complete. Launching workers. 
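For reference, the namespace swap and abort run traced above correspond to the following standalone commands (a sketch run from the SPDK repo root, assuming the default /var/tmp/spdk.sock RPC socket and that the malloc0 bdev already exists from the earlier part of zcopy.sh):

    # swap the malloc-backed namespace for a delay bdev so the abort tool has long-lived inflight I/O to cancel
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1

    # 5 seconds of randrw (50% reads) at queue depth 64 against the TCP listener, submitting aborts as it goes
    build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'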
00:19:36.768 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 1244 00:19:36.768 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1531, failed to submit 33 00:19:36.768 success 1358, unsuccess 173, failed 0 00:19:36.768 10:28:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:19:36.768 10:28:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:19:36.768 10:28:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:36.768 10:28:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:19:36.768 10:28:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:36.768 10:28:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:19:36.768 10:28:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:36.768 10:28:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:36.768 rmmod nvme_tcp 00:19:36.768 rmmod nvme_fabrics 00:19:36.768 rmmod nvme_keyring 00:19:36.768 10:28:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:36.768 10:28:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:19:36.768 10:28:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:19:36.768 10:28:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 2395644 ']' 00:19:36.768 10:28:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 2395644 00:19:36.768 10:28:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 2395644 ']' 00:19:36.768 10:28:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 2395644 00:19:36.768 10:28:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:19:36.768 10:28:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:36.768 10:28:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2395644 00:19:36.768 10:28:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:36.768 10:28:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:36.768 10:28:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2395644' 00:19:36.768 killing process with pid 2395644 00:19:36.768 10:28:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 2395644 00:19:36.768 10:28:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 2395644 00:19:36.768 10:28:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:36.768 10:28:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:36.768 10:28:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:36.768 10:28:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:36.768 10:28:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:36.768 10:28:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:36.768 10:28:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:36.768 10:28:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:38.674 10:28:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:38.674 00:19:38.674 real 0m31.109s 00:19:38.674 user 0m42.266s 00:19:38.674 sys 0m10.558s 00:19:38.674 10:28:23 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:19:38.674 10:28:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:38.674 ************************************ 00:19:38.674 END TEST nvmf_zcopy 00:19:38.674 ************************************ 00:19:38.674 10:28:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:38.674 10:28:23 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:19:38.674 10:28:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:38.674 10:28:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:38.674 10:28:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:38.674 ************************************ 00:19:38.674 START TEST nvmf_nmic 00:19:38.674 ************************************ 00:19:38.674 10:28:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:19:38.933 * Looking for test storage... 00:19:38.933 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:38.933 10:28:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:38.933 10:28:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:19:38.933 10:28:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:38.933 10:28:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:38.933 10:28:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:38.933 10:28:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:38.933 10:28:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:38.933 10:28:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:38.933 10:28:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:38.933 10:28:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:38.933 10:28:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:38.933 10:28:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:38.933 10:28:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:38.933 10:28:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:38.933 10:28:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:38.933 10:28:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:38.933 10:28:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:38.933 10:28:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:38.933 10:28:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:38.933 10:28:23 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:38.933 10:28:23 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:38.933 10:28:23 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:38.933 10:28:23 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:38.933 10:28:23 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:38.933 10:28:23 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:38.933 10:28:23 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:19:38.933 10:28:23 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:38.933 10:28:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:19:38.933 10:28:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:38.933 10:28:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:38.933 10:28:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:38.933 10:28:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:38.933 10:28:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:38.933 10:28:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:38.933 10:28:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:38.933 10:28:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:38.933 10:28:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:38.933 10:28:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:38.933 10:28:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:19:38.933 10:28:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:38.933 10:28:23 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:38.933 10:28:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:38.933 10:28:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:38.933 10:28:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:38.933 10:28:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:38.933 10:28:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:38.933 10:28:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:38.933 10:28:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:38.933 10:28:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:38.933 10:28:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:19:38.933 10:28:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:44.385 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:44.385 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:19:44.385 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:44.385 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:44.385 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:44.385 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:44.385 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:44.385 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:19:44.385 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:44.385 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:19:44.385 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:19:44.385 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:19:44.385 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:19:44.385 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:19:44.385 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:19:44.385 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:44.385 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:44.385 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:44.385 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:44.385 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:44.385 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:44.385 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:44.385 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:44.385 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:44.385 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:44.385 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:44.385 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:19:44.385 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:44.385 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:44.385 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:44.385 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:44.385 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:44.385 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:44.385 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:44.385 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:44.385 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:44.385 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:44.385 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:44.385 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:44.385 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:44.385 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:44.385 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:44.385 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:44.385 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:44.385 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:44.385 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:44.385 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:44.385 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:44.385 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:44.385 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:44.385 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:44.385 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:44.385 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:44.385 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:44.385 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:44.385 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:44.385 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:44.385 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:44.385 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:44.385 Found net devices under 0000:86:00.0: cvl_0_0 00:19:44.385 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:44.385 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:44.385 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:44.385 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:44.385 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:44.385 10:28:29 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:19:44.385 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:44.385 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:44.385 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:44.385 Found net devices under 0000:86:00.1: cvl_0_1 00:19:44.385 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:44.385 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:44.385 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:19:44.385 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:44.385 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:44.385 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:44.385 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:44.385 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:44.385 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:44.385 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:44.386 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:44.386 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:44.386 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:44.386 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:44.386 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:44.386 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:44.386 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:44.386 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:44.386 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:44.644 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:44.644 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:44.644 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:44.644 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:44.644 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:44.644 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:44.644 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:44.644 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:44.644 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.291 ms 00:19:44.644 00:19:44.644 --- 10.0.0.2 ping statistics --- 00:19:44.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:44.644 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:19:44.644 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:44.644 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
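The nvmf_tcp_init steps traced above isolate one port of the NIC in a network namespace so that target (10.0.0.2) and initiator (10.0.0.1) can exercise real hardware on a single host. Condensed, and assuming the cvl_0_0/cvl_0_1 interface names of this rig, the plumbing is:

    # move the target-side port into its own namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # initiator keeps 10.0.0.1 on cvl_0_1, the namespaced target port gets 10.0.0.2
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # open the NVMe/TCP port and sanity-check reachability in both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1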
00:19:44.644 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:19:44.644 00:19:44.644 --- 10.0.0.1 ping statistics --- 00:19:44.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:44.644 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:19:44.644 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:44.644 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:19:44.644 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:44.644 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:44.644 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:44.644 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:44.644 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:44.644 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:44.644 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:44.644 10:28:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:19:44.644 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:44.644 10:28:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:44.644 10:28:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:44.644 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=2402845 00:19:44.644 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:44.644 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 2402845 00:19:44.644 10:28:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 2402845 ']' 00:19:44.644 10:28:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:44.644 10:28:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:44.644 10:28:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:44.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:44.644 10:28:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:44.644 10:28:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:44.644 [2024-07-14 10:28:29.601920] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:19:44.644 [2024-07-14 10:28:29.601967] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:44.903 EAL: No free 2048 kB hugepages reported on node 1 00:19:44.903 [2024-07-14 10:28:29.671982] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:44.903 [2024-07-14 10:28:29.715375] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:44.903 [2024-07-14 10:28:29.715414] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
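The nvmfappstart step above boils down to launching nvmf_tgt inside that namespace and waiting for its RPC socket to answer; roughly (core mask, shm id and tracepoint mask as in the trace; polling rpc_get_methods is just one way to wait, the harness uses its own waitforlisten helper):

    # -i 0: shared-memory id, -e 0xFFFF: tracepoint group mask, -m 0xF: run reactors on cores 0-3
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

    # wait for /var/tmp/spdk.sock to come up before sending configuration RPCs
    until ./scripts/rpc.py rpc_get_methods > /dev/null 2>&1; do sleep 0.5; done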
00:19:44.903 [2024-07-14 10:28:29.715421] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:44.903 [2024-07-14 10:28:29.715427] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:44.903 [2024-07-14 10:28:29.715433] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:44.903 [2024-07-14 10:28:29.715488] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:44.903 [2024-07-14 10:28:29.715597] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:44.903 [2024-07-14 10:28:29.715701] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:44.903 [2024-07-14 10:28:29.715703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:44.903 10:28:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:44.903 10:28:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:19:44.903 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:44.903 10:28:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:44.903 10:28:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:44.903 10:28:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:44.903 10:28:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:44.903 10:28:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.903 10:28:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:44.903 [2024-07-14 10:28:29.852317] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:44.903 10:28:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.903 10:28:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:44.903 10:28:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.903 10:28:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:44.903 Malloc0 00:19:44.903 10:28:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.903 10:28:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:44.903 10:28:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.903 10:28:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:45.162 10:28:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.162 10:28:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:45.162 10:28:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.162 10:28:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:45.162 10:28:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.162 10:28:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:45.162 10:28:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.162 10:28:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:45.162 [2024-07-14 10:28:29.904037] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:45.162 10:28:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.162 10:28:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:19:45.162 test case1: single bdev can't be used in multiple subsystems 00:19:45.162 10:28:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:19:45.162 10:28:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.162 10:28:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:45.162 10:28:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.162 10:28:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:19:45.162 10:28:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.162 10:28:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:45.162 10:28:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.162 10:28:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:19:45.162 10:28:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:19:45.162 10:28:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.162 10:28:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:45.162 [2024-07-14 10:28:29.927962] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:19:45.162 [2024-07-14 10:28:29.927982] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:19:45.162 [2024-07-14 10:28:29.927989] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:45.162 request: 00:19:45.162 { 00:19:45.162 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:19:45.162 "namespace": { 00:19:45.162 "bdev_name": "Malloc0", 00:19:45.162 "no_auto_visible": false 00:19:45.162 }, 00:19:45.162 "method": "nvmf_subsystem_add_ns", 00:19:45.162 "req_id": 1 00:19:45.162 } 00:19:45.162 Got JSON-RPC error response 00:19:45.162 response: 00:19:45.162 { 00:19:45.162 "code": -32602, 00:19:45.162 "message": "Invalid parameters" 00:19:45.162 } 00:19:45.162 10:28:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:45.162 10:28:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:19:45.162 10:28:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:19:45.162 10:28:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:19:45.162 Adding namespace failed - expected result. 
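Test case 1 can be replayed by hand; the point being exercised is that adding a namespace claims the bdev exclusively, so attaching the same bdev to a second subsystem has to fail. A sketch with the same names and options as the trace (rpc.py path and default RPC socket assumed):

    # transport plus first subsystem: Malloc0 becomes namespace 1 of cnode1
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # second subsystem on the same listener
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420

    # expected failure: Malloc0 is already claimed (exclusive_write) by cnode1, so the RPC
    # returns -32602 "Invalid parameters", matching the JSON-RPC error shown above
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0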
00:19:45.162 10:28:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:19:45.162 test case2: host connect to nvmf target in multiple paths 00:19:45.162 10:28:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:45.162 10:28:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.162 10:28:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:45.162 [2024-07-14 10:28:29.940087] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:45.162 10:28:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.162 10:28:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:46.099 10:28:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:19:47.474 10:28:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:19:47.474 10:28:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:19:47.474 10:28:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:47.474 10:28:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:19:47.474 10:28:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:19:49.393 10:28:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:49.393 10:28:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:49.393 10:28:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:19:49.394 10:28:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:49.394 10:28:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:49.394 10:28:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:19:49.394 10:28:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:49.394 [global] 00:19:49.394 thread=1 00:19:49.394 invalidate=1 00:19:49.394 rw=write 00:19:49.394 time_based=1 00:19:49.394 runtime=1 00:19:49.394 ioengine=libaio 00:19:49.394 direct=1 00:19:49.394 bs=4096 00:19:49.394 iodepth=1 00:19:49.394 norandommap=0 00:19:49.394 numjobs=1 00:19:49.394 00:19:49.394 verify_dump=1 00:19:49.394 verify_backlog=512 00:19:49.394 verify_state_save=0 00:19:49.394 do_verify=1 00:19:49.394 verify=crc32c-intel 00:19:49.394 [job0] 00:19:49.394 filename=/dev/nvme0n1 00:19:49.394 Could not set queue depth (nvme0n1) 00:19:49.659 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:49.659 fio-3.35 00:19:49.659 Starting 1 thread 00:19:51.036 00:19:51.036 job0: (groupid=0, jobs=1): err= 0: pid=2403701: Sun Jul 14 10:28:35 2024 00:19:51.036 read: IOPS=22, BW=89.4KiB/s (91.6kB/s)(92.0KiB/1029msec) 00:19:51.036 slat (nsec): min=10246, max=22204, avg=20726.61, stdev=2310.95 
00:19:51.036 clat (usec): min=40887, max=41409, avg=40982.45, stdev=105.01 00:19:51.036 lat (usec): min=40909, max=41419, avg=41003.18, stdev=102.90 00:19:51.036 clat percentiles (usec): 00:19:51.036 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:19:51.036 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:51.036 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:19:51.036 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:51.036 | 99.99th=[41157] 00:19:51.036 write: IOPS=497, BW=1990KiB/s (2038kB/s)(2048KiB/1029msec); 0 zone resets 00:19:51.036 slat (nsec): min=9953, max=42797, avg=11344.12, stdev=2167.27 00:19:51.036 clat (usec): min=124, max=304, avg=152.78, stdev= 9.89 00:19:51.036 lat (usec): min=141, max=343, avg=164.12, stdev=10.63 00:19:51.036 clat percentiles (usec): 00:19:51.036 | 1.00th=[ 139], 5.00th=[ 145], 10.00th=[ 147], 20.00th=[ 149], 00:19:51.036 | 30.00th=[ 151], 40.00th=[ 151], 50.00th=[ 153], 60.00th=[ 153], 00:19:51.036 | 70.00th=[ 155], 80.00th=[ 157], 90.00th=[ 159], 95.00th=[ 161], 00:19:51.036 | 99.00th=[ 167], 99.50th=[ 188], 99.90th=[ 306], 99.95th=[ 306], 00:19:51.036 | 99.99th=[ 306] 00:19:51.036 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:19:51.036 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:51.036 lat (usec) : 250=95.33%, 500=0.37% 00:19:51.036 lat (msec) : 50=4.30% 00:19:51.036 cpu : usr=0.00%, sys=1.36%, ctx=535, majf=0, minf=2 00:19:51.036 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:51.036 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.036 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.036 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.036 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:51.036 00:19:51.036 Run status group 0 (all jobs): 00:19:51.036 READ: bw=89.4KiB/s (91.6kB/s), 89.4KiB/s-89.4KiB/s (91.6kB/s-91.6kB/s), io=92.0KiB (94.2kB), run=1029-1029msec 00:19:51.036 WRITE: bw=1990KiB/s (2038kB/s), 1990KiB/s-1990KiB/s (2038kB/s-2038kB/s), io=2048KiB (2097kB), run=1029-1029msec 00:19:51.036 00:19:51.036 Disk stats (read/write): 00:19:51.036 nvme0n1: ios=69/512, merge=0/0, ticks=793/73, in_queue=866, util=91.38% 00:19:51.036 10:28:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:51.036 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:19:51.036 10:28:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:51.036 10:28:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:19:51.036 10:28:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:51.036 10:28:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:51.036 10:28:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:51.036 10:28:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:51.036 10:28:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:19:51.036 10:28:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:19:51.036 10:28:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:19:51.036 10:28:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:19:51.036 10:28:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:19:51.036 10:28:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:51.036 10:28:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:19:51.036 10:28:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:51.036 10:28:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:51.036 rmmod nvme_tcp 00:19:51.036 rmmod nvme_fabrics 00:19:51.036 rmmod nvme_keyring 00:19:51.036 10:28:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:51.036 10:28:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:19:51.036 10:28:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:19:51.036 10:28:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 2402845 ']' 00:19:51.036 10:28:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 2402845 00:19:51.036 10:28:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 2402845 ']' 00:19:51.036 10:28:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 2402845 00:19:51.036 10:28:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:19:51.036 10:28:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:51.036 10:28:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2402845 00:19:51.036 10:28:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:51.036 10:28:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:51.036 10:28:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2402845' 00:19:51.036 killing process with pid 2402845 00:19:51.036 10:28:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 2402845 00:19:51.036 10:28:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 2402845 00:19:51.296 10:28:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:51.296 10:28:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:51.296 10:28:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:51.296 10:28:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:51.296 10:28:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:51.296 10:28:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:51.296 10:28:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:51.296 10:28:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:53.204 10:28:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:53.204 00:19:53.204 real 0m14.541s 00:19:53.204 user 0m32.295s 00:19:53.204 sys 0m5.058s 00:19:53.204 10:28:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:53.204 10:28:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:53.204 ************************************ 00:19:53.204 END TEST nvmf_nmic 00:19:53.204 ************************************ 00:19:53.463 10:28:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:53.463 10:28:38 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:19:53.463 10:28:38 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:53.463 10:28:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:53.463 10:28:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:53.463 ************************************ 00:19:53.463 START TEST nvmf_fio_target 00:19:53.463 ************************************ 00:19:53.463 10:28:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:19:53.463 * Looking for test storage... 00:19:53.463 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:53.463 10:28:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:53.463 10:28:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:19:53.463 10:28:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:53.463 10:28:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:53.463 10:28:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:53.463 10:28:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:53.463 10:28:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:53.463 10:28:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:53.463 10:28:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:53.463 10:28:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:53.463 10:28:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:53.463 10:28:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:53.463 10:28:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:53.463 10:28:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:53.463 10:28:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:53.463 10:28:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:53.463 10:28:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:53.463 10:28:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:53.463 10:28:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:53.463 10:28:38 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:53.463 10:28:38 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:53.463 10:28:38 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:53.463 10:28:38 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.463 10:28:38 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.464 10:28:38 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.464 10:28:38 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:19:53.464 10:28:38 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.464 10:28:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:19:53.464 10:28:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:53.464 10:28:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:53.464 10:28:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:53.464 10:28:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:53.464 10:28:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:53.464 10:28:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:53.464 10:28:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:53.464 10:28:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:53.464 10:28:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:53.464 10:28:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:53.464 10:28:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:53.464 10:28:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:19:53.464 10:28:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:53.464 10:28:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:53.464 10:28:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:53.464 10:28:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:53.464 10:28:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:53.464 10:28:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:53.464 10:28:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:53.464 10:28:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:53.464 10:28:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:53.464 10:28:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:53.464 10:28:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:19:53.464 10:28:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.092 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:00.092 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:20:00.092 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:00.092 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:00.092 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:00.092 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:00.092 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:00.092 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:20:00.092 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:00.092 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:20:00.092 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:20:00.092 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:20:00.092 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:20:00.092 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:20:00.092 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:20:00.092 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:00.092 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:00.092 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:00.092 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:00.092 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:00.092 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:00.092 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:00.092 10:28:43 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:00.092 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:00.092 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:00.092 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:00.092 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:00.092 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:00.092 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:00.092 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:00.092 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:00.092 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:00.092 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:00.092 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:00.092 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:00.092 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:00.092 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:00.092 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:00.092 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:00.092 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:00.092 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:00.092 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:00.092 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:00.092 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:00.092 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:00.092 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:00.092 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:00.092 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:00.092 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:00.092 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:00.092 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:00.092 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:00.092 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:00.092 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:00.092 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:00.092 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:00.092 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:00.092 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:00.092 10:28:43 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:00.092 Found net devices under 0000:86:00.0: cvl_0_0 00:20:00.092 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:00.092 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:00.092 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:00.092 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:00.092 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:00.092 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:00.093 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:00.093 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:00.093 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:00.093 Found net devices under 0000:86:00.1: cvl_0_1 00:20:00.093 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:00.093 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:00.093 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:20:00.093 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:00.093 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:00.093 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:00.093 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:00.093 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:00.093 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:00.093 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:00.093 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:00.093 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:00.093 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:00.093 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:00.093 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:00.093 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:00.093 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:00.093 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:00.093 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:00.093 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:00.093 10:28:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:00.093 10:28:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:00.093 10:28:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:20:00.093 10:28:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:00.093 10:28:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:00.093 10:28:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:00.093 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:00.093 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:20:00.093 00:20:00.093 --- 10.0.0.2 ping statistics --- 00:20:00.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:00.093 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:20:00.093 10:28:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:00.093 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:00.093 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:20:00.093 00:20:00.093 --- 10.0.0.1 ping statistics --- 00:20:00.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:00.093 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:20:00.093 10:28:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:00.093 10:28:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:20:00.093 10:28:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:00.093 10:28:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:00.093 10:28:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:00.093 10:28:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:00.093 10:28:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:00.093 10:28:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:00.093 10:28:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:00.093 10:28:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:20:00.093 10:28:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:00.093 10:28:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:00.093 10:28:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.093 10:28:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=2407454 00:20:00.093 10:28:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:00.093 10:28:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 2407454 00:20:00.093 10:28:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 2407454 ']' 00:20:00.093 10:28:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:00.093 10:28:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:00.093 10:28:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:00.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
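For reference, the nvmf_tcp_init plumbing traced above boils down to the following command sequence (a condensed restatement of what the trace runs; the cvl_0_0/cvl_0_1 interface names and the 10.0.0.1/10.0.0.2 addresses are whatever this particular run detected, not fixed values):

  # target-side port is isolated in its own network namespace
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address on the host
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address in the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP (port 4420) in
  ping -c 1 10.0.0.2                                                   # host -> target reachability
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> host reachability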
00:20:00.093 10:28:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:00.093 10:28:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.093 [2024-07-14 10:28:44.204357] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:20:00.093 [2024-07-14 10:28:44.204403] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:00.093 EAL: No free 2048 kB hugepages reported on node 1 00:20:00.093 [2024-07-14 10:28:44.275961] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:00.093 [2024-07-14 10:28:44.315641] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:00.093 [2024-07-14 10:28:44.315684] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:00.093 [2024-07-14 10:28:44.315690] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:00.093 [2024-07-14 10:28:44.315696] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:00.093 [2024-07-14 10:28:44.315702] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:00.093 [2024-07-14 10:28:44.315822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:00.093 [2024-07-14 10:28:44.315932] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:00.093 [2024-07-14 10:28:44.316040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:00.093 [2024-07-14 10:28:44.316041] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:00.093 10:28:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:00.093 10:28:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:20:00.093 10:28:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:00.093 10:28:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:00.093 10:28:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.093 10:28:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:00.093 10:28:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:00.352 [2024-07-14 10:28:45.205748] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:00.352 10:28:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:00.611 10:28:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:20:00.611 10:28:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:00.870 10:28:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:20:00.870 10:28:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:00.870 10:28:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 
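The target/fio.sh trace that follows drives a fixed RPC sequence against the nvmf target started above. Condensed, with the rpc.py path shortened and using the same values the log shows (64 MB malloc bdevs with 512-byte blocks from MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE, subsystem nqn.2016-06.io.spdk:cnode1, listener 10.0.0.2:4420), the sequence is roughly:

  # condensed restatement of the target/fio.sh RPC trace below
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512                                     # repeated: Malloc0 .. Malloc6
  rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
  rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
  nvme connect --hostnqn=<host nqn> --hostid=<host id> -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  # after the connect, fio-wrapper runs against /dev/nvme0n1 .. /dev/nvme0n4

The <host nqn>/<host id> placeholders stand for the per-host UUID values visible in the trace; everything else is copied from the logged commands.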
00:20:00.870 10:28:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:01.128 10:28:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:20:01.128 10:28:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:20:01.387 10:28:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:01.645 10:28:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:20:01.645 10:28:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:01.645 10:28:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:20:01.646 10:28:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:01.904 10:28:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:20:01.904 10:28:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:20:02.163 10:28:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:20:02.422 10:28:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:20:02.422 10:28:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:02.422 10:28:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:20:02.422 10:28:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:02.680 10:28:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:02.939 [2024-07-14 10:28:47.679838] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:02.939 10:28:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:20:02.939 10:28:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:20:03.198 10:28:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:04.575 10:28:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:20:04.575 10:28:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:20:04.575 10:28:49 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:20:04.575 10:28:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:20:04.575 10:28:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:20:04.575 10:28:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:20:06.478 10:28:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:20:06.478 10:28:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:20:06.478 10:28:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:20:06.478 10:28:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:20:06.478 10:28:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:20:06.478 10:28:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:20:06.478 10:28:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:20:06.478 [global] 00:20:06.478 thread=1 00:20:06.478 invalidate=1 00:20:06.478 rw=write 00:20:06.478 time_based=1 00:20:06.478 runtime=1 00:20:06.478 ioengine=libaio 00:20:06.478 direct=1 00:20:06.478 bs=4096 00:20:06.478 iodepth=1 00:20:06.478 norandommap=0 00:20:06.478 numjobs=1 00:20:06.478 00:20:06.478 verify_dump=1 00:20:06.478 verify_backlog=512 00:20:06.478 verify_state_save=0 00:20:06.478 do_verify=1 00:20:06.478 verify=crc32c-intel 00:20:06.478 [job0] 00:20:06.478 filename=/dev/nvme0n1 00:20:06.478 [job1] 00:20:06.478 filename=/dev/nvme0n2 00:20:06.478 [job2] 00:20:06.478 filename=/dev/nvme0n3 00:20:06.478 [job3] 00:20:06.478 filename=/dev/nvme0n4 00:20:06.478 Could not set queue depth (nvme0n1) 00:20:06.478 Could not set queue depth (nvme0n2) 00:20:06.478 Could not set queue depth (nvme0n3) 00:20:06.478 Could not set queue depth (nvme0n4) 00:20:06.737 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:06.737 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:06.737 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:06.737 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:06.737 fio-3.35 00:20:06.737 Starting 4 threads 00:20:08.115 00:20:08.115 job0: (groupid=0, jobs=1): err= 0: pid=2408811: Sun Jul 14 10:28:52 2024 00:20:08.115 read: IOPS=49, BW=196KiB/s (201kB/s)(204KiB/1039msec) 00:20:08.115 slat (nsec): min=6614, max=25368, avg=14077.02, stdev=7489.19 00:20:08.115 clat (usec): min=221, max=42111, avg=18732.21, stdev=20453.91 00:20:08.115 lat (usec): min=229, max=42136, avg=18746.28, stdev=20460.71 00:20:08.115 clat percentiles (usec): 00:20:08.115 | 1.00th=[ 223], 5.00th=[ 253], 10.00th=[ 269], 20.00th=[ 310], 00:20:08.115 | 30.00th=[ 449], 40.00th=[ 465], 50.00th=[ 502], 60.00th=[41157], 00:20:08.115 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:20:08.115 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:20:08.115 | 99.99th=[42206] 00:20:08.115 write: IOPS=492, BW=1971KiB/s (2018kB/s)(2048KiB/1039msec); 0 zone resets 00:20:08.115 slat (nsec): min=3999, max=20748, avg=8672.24, stdev=2027.42 00:20:08.115 
clat (usec): min=125, max=253, avg=149.71, stdev=11.32 00:20:08.115 lat (usec): min=130, max=274, avg=158.38, stdev=12.11 00:20:08.115 clat percentiles (usec): 00:20:08.115 | 1.00th=[ 128], 5.00th=[ 135], 10.00th=[ 137], 20.00th=[ 141], 00:20:08.115 | 30.00th=[ 145], 40.00th=[ 147], 50.00th=[ 149], 60.00th=[ 151], 00:20:08.115 | 70.00th=[ 153], 80.00th=[ 157], 90.00th=[ 163], 95.00th=[ 172], 00:20:08.115 | 99.00th=[ 180], 99.50th=[ 186], 99.90th=[ 253], 99.95th=[ 253], 00:20:08.115 | 99.99th=[ 253] 00:20:08.115 bw ( KiB/s): min= 4096, max= 4096, per=16.62%, avg=4096.00, stdev= 0.00, samples=1 00:20:08.115 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:20:08.115 lat (usec) : 250=91.12%, 500=4.26%, 750=0.53% 00:20:08.115 lat (msec) : 50=4.09% 00:20:08.115 cpu : usr=0.19%, sys=0.48%, ctx=563, majf=0, minf=2 00:20:08.115 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:08.115 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:08.115 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:08.115 issued rwts: total=51,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:08.115 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:08.116 job1: (groupid=0, jobs=1): err= 0: pid=2408820: Sun Jul 14 10:28:52 2024 00:20:08.116 read: IOPS=1611, BW=6446KiB/s (6600kB/s)(6452KiB/1001msec) 00:20:08.116 slat (nsec): min=6319, max=27674, avg=7642.85, stdev=1858.57 00:20:08.116 clat (usec): min=186, max=40686, avg=338.36, stdev=1008.57 00:20:08.116 lat (usec): min=193, max=40694, avg=346.01, stdev=1008.59 00:20:08.116 clat percentiles (usec): 00:20:08.116 | 1.00th=[ 206], 5.00th=[ 229], 10.00th=[ 237], 20.00th=[ 253], 00:20:08.116 | 30.00th=[ 273], 40.00th=[ 281], 50.00th=[ 289], 60.00th=[ 302], 00:20:08.116 | 70.00th=[ 326], 80.00th=[ 343], 90.00th=[ 469], 95.00th=[ 490], 00:20:08.116 | 99.00th=[ 627], 99.50th=[ 644], 99.90th=[ 717], 99.95th=[40633], 00:20:08.116 | 99.99th=[40633] 00:20:08.116 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:20:08.116 slat (usec): min=6, max=33991, avg=26.87, stdev=750.89 00:20:08.116 clat (usec): min=115, max=404, avg=185.16, stdev=43.96 00:20:08.116 lat (usec): min=125, max=34395, avg=212.03, stdev=756.99 00:20:08.116 clat percentiles (usec): 00:20:08.116 | 1.00th=[ 124], 5.00th=[ 131], 10.00th=[ 137], 20.00th=[ 145], 00:20:08.116 | 30.00th=[ 153], 40.00th=[ 163], 50.00th=[ 178], 60.00th=[ 190], 00:20:08.116 | 70.00th=[ 206], 80.00th=[ 235], 90.00th=[ 249], 95.00th=[ 260], 00:20:08.116 | 99.00th=[ 293], 99.50th=[ 310], 99.90th=[ 322], 99.95th=[ 330], 00:20:08.116 | 99.99th=[ 404] 00:20:08.116 bw ( KiB/s): min= 8192, max= 8192, per=33.24%, avg=8192.00, stdev= 0.00, samples=1 00:20:08.116 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:20:08.116 lat (usec) : 250=58.21%, 500=40.26%, 750=1.50% 00:20:08.116 lat (msec) : 50=0.03% 00:20:08.116 cpu : usr=2.00%, sys=3.30%, ctx=3665, majf=0, minf=1 00:20:08.116 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:08.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:08.116 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:08.116 issued rwts: total=1613,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:08.116 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:08.116 job2: (groupid=0, jobs=1): err= 0: pid=2408833: Sun Jul 14 10:28:52 2024 00:20:08.116 read: IOPS=1026, BW=4108KiB/s 
(4206kB/s)(4124KiB/1004msec) 00:20:08.116 slat (nsec): min=6352, max=22795, avg=7387.11, stdev=1410.26 00:20:08.116 clat (usec): min=199, max=41956, avg=683.27, stdev=3790.98 00:20:08.116 lat (usec): min=206, max=41977, avg=690.66, stdev=3791.83 00:20:08.116 clat percentiles (usec): 00:20:08.116 | 1.00th=[ 210], 5.00th=[ 227], 10.00th=[ 243], 20.00th=[ 273], 00:20:08.116 | 30.00th=[ 285], 40.00th=[ 293], 50.00th=[ 310], 60.00th=[ 330], 00:20:08.116 | 70.00th=[ 338], 80.00th=[ 359], 90.00th=[ 478], 95.00th=[ 494], 00:20:08.116 | 99.00th=[ 529], 99.50th=[41157], 99.90th=[41157], 99.95th=[42206], 00:20:08.116 | 99.99th=[42206] 00:20:08.116 write: IOPS=1529, BW=6120KiB/s (6266kB/s)(6144KiB/1004msec); 0 zone resets 00:20:08.116 slat (nsec): min=4268, max=33370, avg=9970.07, stdev=1654.32 00:20:08.116 clat (usec): min=120, max=286, avg=175.96, stdev=32.60 00:20:08.116 lat (usec): min=130, max=296, avg=185.93, stdev=32.69 00:20:08.116 clat percentiles (usec): 00:20:08.116 | 1.00th=[ 128], 5.00th=[ 133], 10.00th=[ 139], 20.00th=[ 147], 00:20:08.116 | 30.00th=[ 153], 40.00th=[ 161], 50.00th=[ 174], 60.00th=[ 184], 00:20:08.116 | 70.00th=[ 192], 80.00th=[ 202], 90.00th=[ 225], 95.00th=[ 241], 00:20:08.116 | 99.00th=[ 262], 99.50th=[ 265], 99.90th=[ 277], 99.95th=[ 285], 00:20:08.116 | 99.99th=[ 285] 00:20:08.116 bw ( KiB/s): min= 4096, max= 8192, per=24.93%, avg=6144.00, stdev=2896.31, samples=2 00:20:08.116 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:20:08.116 lat (usec) : 250=63.15%, 500=35.18%, 750=1.32% 00:20:08.116 lat (msec) : 50=0.35% 00:20:08.116 cpu : usr=1.79%, sys=1.69%, ctx=2567, majf=0, minf=1 00:20:08.116 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:08.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:08.116 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:08.116 issued rwts: total=1031,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:08.116 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:08.116 job3: (groupid=0, jobs=1): err= 0: pid=2408838: Sun Jul 14 10:28:52 2024 00:20:08.116 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:20:08.116 slat (nsec): min=6300, max=58165, avg=7620.53, stdev=1505.37 00:20:08.116 clat (usec): min=190, max=547, avg=283.28, stdev=61.53 00:20:08.116 lat (usec): min=198, max=554, avg=290.90, stdev=61.66 00:20:08.116 clat percentiles (usec): 00:20:08.116 | 1.00th=[ 210], 5.00th=[ 223], 10.00th=[ 229], 20.00th=[ 237], 00:20:08.116 | 30.00th=[ 245], 40.00th=[ 253], 50.00th=[ 265], 60.00th=[ 281], 00:20:08.116 | 70.00th=[ 293], 80.00th=[ 318], 90.00th=[ 363], 95.00th=[ 429], 00:20:08.116 | 99.00th=[ 506], 99.50th=[ 515], 99.90th=[ 523], 99.95th=[ 545], 00:20:08.116 | 99.99th=[ 545] 00:20:08.116 write: IOPS=2303, BW=9215KiB/s (9436kB/s)(9224KiB/1001msec); 0 zone resets 00:20:08.116 slat (nsec): min=9340, max=61792, avg=11041.80, stdev=1749.16 00:20:08.116 clat (usec): min=112, max=300, avg=158.67, stdev=33.39 00:20:08.116 lat (usec): min=123, max=356, avg=169.71, stdev=33.68 00:20:08.116 clat percentiles (usec): 00:20:08.116 | 1.00th=[ 120], 5.00th=[ 123], 10.00th=[ 126], 20.00th=[ 131], 00:20:08.116 | 30.00th=[ 135], 40.00th=[ 139], 50.00th=[ 145], 60.00th=[ 163], 00:20:08.116 | 70.00th=[ 178], 80.00th=[ 188], 90.00th=[ 200], 95.00th=[ 229], 00:20:08.116 | 99.00th=[ 249], 99.50th=[ 265], 99.90th=[ 297], 99.95th=[ 297], 00:20:08.116 | 99.99th=[ 302] 00:20:08.116 bw ( KiB/s): min= 8288, max= 8288, per=33.63%, 
avg=8288.00, stdev= 0.00, samples=1 00:20:08.116 iops : min= 2072, max= 2072, avg=2072.00, stdev= 0.00, samples=1 00:20:08.116 lat (usec) : 250=69.80%, 500=29.56%, 750=0.64% 00:20:08.116 cpu : usr=2.00%, sys=4.60%, ctx=4355, majf=0, minf=1 00:20:08.116 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:08.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:08.116 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:08.116 issued rwts: total=2048,2306,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:08.116 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:08.116 00:20:08.116 Run status group 0 (all jobs): 00:20:08.116 READ: bw=17.8MiB/s (18.7MB/s), 196KiB/s-8184KiB/s (201kB/s-8380kB/s), io=18.5MiB (19.4MB), run=1001-1039msec 00:20:08.116 WRITE: bw=24.1MiB/s (25.2MB/s), 1971KiB/s-9215KiB/s (2018kB/s-9436kB/s), io=25.0MiB (26.2MB), run=1001-1039msec 00:20:08.116 00:20:08.116 Disk stats (read/write): 00:20:08.116 nvme0n1: ios=67/512, merge=0/0, ticks=758/73, in_queue=831, util=86.67% 00:20:08.116 nvme0n2: ios=1578/1536, merge=0/0, ticks=823/253, in_queue=1076, util=90.65% 00:20:08.116 nvme0n3: ios=1084/1536, merge=0/0, ticks=609/261, in_queue=870, util=94.69% 00:20:08.116 nvme0n4: ios=1699/2048, merge=0/0, ticks=536/302, in_queue=838, util=95.38% 00:20:08.116 10:28:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:20:08.116 [global] 00:20:08.116 thread=1 00:20:08.116 invalidate=1 00:20:08.116 rw=randwrite 00:20:08.116 time_based=1 00:20:08.116 runtime=1 00:20:08.116 ioengine=libaio 00:20:08.116 direct=1 00:20:08.116 bs=4096 00:20:08.116 iodepth=1 00:20:08.116 norandommap=0 00:20:08.116 numjobs=1 00:20:08.116 00:20:08.116 verify_dump=1 00:20:08.116 verify_backlog=512 00:20:08.116 verify_state_save=0 00:20:08.116 do_verify=1 00:20:08.116 verify=crc32c-intel 00:20:08.116 [job0] 00:20:08.116 filename=/dev/nvme0n1 00:20:08.116 [job1] 00:20:08.116 filename=/dev/nvme0n2 00:20:08.116 [job2] 00:20:08.116 filename=/dev/nvme0n3 00:20:08.116 [job3] 00:20:08.116 filename=/dev/nvme0n4 00:20:08.116 Could not set queue depth (nvme0n1) 00:20:08.116 Could not set queue depth (nvme0n2) 00:20:08.116 Could not set queue depth (nvme0n3) 00:20:08.116 Could not set queue depth (nvme0n4) 00:20:08.380 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:08.380 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:08.380 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:08.380 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:08.380 fio-3.35 00:20:08.380 Starting 4 threads 00:20:09.755 00:20:09.755 job0: (groupid=0, jobs=1): err= 0: pid=2409254: Sun Jul 14 10:28:54 2024 00:20:09.755 read: IOPS=504, BW=2019KiB/s (2068kB/s)(2096KiB/1038msec) 00:20:09.755 slat (nsec): min=7300, max=28132, avg=8842.38, stdev=2725.40 00:20:09.755 clat (usec): min=201, max=41060, avg=1561.03, stdev=7218.18 00:20:09.755 lat (usec): min=210, max=41078, avg=1569.87, stdev=7219.85 00:20:09.755 clat percentiles (usec): 00:20:09.755 | 1.00th=[ 208], 5.00th=[ 219], 10.00th=[ 225], 20.00th=[ 229], 00:20:09.755 | 30.00th=[ 235], 40.00th=[ 237], 50.00th=[ 241], 60.00th=[ 245], 00:20:09.755 | 70.00th=[ 
249], 80.00th=[ 253], 90.00th=[ 262], 95.00th=[ 269], 00:20:09.755 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:20:09.755 | 99.99th=[41157] 00:20:09.755 write: IOPS=986, BW=3946KiB/s (4041kB/s)(4096KiB/1038msec); 0 zone resets 00:20:09.755 slat (nsec): min=10483, max=39786, avg=12008.74, stdev=2298.67 00:20:09.755 clat (usec): min=124, max=371, avg=192.81, stdev=41.09 00:20:09.755 lat (usec): min=135, max=401, avg=204.82, stdev=41.45 00:20:09.755 clat percentiles (usec): 00:20:09.755 | 1.00th=[ 130], 5.00th=[ 141], 10.00th=[ 147], 20.00th=[ 153], 00:20:09.755 | 30.00th=[ 159], 40.00th=[ 167], 50.00th=[ 194], 60.00th=[ 208], 00:20:09.755 | 70.00th=[ 219], 80.00th=[ 231], 90.00th=[ 247], 95.00th=[ 265], 00:20:09.755 | 99.00th=[ 281], 99.50th=[ 297], 99.90th=[ 363], 99.95th=[ 371], 00:20:09.755 | 99.99th=[ 371] 00:20:09.755 bw ( KiB/s): min= 1856, max= 6336, per=15.97%, avg=4096.00, stdev=3167.84, samples=2 00:20:09.755 iops : min= 464, max= 1584, avg=1024.00, stdev=791.96, samples=2 00:20:09.755 lat (usec) : 250=84.82%, 500=14.08% 00:20:09.755 lat (msec) : 50=1.10% 00:20:09.755 cpu : usr=1.64%, sys=1.45%, ctx=1550, majf=0, minf=1 00:20:09.755 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:09.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:09.755 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:09.755 issued rwts: total=524,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:09.755 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:09.755 job1: (groupid=0, jobs=1): err= 0: pid=2409269: Sun Jul 14 10:28:54 2024 00:20:09.755 read: IOPS=1791, BW=7165KiB/s (7337kB/s)(7172KiB/1001msec) 00:20:09.755 slat (nsec): min=7086, max=22002, avg=8096.52, stdev=1155.83 00:20:09.755 clat (usec): min=210, max=40656, avg=326.24, stdev=956.15 00:20:09.755 lat (usec): min=217, max=40663, avg=334.33, stdev=956.15 00:20:09.755 clat percentiles (usec): 00:20:09.755 | 1.00th=[ 223], 5.00th=[ 233], 10.00th=[ 239], 20.00th=[ 247], 00:20:09.755 | 30.00th=[ 253], 40.00th=[ 262], 50.00th=[ 269], 60.00th=[ 285], 00:20:09.755 | 70.00th=[ 310], 80.00th=[ 363], 90.00th=[ 449], 95.00th=[ 478], 00:20:09.755 | 99.00th=[ 529], 99.50th=[ 545], 99.90th=[ 652], 99.95th=[40633], 00:20:09.755 | 99.99th=[40633] 00:20:09.755 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:20:09.755 slat (nsec): min=10387, max=43154, avg=11686.43, stdev=1699.17 00:20:09.755 clat (usec): min=127, max=1154, avg=177.65, stdev=41.96 00:20:09.755 lat (usec): min=138, max=1167, avg=189.34, stdev=42.06 00:20:09.755 clat percentiles (usec): 00:20:09.755 | 1.00th=[ 133], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 147], 00:20:09.755 | 30.00th=[ 153], 40.00th=[ 159], 50.00th=[ 165], 60.00th=[ 174], 00:20:09.755 | 70.00th=[ 190], 80.00th=[ 210], 90.00th=[ 235], 95.00th=[ 249], 00:20:09.755 | 99.00th=[ 281], 99.50th=[ 306], 99.90th=[ 318], 99.95th=[ 322], 00:20:09.755 | 99.99th=[ 1156] 00:20:09.755 bw ( KiB/s): min= 8192, max= 8192, per=31.94%, avg=8192.00, stdev= 0.00, samples=1 00:20:09.755 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:20:09.755 lat (usec) : 250=62.20%, 500=36.81%, 750=0.94% 00:20:09.755 lat (msec) : 2=0.03%, 50=0.03% 00:20:09.755 cpu : usr=3.00%, sys=6.40%, ctx=3842, majf=0, minf=1 00:20:09.755 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:09.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:09.755 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:09.755 issued rwts: total=1793,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:09.755 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:09.755 job2: (groupid=0, jobs=1): err= 0: pid=2409289: Sun Jul 14 10:28:54 2024 00:20:09.755 read: IOPS=1013, BW=4055KiB/s (4153kB/s)(4104KiB/1012msec) 00:20:09.755 slat (nsec): min=6650, max=39436, avg=8384.13, stdev=2081.26 00:20:09.755 clat (usec): min=211, max=41064, avg=686.94, stdev=4189.49 00:20:09.755 lat (usec): min=219, max=41087, avg=695.33, stdev=4190.76 00:20:09.755 clat percentiles (usec): 00:20:09.755 | 1.00th=[ 221], 5.00th=[ 231], 10.00th=[ 233], 20.00th=[ 239], 00:20:09.755 | 30.00th=[ 241], 40.00th=[ 245], 50.00th=[ 247], 60.00th=[ 251], 00:20:09.755 | 70.00th=[ 255], 80.00th=[ 260], 90.00th=[ 269], 95.00th=[ 281], 00:20:09.755 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:20:09.755 | 99.99th=[41157] 00:20:09.755 write: IOPS=1517, BW=6071KiB/s (6217kB/s)(6144KiB/1012msec); 0 zone resets 00:20:09.755 slat (nsec): min=9341, max=46250, avg=11694.62, stdev=2127.74 00:20:09.755 clat (usec): min=134, max=280, avg=176.56, stdev=17.38 00:20:09.755 lat (usec): min=147, max=294, avg=188.26, stdev=17.80 00:20:09.755 clat percentiles (usec): 00:20:09.755 | 1.00th=[ 151], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 165], 00:20:09.755 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 174], 60.00th=[ 178], 00:20:09.755 | 70.00th=[ 182], 80.00th=[ 186], 90.00th=[ 196], 95.00th=[ 202], 00:20:09.755 | 99.00th=[ 249], 99.50th=[ 269], 99.90th=[ 281], 99.95th=[ 281], 00:20:09.755 | 99.99th=[ 281] 00:20:09.755 bw ( KiB/s): min= 4096, max= 8192, per=23.95%, avg=6144.00, stdev=2896.31, samples=2 00:20:09.755 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:20:09.755 lat (usec) : 250=82.55%, 500=17.02% 00:20:09.755 lat (msec) : 50=0.43% 00:20:09.755 cpu : usr=2.27%, sys=3.36%, ctx=2564, majf=0, minf=2 00:20:09.755 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:09.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:09.755 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:09.755 issued rwts: total=1026,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:09.755 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:09.755 job3: (groupid=0, jobs=1): err= 0: pid=2409295: Sun Jul 14 10:28:54 2024 00:20:09.755 read: IOPS=1643, BW=6573KiB/s (6731kB/s)(6580KiB/1001msec) 00:20:09.755 slat (nsec): min=7189, max=23807, avg=8002.60, stdev=1012.80 00:20:09.755 clat (usec): min=204, max=41069, avg=356.33, stdev=1415.44 00:20:09.755 lat (usec): min=212, max=41077, avg=364.33, stdev=1415.44 00:20:09.755 clat percentiles (usec): 00:20:09.755 | 1.00th=[ 215], 5.00th=[ 231], 10.00th=[ 237], 20.00th=[ 247], 00:20:09.755 | 30.00th=[ 253], 40.00th=[ 260], 50.00th=[ 269], 60.00th=[ 285], 00:20:09.755 | 70.00th=[ 338], 80.00th=[ 371], 90.00th=[ 449], 95.00th=[ 490], 00:20:09.755 | 99.00th=[ 529], 99.50th=[ 594], 99.90th=[40633], 99.95th=[41157], 00:20:09.755 | 99.99th=[41157] 00:20:09.755 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:20:09.755 slat (nsec): min=10122, max=41021, avg=11307.49, stdev=1778.92 00:20:09.755 clat (usec): min=118, max=371, avg=178.93, stdev=37.69 00:20:09.755 lat (usec): min=136, max=382, avg=190.24, stdev=37.86 00:20:09.755 clat percentiles (usec): 00:20:09.755 | 1.00th=[ 135], 5.00th=[ 141], 10.00th=[ 143], 
20.00th=[ 147], 00:20:09.755 | 30.00th=[ 153], 40.00th=[ 161], 50.00th=[ 167], 60.00th=[ 176], 00:20:09.755 | 70.00th=[ 192], 80.00th=[ 212], 90.00th=[ 241], 95.00th=[ 243], 00:20:09.755 | 99.00th=[ 285], 99.50th=[ 302], 99.90th=[ 363], 99.95th=[ 367], 00:20:09.755 | 99.99th=[ 371] 00:20:09.755 bw ( KiB/s): min= 8192, max= 8192, per=31.94%, avg=8192.00, stdev= 0.00, samples=1 00:20:09.755 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:20:09.755 lat (usec) : 250=64.91%, 500=33.71%, 750=1.33% 00:20:09.755 lat (msec) : 50=0.05% 00:20:09.755 cpu : usr=4.00%, sys=4.90%, ctx=3693, majf=0, minf=1 00:20:09.755 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:09.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:09.755 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:09.755 issued rwts: total=1645,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:09.755 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:09.755 00:20:09.755 Run status group 0 (all jobs): 00:20:09.755 READ: bw=18.8MiB/s (19.7MB/s), 2019KiB/s-7165KiB/s (2068kB/s-7337kB/s), io=19.5MiB (20.4MB), run=1001-1038msec 00:20:09.755 WRITE: bw=25.0MiB/s (26.3MB/s), 3946KiB/s-8184KiB/s (4041kB/s-8380kB/s), io=26.0MiB (27.3MB), run=1001-1038msec 00:20:09.755 00:20:09.755 Disk stats (read/write): 00:20:09.755 nvme0n1: ios=543/1024, merge=0/0, ticks=1589/183, in_queue=1772, util=97.70% 00:20:09.755 nvme0n2: ios=1560/1749, merge=0/0, ticks=1470/294, in_queue=1764, util=98.38% 00:20:09.755 nvme0n3: ios=1048/1155, merge=0/0, ticks=1604/184, in_queue=1788, util=97.81% 00:20:09.755 nvme0n4: ios=1473/1536, merge=0/0, ticks=524/256, in_queue=780, util=89.72% 00:20:09.755 10:28:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:20:09.755 [global] 00:20:09.755 thread=1 00:20:09.755 invalidate=1 00:20:09.755 rw=write 00:20:09.755 time_based=1 00:20:09.755 runtime=1 00:20:09.755 ioengine=libaio 00:20:09.755 direct=1 00:20:09.755 bs=4096 00:20:09.755 iodepth=128 00:20:09.755 norandommap=0 00:20:09.755 numjobs=1 00:20:09.755 00:20:09.755 verify_dump=1 00:20:09.755 verify_backlog=512 00:20:09.755 verify_state_save=0 00:20:09.755 do_verify=1 00:20:09.755 verify=crc32c-intel 00:20:09.755 [job0] 00:20:09.755 filename=/dev/nvme0n1 00:20:09.755 [job1] 00:20:09.755 filename=/dev/nvme0n2 00:20:09.755 [job2] 00:20:09.755 filename=/dev/nvme0n3 00:20:09.755 [job3] 00:20:09.755 filename=/dev/nvme0n4 00:20:09.755 Could not set queue depth (nvme0n1) 00:20:09.756 Could not set queue depth (nvme0n2) 00:20:09.756 Could not set queue depth (nvme0n3) 00:20:09.756 Could not set queue depth (nvme0n4) 00:20:10.013 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:10.013 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:10.013 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:10.013 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:10.013 fio-3.35 00:20:10.013 Starting 4 threads 00:20:11.391 00:20:11.391 job0: (groupid=0, jobs=1): err= 0: pid=2409701: Sun Jul 14 10:28:56 2024 00:20:11.391 read: IOPS=6796, BW=26.5MiB/s (27.8MB/s)(26.7MiB/1005msec) 00:20:11.391 slat (nsec): min=1155, max=12848k, avg=78459.79, 
stdev=595760.19 00:20:11.391 clat (usec): min=2614, max=26337, avg=9644.00, stdev=2643.84 00:20:11.391 lat (usec): min=3114, max=28946, avg=9722.46, stdev=2690.45 00:20:11.391 clat percentiles (usec): 00:20:11.391 | 1.00th=[ 4080], 5.00th=[ 7046], 10.00th=[ 7373], 20.00th=[ 7701], 00:20:11.391 | 30.00th=[ 8225], 40.00th=[ 8586], 50.00th=[ 9372], 60.00th=[ 9634], 00:20:11.391 | 70.00th=[ 9896], 80.00th=[10945], 90.00th=[13304], 95.00th=[15401], 00:20:11.391 | 99.00th=[18220], 99.50th=[21890], 99.90th=[21890], 99.95th=[21890], 00:20:11.391 | 99.99th=[26346] 00:20:11.391 write: IOPS=7132, BW=27.9MiB/s (29.2MB/s)(28.0MiB/1005msec); 0 zone resets 00:20:11.391 slat (usec): min=2, max=12554, avg=60.34, stdev=373.21 00:20:11.391 clat (usec): min=1410, max=26493, avg=8584.22, stdev=2538.33 00:20:11.391 lat (usec): min=1473, max=26518, avg=8644.56, stdev=2573.71 00:20:11.391 clat percentiles (usec): 00:20:11.391 | 1.00th=[ 2933], 5.00th=[ 4424], 10.00th=[ 5997], 20.00th=[ 7242], 00:20:11.391 | 30.00th=[ 7898], 40.00th=[ 8225], 50.00th=[ 8356], 60.00th=[ 8586], 00:20:11.391 | 70.00th=[ 9503], 80.00th=[10028], 90.00th=[10290], 95.00th=[10421], 00:20:11.391 | 99.00th=[18220], 99.50th=[23462], 99.90th=[23462], 99.95th=[23462], 00:20:11.391 | 99.99th=[26608] 00:20:11.391 bw ( KiB/s): min=24576, max=32768, per=43.15%, avg=28672.00, stdev=5792.62, samples=2 00:20:11.391 iops : min= 6144, max= 8192, avg=7168.00, stdev=1448.15, samples=2 00:20:11.391 lat (msec) : 2=0.05%, 4=2.49%, 10=72.41%, 20=24.25%, 50=0.81% 00:20:11.391 cpu : usr=4.88%, sys=5.58%, ctx=787, majf=0, minf=1 00:20:11.391 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:20:11.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.391 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:11.391 issued rwts: total=6830,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.391 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:11.391 job1: (groupid=0, jobs=1): err= 0: pid=2409725: Sun Jul 14 10:28:56 2024 00:20:11.391 read: IOPS=2801, BW=10.9MiB/s (11.5MB/s)(11.5MiB/1048msec) 00:20:11.391 slat (nsec): min=1100, max=30568k, avg=168860.74, stdev=1491307.33 00:20:11.391 clat (usec): min=3546, max=97315, avg=21888.94, stdev=20669.09 00:20:11.391 lat (msec): min=3, max=116, avg=22.06, stdev=20.84 00:20:11.391 clat percentiles (usec): 00:20:11.391 | 1.00th=[ 4113], 5.00th=[ 6652], 10.00th=[ 8979], 20.00th=[ 9896], 00:20:11.391 | 30.00th=[10159], 40.00th=[10552], 50.00th=[11731], 60.00th=[17957], 00:20:11.391 | 70.00th=[19530], 80.00th=[32900], 90.00th=[57934], 95.00th=[67634], 00:20:11.391 | 99.00th=[96994], 99.50th=[96994], 99.90th=[96994], 99.95th=[96994], 00:20:11.391 | 99.99th=[96994] 00:20:11.391 write: IOPS=2931, BW=11.5MiB/s (12.0MB/s)(12.0MiB/1048msec); 0 zone resets 00:20:11.391 slat (nsec): min=1829, max=23928k, avg=157504.94, stdev=1109690.41 00:20:11.391 clat (usec): min=514, max=85006, avg=21559.08, stdev=15240.97 00:20:11.391 lat (usec): min=523, max=85029, avg=21716.58, stdev=15372.77 00:20:11.391 clat percentiles (usec): 00:20:11.391 | 1.00th=[ 873], 5.00th=[ 7111], 10.00th=[ 9634], 20.00th=[10159], 00:20:11.391 | 30.00th=[10290], 40.00th=[10683], 50.00th=[16450], 60.00th=[18744], 00:20:11.391 | 70.00th=[28705], 80.00th=[32113], 90.00th=[45351], 95.00th=[58459], 00:20:11.391 | 99.00th=[61080], 99.50th=[61080], 99.90th=[72877], 99.95th=[80217], 00:20:11.391 | 99.99th=[85459] 00:20:11.391 bw ( KiB/s): min=11984, max=12592, per=18.49%, 
avg=12288.00, stdev=429.92, samples=2 00:20:11.391 iops : min= 2996, max= 3148, avg=3072.00, stdev=107.48, samples=2 00:20:11.391 lat (usec) : 750=0.25%, 1000=0.27% 00:20:11.391 lat (msec) : 2=0.43%, 4=1.63%, 10=17.31%, 20=48.15%, 50=21.64% 00:20:11.391 lat (msec) : 100=10.32% 00:20:11.391 cpu : usr=1.43%, sys=3.15%, ctx=259, majf=0, minf=1 00:20:11.391 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:20:11.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.391 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:11.391 issued rwts: total=2936,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.391 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:11.391 job2: (groupid=0, jobs=1): err= 0: pid=2409759: Sun Jul 14 10:28:56 2024 00:20:11.391 read: IOPS=2925, BW=11.4MiB/s (12.0MB/s)(11.5MiB/1004msec) 00:20:11.391 slat (nsec): min=1395, max=11216k, avg=112794.97, stdev=655969.61 00:20:11.391 clat (usec): min=479, max=56159, avg=15432.01, stdev=8305.61 00:20:11.391 lat (usec): min=485, max=57078, avg=15544.80, stdev=8360.49 00:20:11.391 clat percentiles (usec): 00:20:11.391 | 1.00th=[ 1172], 5.00th=[ 3752], 10.00th=[ 8848], 20.00th=[11207], 00:20:11.391 | 30.00th=[11469], 40.00th=[11600], 50.00th=[11994], 60.00th=[16909], 00:20:11.391 | 70.00th=[18220], 80.00th=[20055], 90.00th=[24249], 95.00th=[27657], 00:20:11.391 | 99.00th=[51119], 99.50th=[54264], 99.90th=[56361], 99.95th=[56361], 00:20:11.391 | 99.99th=[56361] 00:20:11.391 write: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec); 0 zone resets 00:20:11.391 slat (usec): min=2, max=34223, avg=205.85, stdev=1364.69 00:20:11.391 clat (msec): min=3, max=118, avg=26.03, stdev=24.54 00:20:11.391 lat (msec): min=3, max=118, avg=26.23, stdev=24.71 00:20:11.391 clat percentiles (msec): 00:20:11.391 | 1.00th=[ 5], 5.00th=[ 11], 10.00th=[ 11], 20.00th=[ 12], 00:20:11.391 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 17], 60.00th=[ 22], 00:20:11.391 | 70.00th=[ 23], 80.00th=[ 32], 90.00th=[ 66], 95.00th=[ 88], 00:20:11.391 | 99.00th=[ 113], 99.50th=[ 114], 99.90th=[ 120], 99.95th=[ 120], 00:20:11.391 | 99.99th=[ 120] 00:20:11.391 bw ( KiB/s): min= 8192, max=16384, per=18.49%, avg=12288.00, stdev=5792.62, samples=2 00:20:11.391 iops : min= 2048, max= 4096, avg=3072.00, stdev=1448.15, samples=2 00:20:11.391 lat (usec) : 500=0.02%, 750=0.03%, 1000=0.05% 00:20:11.391 lat (msec) : 2=1.58%, 4=1.25%, 10=5.76%, 20=59.83%, 50=23.53% 00:20:11.391 lat (msec) : 100=6.37%, 250=1.58% 00:20:11.391 cpu : usr=2.49%, sys=2.79%, ctx=344, majf=0, minf=1 00:20:11.391 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:20:11.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.391 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:11.391 issued rwts: total=2937,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.391 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:11.391 job3: (groupid=0, jobs=1): err= 0: pid=2409766: Sun Jul 14 10:28:56 2024 00:20:11.391 read: IOPS=3651, BW=14.3MiB/s (15.0MB/s)(14.4MiB/1007msec) 00:20:11.391 slat (nsec): min=1329, max=14253k, avg=125179.95, stdev=911884.91 00:20:11.391 clat (usec): min=3286, max=38846, avg=15873.70, stdev=5518.63 00:20:11.391 lat (usec): min=3887, max=40081, avg=15998.88, stdev=5601.70 00:20:11.391 clat percentiles (usec): 00:20:11.391 | 1.00th=[ 7177], 5.00th=[10290], 10.00th=[10683], 20.00th=[11863], 00:20:11.391 | 
30.00th=[12387], 40.00th=[13173], 50.00th=[13698], 60.00th=[15795], 00:20:11.391 | 70.00th=[18220], 80.00th=[19530], 90.00th=[22676], 95.00th=[28181], 00:20:11.391 | 99.00th=[33817], 99.50th=[35914], 99.90th=[38011], 99.95th=[38011], 00:20:11.391 | 99.99th=[39060] 00:20:11.391 write: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1007msec); 0 zone resets 00:20:11.391 slat (usec): min=2, max=14698, avg=122.61, stdev=742.43 00:20:11.392 clat (usec): min=4285, max=42297, avg=16920.61, stdev=8448.22 00:20:11.392 lat (usec): min=4296, max=44082, avg=17043.22, stdev=8523.62 00:20:11.392 clat percentiles (usec): 00:20:11.392 | 1.00th=[ 6390], 5.00th=[ 8586], 10.00th=[ 8848], 20.00th=[ 9896], 00:20:11.392 | 30.00th=[10683], 40.00th=[11469], 50.00th=[12649], 60.00th=[16450], 00:20:11.392 | 70.00th=[21103], 80.00th=[25560], 90.00th=[31065], 95.00th=[33817], 00:20:11.392 | 99.00th=[36963], 99.50th=[38011], 99.90th=[42206], 99.95th=[42206], 00:20:11.392 | 99.99th=[42206] 00:20:11.392 bw ( KiB/s): min=14640, max=17848, per=24.45%, avg=16244.00, stdev=2268.40, samples=2 00:20:11.392 iops : min= 3660, max= 4462, avg=4061.00, stdev=567.10, samples=2 00:20:11.392 lat (msec) : 4=0.03%, 10=14.45%, 20=61.11%, 50=24.42% 00:20:11.392 cpu : usr=3.48%, sys=5.77%, ctx=316, majf=0, minf=1 00:20:11.392 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:20:11.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.392 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:11.392 issued rwts: total=3677,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.392 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:11.392 00:20:11.392 Run status group 0 (all jobs): 00:20:11.392 READ: bw=61.1MiB/s (64.0MB/s), 10.9MiB/s-26.5MiB/s (11.5MB/s-27.8MB/s), io=64.0MiB (67.1MB), run=1004-1048msec 00:20:11.392 WRITE: bw=64.9MiB/s (68.0MB/s), 11.5MiB/s-27.9MiB/s (12.0MB/s-29.2MB/s), io=68.0MiB (71.3MB), run=1004-1048msec 00:20:11.392 00:20:11.392 Disk stats (read/write): 00:20:11.392 nvme0n1: ios=5681/6095, merge=0/0, ticks=50191/48384, in_queue=98575, util=82.16% 00:20:11.392 nvme0n2: ios=2584/3027, merge=0/0, ticks=28454/43929, in_queue=72383, util=91.15% 00:20:11.392 nvme0n3: ios=1803/2048, merge=0/0, ticks=10408/23007, in_queue=33415, util=89.70% 00:20:11.392 nvme0n4: ios=2617/3072, merge=0/0, ticks=41991/56893, in_queue=98884, util=94.25% 00:20:11.392 10:28:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:20:11.392 [global] 00:20:11.392 thread=1 00:20:11.392 invalidate=1 00:20:11.392 rw=randwrite 00:20:11.392 time_based=1 00:20:11.392 runtime=1 00:20:11.392 ioengine=libaio 00:20:11.392 direct=1 00:20:11.392 bs=4096 00:20:11.392 iodepth=128 00:20:11.392 norandommap=0 00:20:11.392 numjobs=1 00:20:11.392 00:20:11.392 verify_dump=1 00:20:11.392 verify_backlog=512 00:20:11.392 verify_state_save=0 00:20:11.392 do_verify=1 00:20:11.392 verify=crc32c-intel 00:20:11.392 [job0] 00:20:11.392 filename=/dev/nvme0n1 00:20:11.392 [job1] 00:20:11.392 filename=/dev/nvme0n2 00:20:11.392 [job2] 00:20:11.392 filename=/dev/nvme0n3 00:20:11.392 [job3] 00:20:11.392 filename=/dev/nvme0n4 00:20:11.392 Could not set queue depth (nvme0n1) 00:20:11.392 Could not set queue depth (nvme0n2) 00:20:11.392 Could not set queue depth (nvme0n3) 00:20:11.392 Could not set queue depth (nvme0n4) 00:20:11.650 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:11.650 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:11.650 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:11.650 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:11.650 fio-3.35 00:20:11.650 Starting 4 threads 00:20:13.030 00:20:13.030 job0: (groupid=0, jobs=1): err= 0: pid=2410141: Sun Jul 14 10:28:57 2024 00:20:13.030 read: IOPS=4405, BW=17.2MiB/s (18.0MB/s)(17.2MiB/1002msec) 00:20:13.030 slat (nsec): min=1285, max=47049k, avg=115138.24, stdev=946600.53 00:20:13.030 clat (usec): min=621, max=60789, avg=14345.91, stdev=10472.09 00:20:13.030 lat (usec): min=2344, max=60798, avg=14461.05, stdev=10510.62 00:20:13.030 clat percentiles (usec): 00:20:13.030 | 1.00th=[ 4752], 5.00th=[ 7111], 10.00th=[ 8094], 20.00th=[ 8586], 00:20:13.030 | 30.00th=[ 8848], 40.00th=[ 9241], 50.00th=[10159], 60.00th=[11076], 00:20:13.030 | 70.00th=[16450], 80.00th=[17433], 90.00th=[24249], 95.00th=[36439], 00:20:13.030 | 99.00th=[60556], 99.50th=[60556], 99.90th=[60556], 99.95th=[60556], 00:20:13.030 | 99.99th=[60556] 00:20:13.030 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:20:13.030 slat (usec): min=2, max=8587, avg=103.07, stdev=480.97 00:20:13.030 clat (usec): min=5458, max=48530, avg=13575.50, stdev=8660.60 00:20:13.030 lat (usec): min=5464, max=48540, avg=13678.57, stdev=8709.15 00:20:13.030 clat percentiles (usec): 00:20:13.030 | 1.00th=[ 6390], 5.00th=[ 7504], 10.00th=[ 7832], 20.00th=[ 8160], 00:20:13.030 | 30.00th=[ 8356], 40.00th=[ 8717], 50.00th=[10028], 60.00th=[10290], 00:20:13.030 | 70.00th=[16057], 80.00th=[17433], 90.00th=[24249], 95.00th=[32900], 00:20:13.030 | 99.00th=[48497], 99.50th=[48497], 99.90th=[48497], 99.95th=[48497], 00:20:13.030 | 99.99th=[48497] 00:20:13.030 bw ( KiB/s): min=18240, max=18624, per=28.98%, avg=18432.00, stdev=271.53, samples=2 00:20:13.030 iops : min= 4560, max= 4656, avg=4608.00, stdev=67.88, samples=2 00:20:13.030 lat (usec) : 750=0.01% 00:20:13.030 lat (msec) : 4=0.35%, 10=48.77%, 20=37.75%, 50=11.70%, 100=1.41% 00:20:13.030 cpu : usr=2.60%, sys=3.00%, ctx=601, majf=0, minf=1 00:20:13.030 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:20:13.030 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.030 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:13.030 issued rwts: total=4414,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:13.030 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:13.030 job1: (groupid=0, jobs=1): err= 0: pid=2410142: Sun Jul 14 10:28:57 2024 00:20:13.030 read: IOPS=2354, BW=9419KiB/s (9646kB/s)(9476KiB/1006msec) 00:20:13.030 slat (nsec): min=1137, max=20663k, avg=220539.55, stdev=1305379.04 00:20:13.030 clat (usec): min=1882, max=69961, avg=28044.67, stdev=12810.63 00:20:13.030 lat (usec): min=8883, max=69969, avg=28265.21, stdev=12836.87 00:20:13.030 clat percentiles (usec): 00:20:13.030 | 1.00th=[ 8848], 5.00th=[13304], 10.00th=[14877], 20.00th=[17171], 00:20:13.030 | 30.00th=[19006], 40.00th=[22676], 50.00th=[25560], 60.00th=[30016], 00:20:13.030 | 70.00th=[31851], 80.00th=[36439], 90.00th=[42730], 95.00th=[61604], 00:20:13.030 | 99.00th=[69731], 99.50th=[69731], 99.90th=[69731], 99.95th=[69731], 00:20:13.030 | 99.99th=[69731] 00:20:13.030 write: 
IOPS=2544, BW=9.94MiB/s (10.4MB/s)(10.0MiB/1006msec); 0 zone resets 00:20:13.030 slat (usec): min=2, max=9492, avg=179.10, stdev=815.23 00:20:13.030 clat (usec): min=9529, max=61437, avg=23844.17, stdev=11748.85 00:20:13.030 lat (usec): min=10813, max=62123, avg=24023.27, stdev=11808.52 00:20:13.030 clat percentiles (usec): 00:20:13.030 | 1.00th=[11207], 5.00th=[12256], 10.00th=[12387], 20.00th=[14484], 00:20:13.030 | 30.00th=[15270], 40.00th=[17957], 50.00th=[19530], 60.00th=[25035], 00:20:13.030 | 70.00th=[26084], 80.00th=[31327], 90.00th=[41157], 95.00th=[51643], 00:20:13.030 | 99.00th=[57410], 99.50th=[58983], 99.90th=[61604], 99.95th=[61604], 00:20:13.030 | 99.99th=[61604] 00:20:13.030 bw ( KiB/s): min= 9680, max=10800, per=16.10%, avg=10240.00, stdev=791.96, samples=2 00:20:13.030 iops : min= 2420, max= 2700, avg=2560.00, stdev=197.99, samples=2 00:20:13.030 lat (msec) : 2=0.02%, 10=1.66%, 20=40.33%, 50=52.12%, 100=5.86% 00:20:13.030 cpu : usr=1.59%, sys=4.08%, ctx=269, majf=0, minf=1 00:20:13.030 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:20:13.030 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.030 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:13.030 issued rwts: total=2369,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:13.030 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:13.030 job2: (groupid=0, jobs=1): err= 0: pid=2410143: Sun Jul 14 10:28:57 2024 00:20:13.030 read: IOPS=4562, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1010msec) 00:20:13.030 slat (nsec): min=1271, max=15160k, avg=103884.84, stdev=769830.17 00:20:13.030 clat (usec): min=3520, max=40854, avg=12899.59, stdev=6590.38 00:20:13.030 lat (usec): min=3525, max=40860, avg=13003.47, stdev=6641.17 00:20:13.030 clat percentiles (usec): 00:20:13.030 | 1.00th=[ 4555], 5.00th=[ 7439], 10.00th=[ 8455], 20.00th=[ 9634], 00:20:13.030 | 30.00th=[10028], 40.00th=[10552], 50.00th=[10814], 60.00th=[11076], 00:20:13.030 | 70.00th=[11863], 80.00th=[13829], 90.00th=[21627], 95.00th=[31589], 00:20:13.030 | 99.00th=[36439], 99.50th=[40633], 99.90th=[40633], 99.95th=[40633], 00:20:13.030 | 99.99th=[40633] 00:20:13.030 write: IOPS=4750, BW=18.6MiB/s (19.5MB/s)(18.7MiB/1010msec); 0 zone resets 00:20:13.030 slat (usec): min=2, max=8978, avg=95.32, stdev=517.04 00:20:13.030 clat (usec): min=429, max=35943, avg=14229.44, stdev=6462.83 00:20:13.030 lat (usec): min=443, max=35951, avg=14324.76, stdev=6493.74 00:20:13.030 clat percentiles (usec): 00:20:13.030 | 1.00th=[ 1827], 5.00th=[ 4686], 10.00th=[ 6783], 20.00th=[ 8979], 00:20:13.030 | 30.00th=[10683], 40.00th=[11076], 50.00th=[14877], 60.00th=[16909], 00:20:13.030 | 70.00th=[17171], 80.00th=[17433], 90.00th=[20579], 95.00th=[29230], 00:20:13.030 | 99.00th=[34866], 99.50th=[35390], 99.90th=[35914], 99.95th=[35914], 00:20:13.030 | 99.99th=[35914] 00:20:13.030 bw ( KiB/s): min=17072, max=20296, per=29.37%, avg=18684.00, stdev=2279.71, samples=2 00:20:13.030 iops : min= 4268, max= 5074, avg=4671.00, stdev=569.93, samples=2 00:20:13.030 lat (usec) : 500=0.03%, 1000=0.33% 00:20:13.030 lat (msec) : 2=0.33%, 4=1.47%, 10=23.94%, 20=62.52%, 50=11.38% 00:20:13.030 cpu : usr=3.77%, sys=4.66%, ctx=509, majf=0, minf=1 00:20:13.030 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:20:13.030 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.030 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:13.030 issued rwts: 
total=4608,4798,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:13.030 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:13.031 job3: (groupid=0, jobs=1): err= 0: pid=2410144: Sun Jul 14 10:28:57 2024 00:20:13.031 read: IOPS=3779, BW=14.8MiB/s (15.5MB/s)(14.8MiB/1005msec) 00:20:13.031 slat (nsec): min=1398, max=10044k, avg=109434.85, stdev=677471.70 00:20:13.031 clat (usec): min=1789, max=28687, avg=13689.67, stdev=3032.73 00:20:13.031 lat (usec): min=4718, max=32437, avg=13799.10, stdev=3083.57 00:20:13.031 clat percentiles (usec): 00:20:13.031 | 1.00th=[ 4948], 5.00th=[ 9503], 10.00th=[10290], 20.00th=[11207], 00:20:13.031 | 30.00th=[12518], 40.00th=[12911], 50.00th=[13304], 60.00th=[13960], 00:20:13.031 | 70.00th=[14615], 80.00th=[15664], 90.00th=[17957], 95.00th=[19792], 00:20:13.031 | 99.00th=[22414], 99.50th=[25035], 99.90th=[27919], 99.95th=[27919], 00:20:13.031 | 99.99th=[28705] 00:20:13.031 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:20:13.031 slat (usec): min=2, max=9132, avg=136.80, stdev=657.10 00:20:13.031 clat (usec): min=5535, max=71975, avg=18313.73, stdev=10447.44 00:20:13.031 lat (usec): min=5540, max=71988, avg=18450.53, stdev=10513.13 00:20:13.031 clat percentiles (usec): 00:20:13.031 | 1.00th=[ 8356], 5.00th=[10159], 10.00th=[10290], 20.00th=[10814], 00:20:13.031 | 30.00th=[11994], 40.00th=[12911], 50.00th=[15008], 60.00th=[17433], 00:20:13.031 | 70.00th=[19006], 80.00th=[23725], 90.00th=[29754], 95.00th=[38536], 00:20:13.031 | 99.00th=[68682], 99.50th=[69731], 99.90th=[71828], 99.95th=[71828], 00:20:13.031 | 99.99th=[71828] 00:20:13.031 bw ( KiB/s): min=16224, max=16384, per=25.63%, avg=16304.00, stdev=113.14, samples=2 00:20:13.031 iops : min= 4056, max= 4096, avg=4076.00, stdev=28.28, samples=2 00:20:13.031 lat (msec) : 2=0.01%, 10=5.61%, 20=78.27%, 50=14.90%, 100=1.20% 00:20:13.031 cpu : usr=3.78%, sys=4.78%, ctx=391, majf=0, minf=1 00:20:13.031 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:20:13.031 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.031 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:13.031 issued rwts: total=3798,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:13.031 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:13.031 00:20:13.031 Run status group 0 (all jobs): 00:20:13.031 READ: bw=58.7MiB/s (61.6MB/s), 9419KiB/s-17.8MiB/s (9646kB/s-18.7MB/s), io=59.3MiB (62.2MB), run=1002-1010msec 00:20:13.031 WRITE: bw=62.1MiB/s (65.1MB/s), 9.94MiB/s-18.6MiB/s (10.4MB/s-19.5MB/s), io=62.7MiB (65.8MB), run=1002-1010msec 00:20:13.031 00:20:13.031 Disk stats (read/write): 00:20:13.031 nvme0n1: ios=3473/3584, merge=0/0, ticks=13728/13133, in_queue=26861, util=90.98% 00:20:13.031 nvme0n2: ios=2098/2272, merge=0/0, ticks=16143/28324, in_queue=44467, util=93.10% 00:20:13.031 nvme0n3: ios=3641/4096, merge=0/0, ticks=36854/48993, in_queue=85847, util=94.49% 00:20:13.031 nvme0n4: ios=3623/3631, merge=0/0, ticks=25379/27231, in_queue=52610, util=96.54% 00:20:13.031 10:28:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:20:13.031 10:28:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2410321 00:20:13.031 10:28:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:20:13.031 10:28:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:20:13.031 [global] 00:20:13.031 thread=1 00:20:13.031 
invalidate=1 00:20:13.031 rw=read 00:20:13.031 time_based=1 00:20:13.031 runtime=10 00:20:13.031 ioengine=libaio 00:20:13.031 direct=1 00:20:13.031 bs=4096 00:20:13.031 iodepth=1 00:20:13.031 norandommap=1 00:20:13.031 numjobs=1 00:20:13.031 00:20:13.031 [job0] 00:20:13.031 filename=/dev/nvme0n1 00:20:13.031 [job1] 00:20:13.031 filename=/dev/nvme0n2 00:20:13.031 [job2] 00:20:13.031 filename=/dev/nvme0n3 00:20:13.031 [job3] 00:20:13.031 filename=/dev/nvme0n4 00:20:13.031 Could not set queue depth (nvme0n1) 00:20:13.031 Could not set queue depth (nvme0n2) 00:20:13.031 Could not set queue depth (nvme0n3) 00:20:13.031 Could not set queue depth (nvme0n4) 00:20:13.031 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:13.031 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:13.031 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:13.031 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:13.031 fio-3.35 00:20:13.031 Starting 4 threads 00:20:16.349 10:29:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:20:16.349 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=6328320, buflen=4096 00:20:16.349 fio: pid=2410509, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:20:16.349 10:29:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:20:16.349 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=8871936, buflen=4096 00:20:16.349 fio: pid=2410508, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:20:16.349 10:29:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:16.349 10:29:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:20:16.349 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=37904384, buflen=4096 00:20:16.349 fio: pid=2410506, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:20:16.349 10:29:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:16.349 10:29:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:20:16.609 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=32161792, buflen=4096 00:20:16.609 fio: pid=2410507, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:20:16.609 10:29:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:16.609 10:29:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:20:16.609 00:20:16.609 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2410506: Sun Jul 14 10:29:01 2024 00:20:16.609 read: IOPS=2996, BW=11.7MiB/s (12.3MB/s)(36.1MiB/3089msec) 00:20:16.609 slat (usec): min=6, max=13903, avg= 8.56, stdev=144.46 00:20:16.609 clat (usec): 
min=169, max=41969, avg=321.81, stdev=2077.12 00:20:16.609 lat (usec): min=176, max=54993, avg=330.37, stdev=2112.09 00:20:16.609 clat percentiles (usec): 00:20:16.609 | 1.00th=[ 182], 5.00th=[ 192], 10.00th=[ 196], 20.00th=[ 200], 00:20:16.609 | 30.00th=[ 206], 40.00th=[ 210], 50.00th=[ 217], 60.00th=[ 219], 00:20:16.609 | 70.00th=[ 225], 80.00th=[ 229], 90.00th=[ 237], 95.00th=[ 243], 00:20:16.609 | 99.00th=[ 258], 99.50th=[ 269], 99.90th=[41157], 99.95th=[41157], 00:20:16.609 | 99.99th=[42206] 00:20:16.609 bw ( KiB/s): min= 5376, max=17720, per=58.09%, avg=14784.00, stdev=5308.96, samples=5 00:20:16.609 iops : min= 1344, max= 4430, avg=3696.00, stdev=1327.24, samples=5 00:20:16.609 lat (usec) : 250=97.86%, 500=1.83%, 750=0.01%, 1000=0.02% 00:20:16.609 lat (msec) : 4=0.01%, 50=0.26% 00:20:16.609 cpu : usr=0.78%, sys=2.59%, ctx=9258, majf=0, minf=1 00:20:16.609 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:16.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:16.609 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:16.609 issued rwts: total=9255,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:16.609 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:16.609 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2410507: Sun Jul 14 10:29:01 2024 00:20:16.609 read: IOPS=2400, BW=9599KiB/s (9829kB/s)(30.7MiB/3272msec) 00:20:16.609 slat (usec): min=7, max=15666, avg=12.28, stdev=221.50 00:20:16.609 clat (usec): min=185, max=41476, avg=399.61, stdev=2419.37 00:20:16.609 lat (usec): min=192, max=44886, avg=411.89, stdev=2438.03 00:20:16.609 clat percentiles (usec): 00:20:16.609 | 1.00th=[ 200], 5.00th=[ 210], 10.00th=[ 217], 20.00th=[ 225], 00:20:16.609 | 30.00th=[ 229], 40.00th=[ 233], 50.00th=[ 241], 60.00th=[ 249], 00:20:16.609 | 70.00th=[ 269], 80.00th=[ 281], 90.00th=[ 297], 95.00th=[ 367], 00:20:16.609 | 99.00th=[ 482], 99.50th=[ 519], 99.90th=[41157], 99.95th=[41157], 00:20:16.609 | 99.99th=[41681] 00:20:16.609 bw ( KiB/s): min= 96, max=16744, per=40.04%, avg=10190.83, stdev=5971.73, samples=6 00:20:16.609 iops : min= 24, max= 4186, avg=2547.67, stdev=1492.95, samples=6 00:20:16.609 lat (usec) : 250=60.30%, 500=38.98%, 750=0.34% 00:20:16.609 lat (msec) : 2=0.01%, 50=0.36% 00:20:16.609 cpu : usr=1.31%, sys=3.91%, ctx=7856, majf=0, minf=1 00:20:16.609 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:16.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:16.609 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:16.609 issued rwts: total=7853,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:16.609 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:16.609 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2410508: Sun Jul 14 10:29:01 2024 00:20:16.609 read: IOPS=747, BW=2990KiB/s (3061kB/s)(8664KiB/2898msec) 00:20:16.609 slat (nsec): min=6562, max=28590, avg=8882.00, stdev=2934.40 00:20:16.609 clat (usec): min=177, max=41949, avg=1317.35, stdev=6466.43 00:20:16.609 lat (usec): min=184, max=41971, avg=1326.23, stdev=6468.33 00:20:16.609 clat percentiles (usec): 00:20:16.609 | 1.00th=[ 194], 5.00th=[ 212], 10.00th=[ 223], 20.00th=[ 233], 00:20:16.609 | 30.00th=[ 239], 40.00th=[ 245], 50.00th=[ 249], 60.00th=[ 255], 00:20:16.609 | 70.00th=[ 265], 80.00th=[ 281], 90.00th=[ 371], 95.00th=[ 388], 00:20:16.609 | 
99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41681], 00:20:16.609 | 99.99th=[42206] 00:20:16.609 bw ( KiB/s): min= 96, max= 7096, per=8.42%, avg=2144.00, stdev=3099.31, samples=5 00:20:16.609 iops : min= 24, max= 1774, avg=536.00, stdev=774.83, samples=5 00:20:16.609 lat (usec) : 250=50.62%, 500=46.29%, 750=0.14% 00:20:16.609 lat (msec) : 2=0.32%, 50=2.58% 00:20:16.609 cpu : usr=0.35%, sys=1.07%, ctx=2169, majf=0, minf=1 00:20:16.609 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:16.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:16.609 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:16.609 issued rwts: total=2167,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:16.609 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:16.609 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2410509: Sun Jul 14 10:29:01 2024 00:20:16.609 read: IOPS=569, BW=2278KiB/s (2333kB/s)(6180KiB/2713msec) 00:20:16.609 slat (nsec): min=7549, max=37732, avg=9137.10, stdev=3079.56 00:20:16.609 clat (usec): min=203, max=41987, avg=1726.82, stdev=7542.79 00:20:16.609 lat (usec): min=212, max=42011, avg=1735.95, stdev=7545.20 00:20:16.609 clat percentiles (usec): 00:20:16.609 | 1.00th=[ 217], 5.00th=[ 229], 10.00th=[ 237], 20.00th=[ 247], 00:20:16.609 | 30.00th=[ 253], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 273], 00:20:16.609 | 70.00th=[ 281], 80.00th=[ 293], 90.00th=[ 351], 95.00th=[ 494], 00:20:16.609 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[42206], 00:20:16.609 | 99.99th=[42206] 00:20:16.609 bw ( KiB/s): min= 96, max= 7552, per=6.26%, avg=1592.00, stdev=3331.75, samples=5 00:20:16.609 iops : min= 24, max= 1888, avg=398.00, stdev=832.94, samples=5 00:20:16.609 lat (usec) : 250=25.29%, 500=69.66%, 750=1.29% 00:20:16.609 lat (msec) : 2=0.13%, 50=3.56% 00:20:16.609 cpu : usr=0.26%, sys=1.03%, ctx=1548, majf=0, minf=2 00:20:16.609 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:16.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:16.609 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:16.609 issued rwts: total=1546,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:16.609 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:16.609 00:20:16.609 Run status group 0 (all jobs): 00:20:16.609 READ: bw=24.9MiB/s (26.1MB/s), 2278KiB/s-11.7MiB/s (2333kB/s-12.3MB/s), io=81.3MiB (85.3MB), run=2713-3272msec 00:20:16.609 00:20:16.609 Disk stats (read/write): 00:20:16.609 nvme0n1: ios=9248/0, merge=0/0, ticks=2663/0, in_queue=2663, util=94.99% 00:20:16.609 nvme0n2: ios=7848/0, merge=0/0, ticks=2879/0, in_queue=2879, util=95.24% 00:20:16.609 nvme0n3: ios=2128/0, merge=0/0, ticks=3576/0, in_queue=3576, util=99.26% 00:20:16.609 nvme0n4: ios=1303/0, merge=0/0, ticks=3462/0, in_queue=3462, util=98.85% 00:20:16.609 10:29:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:16.609 10:29:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:20:16.869 10:29:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:16.869 10:29:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:20:17.128 10:29:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:17.128 10:29:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:20:17.387 10:29:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:17.387 10:29:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:20:17.387 10:29:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:20:17.387 10:29:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 2410321 00:20:17.387 10:29:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:20:17.387 10:29:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:17.647 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:17.647 10:29:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:17.647 10:29:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:20:17.647 10:29:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:20:17.647 10:29:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:17.647 10:29:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:20:17.647 10:29:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:17.647 10:29:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:20:17.647 10:29:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:20:17.647 10:29:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:20:17.647 nvmf hotplug test: fio failed as expected 00:20:17.647 10:29:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:17.647 10:29:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:20:17.906 10:29:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:20:17.907 10:29:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:20:17.907 10:29:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:20:17.907 10:29:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:20:17.907 10:29:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:17.907 10:29:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:20:17.907 10:29:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:17.907 10:29:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:20:17.907 10:29:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:17.907 10:29:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:17.907 rmmod nvme_tcp 00:20:17.907 rmmod nvme_fabrics 00:20:17.907 rmmod nvme_keyring 00:20:17.907 10:29:02 nvmf_tcp.nvmf_fio_target -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:17.907 10:29:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:20:17.907 10:29:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:20:17.907 10:29:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 2407454 ']' 00:20:17.907 10:29:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 2407454 00:20:17.907 10:29:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 2407454 ']' 00:20:17.907 10:29:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 2407454 00:20:17.907 10:29:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:20:17.907 10:29:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:17.907 10:29:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2407454 00:20:17.907 10:29:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:17.907 10:29:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:17.907 10:29:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2407454' 00:20:17.907 killing process with pid 2407454 00:20:17.907 10:29:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 2407454 00:20:17.907 10:29:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 2407454 00:20:18.166 10:29:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:18.166 10:29:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:18.166 10:29:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:18.166 10:29:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:18.166 10:29:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:18.166 10:29:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:18.166 10:29:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:18.166 10:29:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:20.082 10:29:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:20.082 00:20:20.082 real 0m26.752s 00:20:20.082 user 1m46.159s 00:20:20.082 sys 0m8.287s 00:20:20.082 10:29:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:20.082 10:29:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.082 ************************************ 00:20:20.082 END TEST nvmf_fio_target 00:20:20.082 ************************************ 00:20:20.082 10:29:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:20.082 10:29:05 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:20:20.082 10:29:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:20.082 10:29:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:20.082 10:29:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:20.341 ************************************ 00:20:20.341 START TEST nvmf_bdevio 00:20:20.341 ************************************ 00:20:20.341 10:29:05 nvmf_tcp.nvmf_bdevio -- 
common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:20:20.341 * Looking for test storage... 00:20:20.341 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:20.341 10:29:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:20.341 10:29:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:20:20.341 10:29:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:20.341 10:29:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:20.341 10:29:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:20.341 10:29:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:20.341 10:29:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:20.341 10:29:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:20.341 10:29:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:20.341 10:29:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:20.341 10:29:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:20.341 10:29:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:20.341 10:29:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:20.341 10:29:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:20.341 10:29:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:20.341 10:29:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:20.341 10:29:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:20.341 10:29:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:20.341 10:29:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:20.341 10:29:05 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:20.341 10:29:05 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:20.341 10:29:05 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:20.341 10:29:05 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:20.342 10:29:05 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:20.342 10:29:05 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:20.342 10:29:05 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:20:20.342 10:29:05 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:20.342 10:29:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:20:20.342 10:29:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:20.342 10:29:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:20.342 10:29:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:20.342 10:29:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:20.342 10:29:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:20.342 10:29:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:20.342 10:29:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:20.342 10:29:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:20.342 10:29:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:20.342 10:29:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:20.342 10:29:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:20:20.342 10:29:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:20.342 10:29:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:20.342 10:29:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:20.342 10:29:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:20.342 10:29:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:20.342 10:29:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:20.342 10:29:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:20:20.342 10:29:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:20.342 10:29:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:20.342 10:29:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:20.342 10:29:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:20:20.342 10:29:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:26.917 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:26.917 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:20:26.917 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:26.917 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:26.917 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:26.917 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:26.917 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:26.917 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:20:26.917 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:26.917 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:20:26.917 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:20:26.917 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:20:26.917 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:20:26.917 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:20:26.917 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:20:26.917 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:26.917 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:26.917 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:26.917 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:26.917 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:26.917 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:26.917 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:26.917 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:26.917 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:26.917 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:26.917 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:26.917 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:26.917 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:26.917 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:26.917 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:26.917 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:26.917 10:29:10 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:26.917 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:26.917 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:26.917 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:26.917 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:26.917 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:26.917 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:26.917 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:26.917 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:26.917 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:26.917 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:26.917 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:26.917 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:26.917 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:26.917 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:26.917 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:26.917 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:26.917 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:26.917 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:26.917 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:26.917 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:26.917 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:26.917 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:26.917 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:26.917 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:26.917 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:26.917 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:26.917 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:26.917 Found net devices under 0000:86:00.0: cvl_0_0 00:20:26.917 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:26.917 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:26.917 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:26.917 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:26.917 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:26.917 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:26.917 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:26.917 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:26.918 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:26.918 
Found net devices under 0000:86:00.1: cvl_0_1 00:20:26.918 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:26.918 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:26.918 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:20:26.918 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:26.918 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:26.918 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:26.918 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:26.918 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:26.918 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:26.918 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:26.918 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:26.918 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:26.918 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:26.918 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:26.918 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:26.918 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:26.918 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:26.918 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:26.918 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:26.918 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:26.918 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:26.918 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:26.918 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:26.918 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:26.918 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:26.918 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:26.918 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:26.918 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms 00:20:26.918 00:20:26.918 --- 10.0.0.2 ping statistics --- 00:20:26.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.918 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:20:26.918 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:26.918 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:26.918 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.081 ms 00:20:26.918 00:20:26.918 --- 10.0.0.1 ping statistics --- 00:20:26.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.918 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:20:26.918 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:26.918 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:20:26.918 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:26.918 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:26.918 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:26.918 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:26.918 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:26.918 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:26.918 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:26.918 10:29:10 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:26.918 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:26.918 10:29:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:26.918 10:29:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:26.918 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=2414744 00:20:26.918 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:20:26.918 10:29:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 2414744 00:20:26.918 10:29:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 2414744 ']' 00:20:26.918 10:29:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:26.918 10:29:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:26.918 10:29:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:26.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:26.918 10:29:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:26.918 10:29:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:26.918 [2024-07-14 10:29:10.990770] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:20:26.918 [2024-07-14 10:29:10.990813] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:26.918 EAL: No free 2048 kB hugepages reported on node 1 00:20:26.918 [2024-07-14 10:29:11.062185] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:26.918 [2024-07-14 10:29:11.101497] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:26.918 [2024-07-14 10:29:11.101540] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:26.918 [2024-07-14 10:29:11.101547] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:26.918 [2024-07-14 10:29:11.101553] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:26.918 [2024-07-14 10:29:11.101558] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:26.918 [2024-07-14 10:29:11.101676] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:26.918 [2024-07-14 10:29:11.101785] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:20:26.918 [2024-07-14 10:29:11.101890] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:26.918 [2024-07-14 10:29:11.101891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:20:26.918 10:29:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:26.918 10:29:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:20:26.918 10:29:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:26.918 10:29:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:26.918 10:29:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:26.918 10:29:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:26.918 10:29:11 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:26.918 10:29:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.918 10:29:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:26.918 [2024-07-14 10:29:11.837124] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:26.918 10:29:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.918 10:29:11 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:26.918 10:29:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.918 10:29:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:26.918 Malloc0 00:20:26.918 10:29:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.918 10:29:11 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:26.918 10:29:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.918 10:29:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:26.918 10:29:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.918 10:29:11 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:26.918 10:29:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.918 10:29:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:26.918 10:29:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.918 10:29:11 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:26.918 10:29:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.918 10:29:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
00:20:26.918 [2024-07-14 10:29:11.888540] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:26.918 10:29:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.918 10:29:11 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:20:26.918 10:29:11 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:26.918 10:29:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:20:26.918 10:29:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:20:26.918 10:29:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:26.918 10:29:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:26.918 { 00:20:26.918 "params": { 00:20:26.918 "name": "Nvme$subsystem", 00:20:26.918 "trtype": "$TEST_TRANSPORT", 00:20:26.918 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:26.918 "adrfam": "ipv4", 00:20:26.918 "trsvcid": "$NVMF_PORT", 00:20:26.918 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:26.918 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:26.918 "hdgst": ${hdgst:-false}, 00:20:26.918 "ddgst": ${ddgst:-false} 00:20:26.918 }, 00:20:26.918 "method": "bdev_nvme_attach_controller" 00:20:26.918 } 00:20:26.918 EOF 00:20:26.918 )") 00:20:26.918 10:29:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:20:27.176 10:29:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:20:27.176 10:29:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:20:27.177 10:29:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:27.177 "params": { 00:20:27.177 "name": "Nvme1", 00:20:27.177 "trtype": "tcp", 00:20:27.177 "traddr": "10.0.0.2", 00:20:27.177 "adrfam": "ipv4", 00:20:27.177 "trsvcid": "4420", 00:20:27.177 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:27.177 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:27.177 "hdgst": false, 00:20:27.177 "ddgst": false 00:20:27.177 }, 00:20:27.177 "method": "bdev_nvme_attach_controller" 00:20:27.177 }' 00:20:27.177 [2024-07-14 10:29:11.938374] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:20:27.177 [2024-07-14 10:29:11.938432] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2414904 ] 00:20:27.177 EAL: No free 2048 kB hugepages reported on node 1 00:20:27.177 [2024-07-14 10:29:12.006999] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:27.177 [2024-07-14 10:29:12.048782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:27.177 [2024-07-14 10:29:12.048893] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:27.177 [2024-07-14 10:29:12.048894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:27.435 I/O targets: 00:20:27.435 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:27.435 00:20:27.435 00:20:27.435 CUnit - A unit testing framework for C - Version 2.1-3 00:20:27.435 http://cunit.sourceforge.net/ 00:20:27.435 00:20:27.435 00:20:27.435 Suite: bdevio tests on: Nvme1n1 00:20:27.435 Test: blockdev write read block ...passed 00:20:27.435 Test: blockdev write zeroes read block ...passed 00:20:27.435 Test: blockdev write zeroes read no split ...passed 00:20:27.435 Test: blockdev write zeroes read split ...passed 00:20:27.435 Test: blockdev write zeroes read split partial ...passed 00:20:27.435 Test: blockdev reset ...[2024-07-14 10:29:12.395827] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:27.435 [2024-07-14 10:29:12.395892] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1009070 (9): Bad file descriptor 00:20:27.435 [2024-07-14 10:29:12.416243] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:27.435 passed 00:20:27.435 Test: blockdev write read 8 blocks ...passed 00:20:27.693 Test: blockdev write read size > 128k ...passed 00:20:27.693 Test: blockdev write read invalid size ...passed 00:20:27.693 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:27.693 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:27.693 Test: blockdev write read max offset ...passed 00:20:27.693 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:27.693 Test: blockdev writev readv 8 blocks ...passed 00:20:27.693 Test: blockdev writev readv 30 x 1block ...passed 00:20:27.693 Test: blockdev writev readv block ...passed 00:20:27.693 Test: blockdev writev readv size > 128k ...passed 00:20:27.693 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:27.693 Test: blockdev comparev and writev ...[2024-07-14 10:29:12.630024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:27.693 [2024-07-14 10:29:12.630053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:27.693 [2024-07-14 10:29:12.630066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:27.693 [2024-07-14 10:29:12.630075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:27.694 [2024-07-14 10:29:12.630330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:27.694 [2024-07-14 10:29:12.630342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:27.694 [2024-07-14 10:29:12.630353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:27.694 [2024-07-14 10:29:12.630360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:27.694 [2024-07-14 10:29:12.630602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:27.694 [2024-07-14 10:29:12.630612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:27.694 [2024-07-14 10:29:12.630630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:27.694 [2024-07-14 10:29:12.630638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:27.694 [2024-07-14 10:29:12.630881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:27.694 [2024-07-14 10:29:12.630892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:27.694 [2024-07-14 10:29:12.630904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:27.694 [2024-07-14 10:29:12.630912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:27.694 passed 00:20:27.953 Test: blockdev nvme passthru rw ...passed 00:20:27.953 Test: blockdev nvme passthru vendor specific ...[2024-07-14 10:29:12.712575] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:27.953 [2024-07-14 10:29:12.712593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:27.953 [2024-07-14 10:29:12.712707] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:27.953 [2024-07-14 10:29:12.712718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:27.953 [2024-07-14 10:29:12.712826] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:27.953 [2024-07-14 10:29:12.712836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:27.953 [2024-07-14 10:29:12.712945] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:27.953 [2024-07-14 10:29:12.712956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:27.953 passed 00:20:27.953 Test: blockdev nvme admin passthru ...passed 00:20:27.953 Test: blockdev copy ...passed 00:20:27.953 00:20:27.953 Run Summary: Type Total Ran Passed Failed Inactive 00:20:27.953 suites 1 1 n/a 0 0 00:20:27.953 tests 23 23 23 0 0 00:20:27.953 asserts 152 152 152 0 n/a 00:20:27.953 00:20:27.953 Elapsed time = 1.151 seconds 00:20:27.953 10:29:12 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:27.953 10:29:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.953 10:29:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:27.953 10:29:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.953 10:29:12 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:27.953 10:29:12 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:20:27.953 10:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:27.953 10:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:20:28.212 10:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:28.212 10:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:20:28.212 10:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:28.212 10:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:28.212 rmmod nvme_tcp 00:20:28.212 rmmod nvme_fabrics 00:20:28.212 rmmod nvme_keyring 00:20:28.212 10:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:28.212 10:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:20:28.212 10:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:20:28.212 10:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 2414744 ']' 00:20:28.212 10:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 2414744 00:20:28.212 10:29:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
2414744 ']' 00:20:28.212 10:29:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 2414744 00:20:28.212 10:29:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:20:28.212 10:29:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:28.212 10:29:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2414744 00:20:28.212 10:29:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:20:28.212 10:29:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:20:28.212 10:29:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2414744' 00:20:28.212 killing process with pid 2414744 00:20:28.212 10:29:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 2414744 00:20:28.212 10:29:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 2414744 00:20:28.470 10:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:28.470 10:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:28.470 10:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:28.470 10:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:28.470 10:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:28.470 10:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:28.470 10:29:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:28.470 10:29:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:30.372 10:29:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:30.372 00:20:30.372 real 0m10.233s 00:20:30.372 user 0m12.003s 00:20:30.372 sys 0m4.902s 00:20:30.373 10:29:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:30.373 10:29:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:30.373 ************************************ 00:20:30.373 END TEST nvmf_bdevio 00:20:30.373 ************************************ 00:20:30.373 10:29:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:30.373 10:29:15 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:20:30.373 10:29:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:30.373 10:29:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:30.373 10:29:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:30.632 ************************************ 00:20:30.632 START TEST nvmf_auth_target 00:20:30.632 ************************************ 00:20:30.632 10:29:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:20:30.632 * Looking for test storage... 
00:20:30.632 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:30.632 10:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:30.632 10:29:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:20:30.632 10:29:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:30.632 10:29:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:30.632 10:29:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:30.632 10:29:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:30.632 10:29:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:30.632 10:29:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:30.632 10:29:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:30.632 10:29:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:30.632 10:29:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:30.632 10:29:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:30.632 10:29:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:30.632 10:29:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:30.632 10:29:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:30.632 10:29:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:30.632 10:29:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:30.632 10:29:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:30.632 10:29:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:30.632 10:29:15 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:30.632 10:29:15 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:30.632 10:29:15 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:30.632 10:29:15 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.633 10:29:15 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.633 10:29:15 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.633 10:29:15 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:20:30.633 10:29:15 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.633 10:29:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:20:30.633 10:29:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:30.633 10:29:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:30.633 10:29:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:30.633 10:29:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:30.633 10:29:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:30.633 10:29:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:30.633 10:29:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:30.633 10:29:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:30.633 10:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:20:30.633 10:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:20:30.633 10:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:20:30.633 10:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:30.633 10:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:20:30.633 10:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:20:30.633 10:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:20:30.633 10:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:20:30.633 10:29:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:30.633 10:29:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:30.633 10:29:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:30.633 10:29:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:30.633 10:29:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:30.633 10:29:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:30.633 10:29:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:30.633 10:29:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:30.633 10:29:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:30.633 10:29:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:30.633 10:29:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:20:30.633 10:29:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.222 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:37.222 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:20:37.222 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:37.222 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:37.222 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:37.222 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:37.222 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:37.222 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:20:37.222 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:37.222 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:20:37.222 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:20:37.222 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:20:37.222 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:20:37.222 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:20:37.222 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:20:37.222 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:37.222 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:37.222 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:37.222 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:37.223 10:29:21 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:37.223 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:37.223 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: 
cvl_0_0' 00:20:37.223 Found net devices under 0000:86:00.0: cvl_0_0 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:37.223 Found net devices under 0000:86:00.1: cvl_0_1 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:37.223 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:37.223 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:20:37.223 00:20:37.223 --- 10.0.0.2 ping statistics --- 00:20:37.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:37.223 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:37.223 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:37.223 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:20:37.223 00:20:37.223 --- 10.0.0.1 ping statistics --- 00:20:37.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:37.223 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2418525 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2418525 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2418525 ']' 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
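The nvmf_tcp_init sequence just above builds the two-port loopback topology this test runs on: one port of the e810 NIC (cvl_0_0) is moved into the cvl_0_0_ns_spdk network namespace and serves as the target side at 10.0.0.2, while its sibling port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. Condensed from the trace (interface names are specific to this machine; run as root):

# Condensed from the nvmf_tcp_init trace above.
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                              # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator address
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0      # target address
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT # NVMe/TCP listener port
ping -c 1 10.0.0.2                                           # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1                       # target -> initiator

The nvmf_tgt application started right after this point is wrapped in NVMF_TARGET_NS_CMD, so it runs inside the namespace and listens on 10.0.0.2:4420, while the host-side spdk_tgt (/var/tmp/host.sock) connects to it from the root namespace.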
00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=2418545 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=218ac176f9ea47f0b27086b3f650b3d5a1e0f5685d37553a 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Adg 00:20:37.223 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 218ac176f9ea47f0b27086b3f650b3d5a1e0f5685d37553a 0 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 218ac176f9ea47f0b27086b3f650b3d5a1e0f5685d37553a 0 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=218ac176f9ea47f0b27086b3f650b3d5a1e0f5685d37553a 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Adg 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Adg 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.Adg 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=8819c63bdb2b9097ebfcedba99e759e98ddb7474b53bec680b0436708658d608 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.OJ9 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 8819c63bdb2b9097ebfcedba99e759e98ddb7474b53bec680b0436708658d608 3 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 8819c63bdb2b9097ebfcedba99e759e98ddb7474b53bec680b0436708658d608 3 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=8819c63bdb2b9097ebfcedba99e759e98ddb7474b53bec680b0436708658d608 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.OJ9 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.OJ9 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.OJ9 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b4b64c7b9040b8d51da46dedee832c05 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.sxG 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b4b64c7b9040b8d51da46dedee832c05 1 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b4b64c7b9040b8d51da46dedee832c05 1 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=b4b64c7b9040b8d51da46dedee832c05 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.sxG 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.sxG 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.sxG 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=3084d20689685342dd50aaa159cd8b6730766109c8c4e4f1 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.1I0 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 3084d20689685342dd50aaa159cd8b6730766109c8c4e4f1 2 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 3084d20689685342dd50aaa159cd8b6730766109c8c4e4f1 2 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=3084d20689685342dd50aaa159cd8b6730766109c8c4e4f1 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.1I0 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.1I0 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.1I0 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=3f2a3bb6d98890f8b66579aa3ff6d9953da57cf62cb7408d 00:20:37.224 
10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.0Kv 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 3f2a3bb6d98890f8b66579aa3ff6d9953da57cf62cb7408d 2 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 3f2a3bb6d98890f8b66579aa3ff6d9953da57cf62cb7408d 2 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=3f2a3bb6d98890f8b66579aa3ff6d9953da57cf62cb7408d 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.0Kv 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.0Kv 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.0Kv 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=cf13efe99122ae109bb4e870bf3a778f 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.RYD 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key cf13efe99122ae109bb4e870bf3a778f 1 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 cf13efe99122ae109bb4e870bf3a778f 1 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=cf13efe99122ae109bb4e870bf3a778f 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.RYD 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.RYD 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.RYD 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=fef907aefc0aa12b53ca5a687adc16cffc7f6f6894e412ee72477bbd8bb45cd3 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.EHg 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key fef907aefc0aa12b53ca5a687adc16cffc7f6f6894e412ee72477bbd8bb45cd3 3 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 fef907aefc0aa12b53ca5a687adc16cffc7f6f6894e412ee72477bbd8bb45cd3 3 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=fef907aefc0aa12b53ca5a687adc16cffc7f6f6894e412ee72477bbd8bb45cd3 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:20:37.224 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:37.225 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.EHg 00:20:37.225 10:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.EHg 00:20:37.225 10:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.EHg 00:20:37.225 10:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:20:37.225 10:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 2418525 00:20:37.225 10:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2418525 ']' 00:20:37.225 10:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:37.225 10:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:37.225 10:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:37.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
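Each of the four key/ckey pairs above comes from gen_dhchap_key: a hex string of the requested length is read from /dev/urandom with xxd, wrapped into a DHHC-1 secret by an inline "python -" helper (whose body is not echoed in the trace), and written with mode 0600 to a /tmp/spdk.key-*.XXX file. Below is a sketch that mirrors those steps; the encoding shown (base64 of the hex string's bytes with their little-endian CRC-32 appended, digest index as two hex digits) is an assumption about what the helper does, though it is consistent with the DHHC-1:00:MjE4YWMx... secrets that appear in the nvme connect lines later in this log.

# Sketch of gen_dhchap_key as traced above; the DHHC-1 encoding is an assumption.
gen_dhchap_key_sketch() {
    local digest_idx=$1   # 0=null, 1=sha256, 2=sha384, 3=sha512 (as in the trace)
    local hex_len=$2      # 48 or 64 hex characters in this run
    local key file
    key=$(xxd -p -c0 -l $((hex_len / 2)) /dev/urandom)
    file=$(mktemp -t spdk.key.XXX)
    python3 - "$key" "$digest_idx" > "$file" <<'PYEOF'
import base64, struct, sys, zlib
key = sys.argv[1].encode()              # the hex string itself is the secret
crc = struct.pack("<I", zlib.crc32(key) & 0xffffffff)
print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
PYEOF
    chmod 0600 "$file"
    echo "$file"
}
# e.g. keys[0] in this run was a 48-character hex key formatted with digest index 0:
# gen_dhchap_key_sketch 0 48

The resulting files are what keyring_file_add_key loads on both the target (/var/tmp/spdk.sock) and the host (/var/tmp/host.sock) in the steps that follow.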
00:20:37.225 10:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:37.225 10:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.225 10:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:37.225 10:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:20:37.225 10:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 2418545 /var/tmp/host.sock 00:20:37.225 10:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2418545 ']' 00:20:37.225 10:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:20:37.225 10:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:37.225 10:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:20:37.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:20:37.225 10:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:37.225 10:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.484 10:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:37.484 10:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:20:37.484 10:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:20:37.484 10:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.484 10:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.484 10:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.484 10:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:20:37.484 10:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Adg 00:20:37.484 10:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.484 10:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.484 10:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.484 10:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Adg 00:20:37.484 10:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Adg 00:20:37.743 10:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.OJ9 ]] 00:20:37.743 10:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.OJ9 00:20:37.743 10:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.743 10:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.743 10:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.743 10:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.OJ9 00:20:37.743 10:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.OJ9 00:20:38.002 10:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:20:38.002 10:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.sxG 00:20:38.002 10:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.003 10:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.003 10:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.003 10:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.sxG 00:20:38.003 10:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.sxG 00:20:38.003 10:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.1I0 ]] 00:20:38.003 10:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.1I0 00:20:38.003 10:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.003 10:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.262 10:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.262 10:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.1I0 00:20:38.262 10:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.1I0 00:20:38.262 10:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:20:38.262 10:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.0Kv 00:20:38.262 10:29:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.262 10:29:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.262 10:29:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.262 10:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.0Kv 00:20:38.262 10:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.0Kv 00:20:38.521 10:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.RYD ]] 00:20:38.521 10:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.RYD 00:20:38.521 10:29:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.521 10:29:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.521 10:29:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.521 10:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.RYD 00:20:38.521 10:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.RYD 00:20:38.780 10:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:20:38.780 10:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.EHg 00:20:38.780 10:29:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.780 10:29:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.780 10:29:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.780 10:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.EHg 00:20:38.780 10:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.EHg 00:20:39.070 10:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:20:39.070 10:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:20:39.070 10:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:39.070 10:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:39.070 10:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:39.070 10:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:39.070 10:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:20:39.070 10:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:39.070 10:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:39.070 10:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:39.070 10:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:39.070 10:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:39.070 10:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:39.070 10:29:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.070 10:29:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.070 10:29:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.070 10:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:39.070 10:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:39.363 00:20:39.363 10:29:24 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:39.363 10:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:39.363 10:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.621 10:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.621 10:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.621 10:29:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.621 10:29:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.621 10:29:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.621 10:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:39.621 { 00:20:39.621 "cntlid": 1, 00:20:39.621 "qid": 0, 00:20:39.621 "state": "enabled", 00:20:39.621 "thread": "nvmf_tgt_poll_group_000", 00:20:39.621 "listen_address": { 00:20:39.621 "trtype": "TCP", 00:20:39.621 "adrfam": "IPv4", 00:20:39.621 "traddr": "10.0.0.2", 00:20:39.621 "trsvcid": "4420" 00:20:39.621 }, 00:20:39.621 "peer_address": { 00:20:39.621 "trtype": "TCP", 00:20:39.621 "adrfam": "IPv4", 00:20:39.621 "traddr": "10.0.0.1", 00:20:39.621 "trsvcid": "35726" 00:20:39.621 }, 00:20:39.621 "auth": { 00:20:39.621 "state": "completed", 00:20:39.621 "digest": "sha256", 00:20:39.621 "dhgroup": "null" 00:20:39.621 } 00:20:39.621 } 00:20:39.621 ]' 00:20:39.621 10:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:39.621 10:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:39.621 10:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:39.621 10:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:39.621 10:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:39.621 10:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.621 10:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.621 10:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.880 10:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MjE4YWMxNzZmOWVhNDdmMGIyNzA4NmIzZjY1MGIzZDVhMWUwZjU2ODVkMzc1NTNhiq/5aw==: --dhchap-ctrl-secret DHHC-1:03:ODgxOWM2M2JkYjJiOTA5N2ViZmNlZGJhOTllNzU5ZTk4ZGRiNzQ3NGI1M2JlYzY4MGIwNDM2NzA4NjU4ZDYwOLHYE6U=: 00:20:40.447 10:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.447 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.447 10:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:40.447 10:29:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.447 10:29:25 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.447 10:29:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.447 10:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:40.447 10:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:40.447 10:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:40.706 10:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:20:40.707 10:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:40.707 10:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:40.707 10:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:40.707 10:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:40.707 10:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.707 10:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:40.707 10:29:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.707 10:29:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.707 10:29:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.707 10:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:40.707 10:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:40.966 00:20:40.966 10:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:40.966 10:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:40.966 10:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.966 10:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.966 10:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:40.966 10:29:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.966 10:29:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.966 10:29:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.966 10:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:40.966 { 00:20:40.966 "cntlid": 3, 00:20:40.966 "qid": 0, 00:20:40.966 
"state": "enabled", 00:20:40.966 "thread": "nvmf_tgt_poll_group_000", 00:20:40.966 "listen_address": { 00:20:40.966 "trtype": "TCP", 00:20:40.966 "adrfam": "IPv4", 00:20:40.966 "traddr": "10.0.0.2", 00:20:40.966 "trsvcid": "4420" 00:20:40.966 }, 00:20:40.966 "peer_address": { 00:20:40.966 "trtype": "TCP", 00:20:40.966 "adrfam": "IPv4", 00:20:40.966 "traddr": "10.0.0.1", 00:20:40.966 "trsvcid": "35758" 00:20:40.966 }, 00:20:40.966 "auth": { 00:20:40.966 "state": "completed", 00:20:40.966 "digest": "sha256", 00:20:40.966 "dhgroup": "null" 00:20:40.966 } 00:20:40.966 } 00:20:40.966 ]' 00:20:40.966 10:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:41.226 10:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:41.226 10:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:41.226 10:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:41.226 10:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:41.226 10:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:41.226 10:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.226 10:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.485 10:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YjRiNjRjN2I5MDQwYjhkNTFkYTQ2ZGVkZWU4MzJjMDUr2Uou: --dhchap-ctrl-secret DHHC-1:02:MzA4NGQyMDY4OTY4NTM0MmRkNTBhYWExNTljZDhiNjczMDc2NjEwOWM4YzRlNGYxv2n4bA==: 00:20:42.054 10:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:42.054 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:42.054 10:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:42.054 10:29:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.054 10:29:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.054 10:29:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.054 10:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:42.054 10:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:42.054 10:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:42.054 10:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:20:42.054 10:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:42.054 10:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:42.054 10:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:42.054 10:29:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:42.054 10:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:42.054 10:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:42.054 10:29:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.054 10:29:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.054 10:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.054 10:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:42.054 10:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:42.313 00:20:42.313 10:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:42.313 10:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:42.313 10:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.573 10:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.573 10:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.573 10:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.573 10:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.573 10:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.573 10:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:42.573 { 00:20:42.573 "cntlid": 5, 00:20:42.573 "qid": 0, 00:20:42.573 "state": "enabled", 00:20:42.573 "thread": "nvmf_tgt_poll_group_000", 00:20:42.573 "listen_address": { 00:20:42.573 "trtype": "TCP", 00:20:42.573 "adrfam": "IPv4", 00:20:42.573 "traddr": "10.0.0.2", 00:20:42.573 "trsvcid": "4420" 00:20:42.573 }, 00:20:42.573 "peer_address": { 00:20:42.573 "trtype": "TCP", 00:20:42.573 "adrfam": "IPv4", 00:20:42.573 "traddr": "10.0.0.1", 00:20:42.573 "trsvcid": "33878" 00:20:42.573 }, 00:20:42.573 "auth": { 00:20:42.573 "state": "completed", 00:20:42.573 "digest": "sha256", 00:20:42.573 "dhgroup": "null" 00:20:42.573 } 00:20:42.573 } 00:20:42.573 ]' 00:20:42.573 10:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:42.573 10:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:42.573 10:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:42.573 10:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:42.573 10:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:20:42.855 10:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.855 10:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.855 10:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.855 10:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:M2YyYTNiYjZkOTg4OTBmOGI2NjU3OWFhM2ZmNmQ5OTUzZGE1N2NmNjJjYjc0MDhk96Cs+Q==: --dhchap-ctrl-secret DHHC-1:01:Y2YxM2VmZTk5MTIyYWUxMDliYjRlODcwYmYzYTc3OGb2bxF+: 00:20:43.423 10:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.423 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.423 10:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:43.423 10:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.423 10:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.423 10:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.423 10:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:43.423 10:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:43.423 10:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:43.682 10:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:20:43.683 10:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:43.683 10:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:43.683 10:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:43.683 10:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:43.683 10:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:43.683 10:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:20:43.683 10:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.683 10:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.683 10:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.683 10:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:43.683 10:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:43.941 00:20:43.941 10:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:43.941 10:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:43.941 10:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:44.200 10:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.200 10:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.200 10:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.200 10:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.200 10:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.200 10:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:44.200 { 00:20:44.200 "cntlid": 7, 00:20:44.200 "qid": 0, 00:20:44.200 "state": "enabled", 00:20:44.200 "thread": "nvmf_tgt_poll_group_000", 00:20:44.200 "listen_address": { 00:20:44.200 "trtype": "TCP", 00:20:44.200 "adrfam": "IPv4", 00:20:44.200 "traddr": "10.0.0.2", 00:20:44.200 "trsvcid": "4420" 00:20:44.200 }, 00:20:44.200 "peer_address": { 00:20:44.200 "trtype": "TCP", 00:20:44.200 "adrfam": "IPv4", 00:20:44.200 "traddr": "10.0.0.1", 00:20:44.200 "trsvcid": "33906" 00:20:44.200 }, 00:20:44.200 "auth": { 00:20:44.200 "state": "completed", 00:20:44.200 "digest": "sha256", 00:20:44.200 "dhgroup": "null" 00:20:44.200 } 00:20:44.200 } 00:20:44.200 ]' 00:20:44.200 10:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:44.200 10:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:44.200 10:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:44.201 10:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:44.201 10:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:44.201 10:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.201 10:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.201 10:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:44.459 10:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZmVmOTA3YWVmYzBhYTEyYjUzY2E1YTY4N2FkYzE2Y2ZmYzdmNmY2ODk0ZTQxMmVlNzI0NzdiYmQ4YmI0NWNkM2xcN3c=: 00:20:45.028 10:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.028 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.028 10:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:45.028 10:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.028 10:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.028 10:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.028 10:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:45.028 10:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:45.028 10:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:45.028 10:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:45.286 10:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:20:45.286 10:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:45.286 10:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:45.286 10:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:45.286 10:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:45.286 10:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:45.286 10:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.286 10:29:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.286 10:29:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.286 10:29:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.286 10:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.286 10:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.286 00:20:45.546 10:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:45.546 10:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:45.546 10:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.546 10:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.546 10:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.546 10:29:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:20:45.546 10:29:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.546 10:29:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.546 10:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:45.546 { 00:20:45.546 "cntlid": 9, 00:20:45.546 "qid": 0, 00:20:45.546 "state": "enabled", 00:20:45.546 "thread": "nvmf_tgt_poll_group_000", 00:20:45.546 "listen_address": { 00:20:45.546 "trtype": "TCP", 00:20:45.546 "adrfam": "IPv4", 00:20:45.546 "traddr": "10.0.0.2", 00:20:45.546 "trsvcid": "4420" 00:20:45.546 }, 00:20:45.546 "peer_address": { 00:20:45.546 "trtype": "TCP", 00:20:45.546 "adrfam": "IPv4", 00:20:45.546 "traddr": "10.0.0.1", 00:20:45.546 "trsvcid": "33926" 00:20:45.546 }, 00:20:45.546 "auth": { 00:20:45.546 "state": "completed", 00:20:45.546 "digest": "sha256", 00:20:45.546 "dhgroup": "ffdhe2048" 00:20:45.546 } 00:20:45.546 } 00:20:45.546 ]' 00:20:45.546 10:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:45.546 10:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:45.546 10:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:45.805 10:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:45.805 10:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:45.805 10:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:45.805 10:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.805 10:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:46.064 10:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MjE4YWMxNzZmOWVhNDdmMGIyNzA4NmIzZjY1MGIzZDVhMWUwZjU2ODVkMzc1NTNhiq/5aw==: --dhchap-ctrl-secret DHHC-1:03:ODgxOWM2M2JkYjJiOTA5N2ViZmNlZGJhOTllNzU5ZTk4ZGRiNzQ3NGI1M2JlYzY4MGIwNDM2NzA4NjU4ZDYwOLHYE6U=: 00:20:46.632 10:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:46.632 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.632 10:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:46.632 10:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.632 10:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.632 10:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.632 10:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:46.632 10:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:46.632 10:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:20:46.632 10:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:20:46.632 10:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:46.632 10:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:46.632 10:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:46.632 10:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:46.632 10:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:46.632 10:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:46.632 10:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.632 10:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.632 10:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.632 10:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:46.632 10:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:46.891 00:20:46.891 10:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:46.891 10:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:46.891 10:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.149 10:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.149 10:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:47.149 10:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.149 10:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.149 10:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.149 10:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:47.149 { 00:20:47.149 "cntlid": 11, 00:20:47.149 "qid": 0, 00:20:47.149 "state": "enabled", 00:20:47.149 "thread": "nvmf_tgt_poll_group_000", 00:20:47.149 "listen_address": { 00:20:47.149 "trtype": "TCP", 00:20:47.149 "adrfam": "IPv4", 00:20:47.149 "traddr": "10.0.0.2", 00:20:47.149 "trsvcid": "4420" 00:20:47.149 }, 00:20:47.149 "peer_address": { 00:20:47.149 "trtype": "TCP", 00:20:47.149 "adrfam": "IPv4", 00:20:47.149 "traddr": "10.0.0.1", 00:20:47.149 "trsvcid": "33952" 00:20:47.149 }, 00:20:47.149 "auth": { 00:20:47.149 "state": "completed", 00:20:47.149 "digest": "sha256", 00:20:47.149 "dhgroup": "ffdhe2048" 00:20:47.149 } 00:20:47.149 } 00:20:47.149 ]' 00:20:47.149 
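The qpair dump above closes one connect_authenticate pass. Condensed from the commands recorded in this trace, a single pass amounts to the sketch below; the DHHC-1 secrets are placeholders (not the values from this run), key1/ckey1 are key names set up earlier in the script outside this stretch of the log, and the target-side RPCs are assumed to use the default SPDK RPC socket while the host-side initiator uses /var/tmp/host.sock as in the log.

  SPDK_RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
  SUBNQN=nqn.2024-03.io.spdk:cnode0

  # Host side: restrict the initiator to the digest/dhgroup pair under test.
  $SPDK_RPC -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups null

  # Target side: allow the host on the subsystem with the key pair under test
  # (default RPC socket assumed for the target).
  $SPDK_RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Host side: attaching the controller performs DH-HMAC-CHAP authentication.
  $SPDK_RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
      -f ipv4 -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Check that the qpair reports the expected digest, dhgroup and auth state.
  $SPDK_RPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth'

  # Repeat the same authentication with the kernel initiator, then clean up.
  $SPDK_RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
      --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 \
      --dhchap-secret 'DHHC-1:01:<key1>:' --dhchap-ctrl-secret 'DHHC-1:02:<ckey1>:'
  nvme disconnect -n "$SUBNQN"
  $SPDK_RPC nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"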
10:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:47.149 10:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:47.149 10:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:47.149 10:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:47.149 10:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:47.149 10:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.149 10:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.149 10:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:47.408 10:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YjRiNjRjN2I5MDQwYjhkNTFkYTQ2ZGVkZWU4MzJjMDUr2Uou: --dhchap-ctrl-secret DHHC-1:02:MzA4NGQyMDY4OTY4NTM0MmRkNTBhYWExNTljZDhiNjczMDc2NjEwOWM4YzRlNGYxv2n4bA==: 00:20:47.974 10:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.974 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:47.974 10:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:47.974 10:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.974 10:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.974 10:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.974 10:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:47.974 10:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:47.974 10:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:48.233 10:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:20:48.233 10:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:48.233 10:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:48.233 10:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:48.233 10:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:48.233 10:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:48.233 10:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:48.233 10:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.233 10:29:33 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:48.233 10:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.233 10:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:48.233 10:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:48.491 00:20:48.491 10:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:48.491 10:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:48.491 10:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:48.491 10:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.491 10:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:48.491 10:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.491 10:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.749 10:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.749 10:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:48.749 { 00:20:48.749 "cntlid": 13, 00:20:48.749 "qid": 0, 00:20:48.749 "state": "enabled", 00:20:48.749 "thread": "nvmf_tgt_poll_group_000", 00:20:48.749 "listen_address": { 00:20:48.749 "trtype": "TCP", 00:20:48.749 "adrfam": "IPv4", 00:20:48.749 "traddr": "10.0.0.2", 00:20:48.749 "trsvcid": "4420" 00:20:48.749 }, 00:20:48.749 "peer_address": { 00:20:48.749 "trtype": "TCP", 00:20:48.749 "adrfam": "IPv4", 00:20:48.749 "traddr": "10.0.0.1", 00:20:48.749 "trsvcid": "33980" 00:20:48.749 }, 00:20:48.749 "auth": { 00:20:48.749 "state": "completed", 00:20:48.749 "digest": "sha256", 00:20:48.749 "dhgroup": "ffdhe2048" 00:20:48.749 } 00:20:48.749 } 00:20:48.749 ]' 00:20:48.749 10:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:48.749 10:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:48.749 10:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:48.749 10:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:48.749 10:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:48.749 10:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:48.749 10:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:48.749 10:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:49.008 10:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:M2YyYTNiYjZkOTg4OTBmOGI2NjU3OWFhM2ZmNmQ5OTUzZGE1N2NmNjJjYjc0MDhk96Cs+Q==: --dhchap-ctrl-secret DHHC-1:01:Y2YxM2VmZTk5MTIyYWUxMDliYjRlODcwYmYzYTc3OGb2bxF+: 00:20:49.574 10:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.574 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.574 10:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:49.574 10:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.574 10:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.574 10:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.574 10:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:49.574 10:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:49.574 10:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:49.833 10:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:20:49.833 10:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:49.833 10:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:49.833 10:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:49.833 10:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:49.833 10:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.833 10:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:20:49.833 10:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.833 10:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.833 10:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.833 10:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:49.833 10:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:49.833 00:20:50.091 10:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:50.091 10:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:20:50.091 10:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:50.091 10:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.092 10:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.092 10:29:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.092 10:29:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.092 10:29:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.092 10:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:50.092 { 00:20:50.092 "cntlid": 15, 00:20:50.092 "qid": 0, 00:20:50.092 "state": "enabled", 00:20:50.092 "thread": "nvmf_tgt_poll_group_000", 00:20:50.092 "listen_address": { 00:20:50.092 "trtype": "TCP", 00:20:50.092 "adrfam": "IPv4", 00:20:50.092 "traddr": "10.0.0.2", 00:20:50.092 "trsvcid": "4420" 00:20:50.092 }, 00:20:50.092 "peer_address": { 00:20:50.092 "trtype": "TCP", 00:20:50.092 "adrfam": "IPv4", 00:20:50.092 "traddr": "10.0.0.1", 00:20:50.092 "trsvcid": "34006" 00:20:50.092 }, 00:20:50.092 "auth": { 00:20:50.092 "state": "completed", 00:20:50.092 "digest": "sha256", 00:20:50.092 "dhgroup": "ffdhe2048" 00:20:50.092 } 00:20:50.092 } 00:20:50.092 ]' 00:20:50.092 10:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:50.092 10:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:50.092 10:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:50.350 10:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:50.350 10:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:50.350 10:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:50.350 10:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.350 10:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.350 10:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZmVmOTA3YWVmYzBhYTEyYjUzY2E1YTY4N2FkYzE2Y2ZmYzdmNmY2ODk0ZTQxMmVlNzI0NzdiYmQ4YmI0NWNkM2xcN3c=: 00:20:50.917 10:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:50.917 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:50.917 10:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:50.917 10:29:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.917 10:29:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.917 10:29:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.917 10:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:50.917 10:29:35 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:50.917 10:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:50.917 10:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:51.176 10:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:20:51.176 10:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:51.176 10:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:51.176 10:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:51.176 10:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:51.176 10:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:51.176 10:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:51.176 10:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.176 10:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.176 10:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.176 10:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:51.176 10:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:51.435 00:20:51.435 10:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:51.435 10:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:51.435 10:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.693 10:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.693 10:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:51.693 10:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.693 10:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.693 10:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.693 10:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:51.693 { 00:20:51.693 "cntlid": 17, 00:20:51.693 "qid": 0, 00:20:51.693 "state": "enabled", 00:20:51.693 "thread": "nvmf_tgt_poll_group_000", 00:20:51.693 "listen_address": { 00:20:51.693 "trtype": "TCP", 00:20:51.693 "adrfam": "IPv4", 
00:20:51.693 "traddr": "10.0.0.2", 00:20:51.693 "trsvcid": "4420" 00:20:51.693 }, 00:20:51.693 "peer_address": { 00:20:51.693 "trtype": "TCP", 00:20:51.693 "adrfam": "IPv4", 00:20:51.693 "traddr": "10.0.0.1", 00:20:51.693 "trsvcid": "34036" 00:20:51.693 }, 00:20:51.694 "auth": { 00:20:51.694 "state": "completed", 00:20:51.694 "digest": "sha256", 00:20:51.694 "dhgroup": "ffdhe3072" 00:20:51.694 } 00:20:51.694 } 00:20:51.694 ]' 00:20:51.694 10:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:51.694 10:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:51.694 10:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:51.694 10:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:51.694 10:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:51.694 10:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:51.694 10:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:51.694 10:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:51.952 10:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MjE4YWMxNzZmOWVhNDdmMGIyNzA4NmIzZjY1MGIzZDVhMWUwZjU2ODVkMzc1NTNhiq/5aw==: --dhchap-ctrl-secret DHHC-1:03:ODgxOWM2M2JkYjJiOTA5N2ViZmNlZGJhOTllNzU5ZTk4ZGRiNzQ3NGI1M2JlYzY4MGIwNDM2NzA4NjU4ZDYwOLHYE6U=: 00:20:52.519 10:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:52.519 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:52.519 10:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:52.519 10:29:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.519 10:29:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.519 10:29:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.519 10:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:52.519 10:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:52.519 10:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:52.778 10:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:20:52.778 10:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:52.778 10:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:52.778 10:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:52.778 10:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:52.778 10:29:37 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:52.778 10:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:52.778 10:29:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.778 10:29:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.778 10:29:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.778 10:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:52.778 10:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.037 00:20:53.037 10:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:53.037 10:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:53.037 10:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:53.334 10:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.334 10:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:53.334 10:29:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.334 10:29:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.334 10:29:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.334 10:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:53.334 { 00:20:53.334 "cntlid": 19, 00:20:53.334 "qid": 0, 00:20:53.334 "state": "enabled", 00:20:53.334 "thread": "nvmf_tgt_poll_group_000", 00:20:53.334 "listen_address": { 00:20:53.335 "trtype": "TCP", 00:20:53.335 "adrfam": "IPv4", 00:20:53.335 "traddr": "10.0.0.2", 00:20:53.335 "trsvcid": "4420" 00:20:53.335 }, 00:20:53.335 "peer_address": { 00:20:53.335 "trtype": "TCP", 00:20:53.335 "adrfam": "IPv4", 00:20:53.335 "traddr": "10.0.0.1", 00:20:53.335 "trsvcid": "46108" 00:20:53.335 }, 00:20:53.335 "auth": { 00:20:53.335 "state": "completed", 00:20:53.335 "digest": "sha256", 00:20:53.335 "dhgroup": "ffdhe3072" 00:20:53.335 } 00:20:53.335 } 00:20:53.335 ]' 00:20:53.335 10:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:53.335 10:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:53.335 10:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:53.335 10:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:53.335 10:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:53.335 10:29:38 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:53.335 10:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:53.335 10:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.624 10:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YjRiNjRjN2I5MDQwYjhkNTFkYTQ2ZGVkZWU4MzJjMDUr2Uou: --dhchap-ctrl-secret DHHC-1:02:MzA4NGQyMDY4OTY4NTM0MmRkNTBhYWExNTljZDhiNjczMDc2NjEwOWM4YzRlNGYxv2n4bA==: 00:20:54.193 10:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:54.193 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:54.193 10:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:54.193 10:29:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.193 10:29:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.193 10:29:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.193 10:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:54.193 10:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:54.193 10:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:54.193 10:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:20:54.193 10:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:54.193 10:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:54.193 10:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:54.193 10:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:54.193 10:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:54.193 10:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:54.193 10:29:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.193 10:29:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.452 10:29:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.452 10:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:54.452 10:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:54.452 00:20:54.452 10:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:54.452 10:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.452 10:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:54.711 10:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.711 10:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:54.711 10:29:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.711 10:29:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.711 10:29:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.711 10:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:54.711 { 00:20:54.711 "cntlid": 21, 00:20:54.711 "qid": 0, 00:20:54.711 "state": "enabled", 00:20:54.711 "thread": "nvmf_tgt_poll_group_000", 00:20:54.711 "listen_address": { 00:20:54.711 "trtype": "TCP", 00:20:54.711 "adrfam": "IPv4", 00:20:54.711 "traddr": "10.0.0.2", 00:20:54.711 "trsvcid": "4420" 00:20:54.711 }, 00:20:54.711 "peer_address": { 00:20:54.711 "trtype": "TCP", 00:20:54.711 "adrfam": "IPv4", 00:20:54.711 "traddr": "10.0.0.1", 00:20:54.711 "trsvcid": "46144" 00:20:54.711 }, 00:20:54.711 "auth": { 00:20:54.711 "state": "completed", 00:20:54.711 "digest": "sha256", 00:20:54.711 "dhgroup": "ffdhe3072" 00:20:54.711 } 00:20:54.711 } 00:20:54.711 ]' 00:20:54.711 10:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:54.711 10:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:54.711 10:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:54.971 10:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:54.971 10:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:54.971 10:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:54.971 10:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.971 10:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:54.971 10:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:M2YyYTNiYjZkOTg4OTBmOGI2NjU3OWFhM2ZmNmQ5OTUzZGE1N2NmNjJjYjc0MDhk96Cs+Q==: --dhchap-ctrl-secret DHHC-1:01:Y2YxM2VmZTk5MTIyYWUxMDliYjRlODcwYmYzYTc3OGb2bxF+: 00:20:55.539 10:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:55.798 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
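This stretch of the trace repeats the same pass once per key for each DH group (null, ffdhe2048, ffdhe3072, and later ffdhe4096), always with the sha256 digest. A minimal skeleton of the driver loop, reconstructed from the target/auth.sh@92-96 xtrace frames with the keys array contents elided, is:

  # Skeleton of the loop driving these iterations (reconstructed from the
  # auth.sh@92-96 frames; the keys/ckeys arrays are set up earlier in the script).
  digest=sha256
  for dhgroup in "${dhgroups[@]}"; do     # auth.sh@92: null, ffdhe2048, ffdhe3072, ffdhe4096, ...
    for keyid in "${!keys[@]}"; do        # auth.sh@93: keys 0..3
      # auth.sh@94: limit the host to the single digest/dhgroup combination under test.
      hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      # auth.sh@96: add the host, attach, verify the qpair auth state, detach,
      # reconnect with nvme-cli, then remove the host again.
      connect_authenticate "$digest" "$dhgroup" "$keyid"
    done
  done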
00:20:55.798 10:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:55.798 10:29:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.798 10:29:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.798 10:29:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.798 10:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:55.798 10:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:55.798 10:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:55.798 10:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:20:55.798 10:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:55.798 10:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:55.798 10:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:55.798 10:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:55.798 10:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:55.798 10:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:20:55.798 10:29:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.798 10:29:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.798 10:29:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.798 10:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:55.798 10:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:56.057 00:20:56.057 10:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:56.057 10:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:56.057 10:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:56.317 10:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.317 10:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:56.317 10:29:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.317 10:29:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:20:56.317 10:29:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.317 10:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:56.317 { 00:20:56.317 "cntlid": 23, 00:20:56.317 "qid": 0, 00:20:56.317 "state": "enabled", 00:20:56.317 "thread": "nvmf_tgt_poll_group_000", 00:20:56.317 "listen_address": { 00:20:56.317 "trtype": "TCP", 00:20:56.317 "adrfam": "IPv4", 00:20:56.317 "traddr": "10.0.0.2", 00:20:56.317 "trsvcid": "4420" 00:20:56.317 }, 00:20:56.317 "peer_address": { 00:20:56.317 "trtype": "TCP", 00:20:56.317 "adrfam": "IPv4", 00:20:56.317 "traddr": "10.0.0.1", 00:20:56.317 "trsvcid": "46182" 00:20:56.317 }, 00:20:56.317 "auth": { 00:20:56.317 "state": "completed", 00:20:56.317 "digest": "sha256", 00:20:56.317 "dhgroup": "ffdhe3072" 00:20:56.317 } 00:20:56.317 } 00:20:56.317 ]' 00:20:56.317 10:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:56.317 10:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:56.317 10:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:56.317 10:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:56.317 10:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:56.317 10:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.317 10:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.317 10:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.576 10:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZmVmOTA3YWVmYzBhYTEyYjUzY2E1YTY4N2FkYzE2Y2ZmYzdmNmY2ODk0ZTQxMmVlNzI0NzdiYmQ4YmI0NWNkM2xcN3c=: 00:20:57.145 10:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.145 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:57.145 10:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:57.145 10:29:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.145 10:29:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.145 10:29:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.145 10:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:57.145 10:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:57.145 10:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:57.145 10:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:57.404 10:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 0 00:20:57.404 10:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:57.404 10:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:57.404 10:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:57.404 10:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:57.404 10:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:57.404 10:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:57.404 10:29:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.404 10:29:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.404 10:29:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.404 10:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:57.404 10:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:57.663 00:20:57.663 10:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:57.663 10:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:57.663 10:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.922 10:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.922 10:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:57.922 10:29:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.922 10:29:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.922 10:29:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.922 10:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:57.922 { 00:20:57.922 "cntlid": 25, 00:20:57.922 "qid": 0, 00:20:57.922 "state": "enabled", 00:20:57.922 "thread": "nvmf_tgt_poll_group_000", 00:20:57.922 "listen_address": { 00:20:57.922 "trtype": "TCP", 00:20:57.922 "adrfam": "IPv4", 00:20:57.922 "traddr": "10.0.0.2", 00:20:57.922 "trsvcid": "4420" 00:20:57.922 }, 00:20:57.922 "peer_address": { 00:20:57.922 "trtype": "TCP", 00:20:57.922 "adrfam": "IPv4", 00:20:57.922 "traddr": "10.0.0.1", 00:20:57.922 "trsvcid": "46204" 00:20:57.922 }, 00:20:57.922 "auth": { 00:20:57.922 "state": "completed", 00:20:57.922 "digest": "sha256", 00:20:57.922 "dhgroup": "ffdhe4096" 00:20:57.922 } 00:20:57.922 } 00:20:57.922 ]' 00:20:57.922 10:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:57.922 10:29:42 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:57.922 10:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:57.922 10:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:57.922 10:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:57.922 10:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.922 10:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.922 10:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.182 10:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MjE4YWMxNzZmOWVhNDdmMGIyNzA4NmIzZjY1MGIzZDVhMWUwZjU2ODVkMzc1NTNhiq/5aw==: --dhchap-ctrl-secret DHHC-1:03:ODgxOWM2M2JkYjJiOTA5N2ViZmNlZGJhOTllNzU5ZTk4ZGRiNzQ3NGI1M2JlYzY4MGIwNDM2NzA4NjU4ZDYwOLHYE6U=: 00:20:58.749 10:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:58.749 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:58.749 10:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:58.749 10:29:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.749 10:29:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.749 10:29:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.749 10:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:58.749 10:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:58.749 10:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:59.007 10:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:20:59.007 10:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:59.007 10:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:59.007 10:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:59.007 10:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:59.007 10:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:59.007 10:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:59.007 10:29:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.007 10:29:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.007 10:29:43 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.007 10:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:59.007 10:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:59.266 00:20:59.266 10:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:59.266 10:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.266 10:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:59.266 10:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.266 10:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:59.266 10:29:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.266 10:29:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.266 10:29:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.266 10:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:59.266 { 00:20:59.266 "cntlid": 27, 00:20:59.266 "qid": 0, 00:20:59.266 "state": "enabled", 00:20:59.266 "thread": "nvmf_tgt_poll_group_000", 00:20:59.266 "listen_address": { 00:20:59.266 "trtype": "TCP", 00:20:59.266 "adrfam": "IPv4", 00:20:59.266 "traddr": "10.0.0.2", 00:20:59.266 "trsvcid": "4420" 00:20:59.266 }, 00:20:59.266 "peer_address": { 00:20:59.266 "trtype": "TCP", 00:20:59.266 "adrfam": "IPv4", 00:20:59.266 "traddr": "10.0.0.1", 00:20:59.266 "trsvcid": "46232" 00:20:59.266 }, 00:20:59.266 "auth": { 00:20:59.266 "state": "completed", 00:20:59.266 "digest": "sha256", 00:20:59.266 "dhgroup": "ffdhe4096" 00:20:59.266 } 00:20:59.266 } 00:20:59.266 ]' 00:20:59.266 10:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:59.524 10:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:59.524 10:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:59.524 10:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:59.524 10:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:59.524 10:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:59.524 10:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:59.524 10:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:59.781 10:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YjRiNjRjN2I5MDQwYjhkNTFkYTQ2ZGVkZWU4MzJjMDUr2Uou: --dhchap-ctrl-secret DHHC-1:02:MzA4NGQyMDY4OTY4NTM0MmRkNTBhYWExNTljZDhiNjczMDc2NjEwOWM4YzRlNGYxv2n4bA==: 00:21:00.348 10:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:00.348 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:00.348 10:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:00.348 10:29:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.348 10:29:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.348 10:29:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.348 10:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:00.348 10:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:00.348 10:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:00.348 10:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:21:00.348 10:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:00.348 10:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:00.348 10:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:00.348 10:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:00.348 10:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:00.348 10:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:00.348 10:29:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.348 10:29:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.348 10:29:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.348 10:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:00.348 10:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:00.606 00:21:00.864 10:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:00.864 10:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:00.864 10:29:45 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.864 10:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.864 10:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.864 10:29:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.864 10:29:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.864 10:29:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.864 10:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:00.864 { 00:21:00.864 "cntlid": 29, 00:21:00.864 "qid": 0, 00:21:00.864 "state": "enabled", 00:21:00.864 "thread": "nvmf_tgt_poll_group_000", 00:21:00.864 "listen_address": { 00:21:00.864 "trtype": "TCP", 00:21:00.864 "adrfam": "IPv4", 00:21:00.864 "traddr": "10.0.0.2", 00:21:00.865 "trsvcid": "4420" 00:21:00.865 }, 00:21:00.865 "peer_address": { 00:21:00.865 "trtype": "TCP", 00:21:00.865 "adrfam": "IPv4", 00:21:00.865 "traddr": "10.0.0.1", 00:21:00.865 "trsvcid": "46258" 00:21:00.865 }, 00:21:00.865 "auth": { 00:21:00.865 "state": "completed", 00:21:00.865 "digest": "sha256", 00:21:00.865 "dhgroup": "ffdhe4096" 00:21:00.865 } 00:21:00.865 } 00:21:00.865 ]' 00:21:00.865 10:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:00.865 10:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:00.865 10:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:01.123 10:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:01.123 10:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:01.123 10:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:01.123 10:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:01.123 10:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:01.123 10:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:M2YyYTNiYjZkOTg4OTBmOGI2NjU3OWFhM2ZmNmQ5OTUzZGE1N2NmNjJjYjc0MDhk96Cs+Q==: --dhchap-ctrl-secret DHHC-1:01:Y2YxM2VmZTk5MTIyYWUxMDliYjRlODcwYmYzYTc3OGb2bxF+: 00:21:01.718 10:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.977 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.977 10:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:01.977 10:29:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.977 10:29:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.977 10:29:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.977 10:29:46 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:01.977 10:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:01.977 10:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:01.977 10:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:21:01.977 10:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:01.977 10:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:01.977 10:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:01.977 10:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:01.977 10:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:01.977 10:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:21:01.977 10:29:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.977 10:29:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.977 10:29:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.977 10:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:01.977 10:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:02.236 00:21:02.236 10:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:02.236 10:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:02.236 10:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.494 10:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.494 10:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.494 10:29:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.494 10:29:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.494 10:29:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.494 10:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:02.494 { 00:21:02.494 "cntlid": 31, 00:21:02.494 "qid": 0, 00:21:02.494 "state": "enabled", 00:21:02.494 "thread": "nvmf_tgt_poll_group_000", 00:21:02.494 "listen_address": { 00:21:02.494 "trtype": "TCP", 00:21:02.494 "adrfam": "IPv4", 00:21:02.494 "traddr": "10.0.0.2", 00:21:02.494 "trsvcid": "4420" 00:21:02.494 }, 
00:21:02.494 "peer_address": { 00:21:02.495 "trtype": "TCP", 00:21:02.495 "adrfam": "IPv4", 00:21:02.495 "traddr": "10.0.0.1", 00:21:02.495 "trsvcid": "50286" 00:21:02.495 }, 00:21:02.495 "auth": { 00:21:02.495 "state": "completed", 00:21:02.495 "digest": "sha256", 00:21:02.495 "dhgroup": "ffdhe4096" 00:21:02.495 } 00:21:02.495 } 00:21:02.495 ]' 00:21:02.495 10:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:02.495 10:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:02.495 10:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:02.495 10:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:02.495 10:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:02.753 10:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:02.753 10:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.753 10:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.753 10:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZmVmOTA3YWVmYzBhYTEyYjUzY2E1YTY4N2FkYzE2Y2ZmYzdmNmY2ODk0ZTQxMmVlNzI0NzdiYmQ4YmI0NWNkM2xcN3c=: 00:21:03.321 10:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.321 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.321 10:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:03.321 10:29:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.321 10:29:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.321 10:29:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.321 10:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:03.321 10:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:03.321 10:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:03.321 10:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:03.581 10:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:21:03.581 10:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:03.581 10:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:03.581 10:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:03.581 10:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:03.581 10:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:21:03.581 10:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:03.581 10:29:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.581 10:29:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.581 10:29:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.581 10:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:03.581 10:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:03.839 00:21:03.839 10:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:03.839 10:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:03.839 10:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.098 10:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.098 10:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.098 10:29:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.098 10:29:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.098 10:29:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.098 10:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:04.098 { 00:21:04.098 "cntlid": 33, 00:21:04.098 "qid": 0, 00:21:04.098 "state": "enabled", 00:21:04.098 "thread": "nvmf_tgt_poll_group_000", 00:21:04.098 "listen_address": { 00:21:04.098 "trtype": "TCP", 00:21:04.098 "adrfam": "IPv4", 00:21:04.098 "traddr": "10.0.0.2", 00:21:04.098 "trsvcid": "4420" 00:21:04.098 }, 00:21:04.098 "peer_address": { 00:21:04.098 "trtype": "TCP", 00:21:04.098 "adrfam": "IPv4", 00:21:04.098 "traddr": "10.0.0.1", 00:21:04.098 "trsvcid": "50310" 00:21:04.098 }, 00:21:04.098 "auth": { 00:21:04.098 "state": "completed", 00:21:04.098 "digest": "sha256", 00:21:04.098 "dhgroup": "ffdhe6144" 00:21:04.098 } 00:21:04.098 } 00:21:04.098 ]' 00:21:04.098 10:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:04.098 10:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:04.098 10:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:04.098 10:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:04.098 10:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:04.357 10:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:04.357 10:29:49 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:04.357 10:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:04.357 10:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MjE4YWMxNzZmOWVhNDdmMGIyNzA4NmIzZjY1MGIzZDVhMWUwZjU2ODVkMzc1NTNhiq/5aw==: --dhchap-ctrl-secret DHHC-1:03:ODgxOWM2M2JkYjJiOTA5N2ViZmNlZGJhOTllNzU5ZTk4ZGRiNzQ3NGI1M2JlYzY4MGIwNDM2NzA4NjU4ZDYwOLHYE6U=: 00:21:04.925 10:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:04.925 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:04.925 10:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:04.925 10:29:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.925 10:29:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.925 10:29:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.925 10:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:04.925 10:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:04.925 10:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:05.184 10:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:21:05.184 10:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:05.184 10:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:05.184 10:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:05.184 10:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:05.184 10:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:05.184 10:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:05.184 10:29:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.184 10:29:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.184 10:29:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.184 10:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:05.184 10:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:05.443 00:21:05.443 10:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:05.443 10:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:05.443 10:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:05.702 10:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.702 10:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:05.702 10:29:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.702 10:29:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.702 10:29:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.702 10:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:05.702 { 00:21:05.702 "cntlid": 35, 00:21:05.702 "qid": 0, 00:21:05.702 "state": "enabled", 00:21:05.702 "thread": "nvmf_tgt_poll_group_000", 00:21:05.702 "listen_address": { 00:21:05.702 "trtype": "TCP", 00:21:05.702 "adrfam": "IPv4", 00:21:05.702 "traddr": "10.0.0.2", 00:21:05.702 "trsvcid": "4420" 00:21:05.702 }, 00:21:05.702 "peer_address": { 00:21:05.702 "trtype": "TCP", 00:21:05.702 "adrfam": "IPv4", 00:21:05.702 "traddr": "10.0.0.1", 00:21:05.702 "trsvcid": "50324" 00:21:05.702 }, 00:21:05.702 "auth": { 00:21:05.702 "state": "completed", 00:21:05.702 "digest": "sha256", 00:21:05.702 "dhgroup": "ffdhe6144" 00:21:05.702 } 00:21:05.702 } 00:21:05.702 ]' 00:21:05.702 10:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:05.702 10:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:05.702 10:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:05.702 10:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:05.962 10:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:05.962 10:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:05.962 10:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.962 10:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.962 10:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YjRiNjRjN2I5MDQwYjhkNTFkYTQ2ZGVkZWU4MzJjMDUr2Uou: --dhchap-ctrl-secret DHHC-1:02:MzA4NGQyMDY4OTY4NTM0MmRkNTBhYWExNTljZDhiNjczMDc2NjEwOWM4YzRlNGYxv2n4bA==: 00:21:06.541 10:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.541 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.541 10:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 
-- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:06.541 10:29:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.541 10:29:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.541 10:29:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.541 10:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:06.541 10:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:06.541 10:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:06.802 10:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:21:06.802 10:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:06.802 10:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:06.802 10:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:06.802 10:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:06.802 10:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.802 10:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:06.802 10:29:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.802 10:29:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.802 10:29:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.802 10:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:06.802 10:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:07.060 00:21:07.060 10:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:07.060 10:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:07.060 10:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.318 10:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.318 10:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:07.318 10:29:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.318 10:29:52 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:07.318 10:29:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.318 10:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:07.318 { 00:21:07.318 "cntlid": 37, 00:21:07.318 "qid": 0, 00:21:07.318 "state": "enabled", 00:21:07.318 "thread": "nvmf_tgt_poll_group_000", 00:21:07.318 "listen_address": { 00:21:07.318 "trtype": "TCP", 00:21:07.318 "adrfam": "IPv4", 00:21:07.318 "traddr": "10.0.0.2", 00:21:07.318 "trsvcid": "4420" 00:21:07.318 }, 00:21:07.318 "peer_address": { 00:21:07.318 "trtype": "TCP", 00:21:07.318 "adrfam": "IPv4", 00:21:07.318 "traddr": "10.0.0.1", 00:21:07.318 "trsvcid": "50348" 00:21:07.318 }, 00:21:07.318 "auth": { 00:21:07.318 "state": "completed", 00:21:07.318 "digest": "sha256", 00:21:07.318 "dhgroup": "ffdhe6144" 00:21:07.318 } 00:21:07.318 } 00:21:07.318 ]' 00:21:07.318 10:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:07.318 10:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:07.318 10:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:07.318 10:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:07.318 10:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:07.607 10:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.607 10:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.607 10:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.607 10:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:M2YyYTNiYjZkOTg4OTBmOGI2NjU3OWFhM2ZmNmQ5OTUzZGE1N2NmNjJjYjc0MDhk96Cs+Q==: --dhchap-ctrl-secret DHHC-1:01:Y2YxM2VmZTk5MTIyYWUxMDliYjRlODcwYmYzYTc3OGb2bxF+: 00:21:08.175 10:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.175 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.175 10:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:08.175 10:29:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.175 10:29:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.175 10:29:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.175 10:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:08.175 10:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:08.175 10:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:08.435 10:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 
ffdhe6144 3 00:21:08.435 10:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:08.435 10:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:08.435 10:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:08.435 10:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:08.435 10:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:08.435 10:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:21:08.435 10:29:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.435 10:29:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.435 10:29:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.435 10:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:08.435 10:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:08.693 00:21:08.693 10:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:08.693 10:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:08.693 10:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:08.952 10:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.952 10:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:08.952 10:29:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.952 10:29:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.952 10:29:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.952 10:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:08.952 { 00:21:08.952 "cntlid": 39, 00:21:08.952 "qid": 0, 00:21:08.952 "state": "enabled", 00:21:08.952 "thread": "nvmf_tgt_poll_group_000", 00:21:08.952 "listen_address": { 00:21:08.952 "trtype": "TCP", 00:21:08.952 "adrfam": "IPv4", 00:21:08.952 "traddr": "10.0.0.2", 00:21:08.952 "trsvcid": "4420" 00:21:08.952 }, 00:21:08.952 "peer_address": { 00:21:08.952 "trtype": "TCP", 00:21:08.952 "adrfam": "IPv4", 00:21:08.952 "traddr": "10.0.0.1", 00:21:08.952 "trsvcid": "50364" 00:21:08.952 }, 00:21:08.952 "auth": { 00:21:08.952 "state": "completed", 00:21:08.952 "digest": "sha256", 00:21:08.952 "dhgroup": "ffdhe6144" 00:21:08.952 } 00:21:08.952 } 00:21:08.952 ]' 00:21:08.952 10:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:08.952 10:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:08.952 10:29:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:08.952 10:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:08.952 10:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:08.952 10:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:08.952 10:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:08.952 10:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.211 10:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZmVmOTA3YWVmYzBhYTEyYjUzY2E1YTY4N2FkYzE2Y2ZmYzdmNmY2ODk0ZTQxMmVlNzI0NzdiYmQ4YmI0NWNkM2xcN3c=: 00:21:09.779 10:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:09.779 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:09.779 10:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:09.779 10:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.779 10:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.779 10:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.779 10:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:09.779 10:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:09.779 10:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:09.779 10:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:10.038 10:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:21:10.038 10:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:10.038 10:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:10.038 10:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:10.038 10:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:10.038 10:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:10.038 10:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:10.038 10:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.038 10:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.038 10:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.038 10:29:54 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:10.038 10:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:10.605 00:21:10.605 10:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:10.605 10:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:10.605 10:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:10.605 10:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.605 10:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:10.605 10:29:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.605 10:29:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.605 10:29:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.605 10:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:10.605 { 00:21:10.606 "cntlid": 41, 00:21:10.606 "qid": 0, 00:21:10.606 "state": "enabled", 00:21:10.606 "thread": "nvmf_tgt_poll_group_000", 00:21:10.606 "listen_address": { 00:21:10.606 "trtype": "TCP", 00:21:10.606 "adrfam": "IPv4", 00:21:10.606 "traddr": "10.0.0.2", 00:21:10.606 "trsvcid": "4420" 00:21:10.606 }, 00:21:10.606 "peer_address": { 00:21:10.606 "trtype": "TCP", 00:21:10.606 "adrfam": "IPv4", 00:21:10.606 "traddr": "10.0.0.1", 00:21:10.606 "trsvcid": "50398" 00:21:10.606 }, 00:21:10.606 "auth": { 00:21:10.606 "state": "completed", 00:21:10.606 "digest": "sha256", 00:21:10.606 "dhgroup": "ffdhe8192" 00:21:10.606 } 00:21:10.606 } 00:21:10.606 ]' 00:21:10.606 10:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:10.864 10:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:10.864 10:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:10.864 10:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:10.864 10:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:10.864 10:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:10.864 10:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:10.864 10:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:11.121 10:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret 
DHHC-1:00:MjE4YWMxNzZmOWVhNDdmMGIyNzA4NmIzZjY1MGIzZDVhMWUwZjU2ODVkMzc1NTNhiq/5aw==: --dhchap-ctrl-secret DHHC-1:03:ODgxOWM2M2JkYjJiOTA5N2ViZmNlZGJhOTllNzU5ZTk4ZGRiNzQ3NGI1M2JlYzY4MGIwNDM2NzA4NjU4ZDYwOLHYE6U=: 00:21:11.688 10:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.688 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.688 10:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:11.688 10:29:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.688 10:29:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.688 10:29:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.688 10:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:11.688 10:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:11.688 10:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:11.688 10:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:21:11.688 10:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:11.688 10:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:11.688 10:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:11.688 10:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:11.688 10:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.688 10:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.688 10:29:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.688 10:29:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.688 10:29:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.688 10:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.688 10:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:12.254 00:21:12.254 10:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:12.254 10:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:12.254 10:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:12.513 10:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.513 10:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:12.513 10:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.513 10:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.513 10:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.513 10:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:12.513 { 00:21:12.513 "cntlid": 43, 00:21:12.513 "qid": 0, 00:21:12.513 "state": "enabled", 00:21:12.513 "thread": "nvmf_tgt_poll_group_000", 00:21:12.513 "listen_address": { 00:21:12.513 "trtype": "TCP", 00:21:12.513 "adrfam": "IPv4", 00:21:12.513 "traddr": "10.0.0.2", 00:21:12.513 "trsvcid": "4420" 00:21:12.513 }, 00:21:12.513 "peer_address": { 00:21:12.513 "trtype": "TCP", 00:21:12.513 "adrfam": "IPv4", 00:21:12.513 "traddr": "10.0.0.1", 00:21:12.513 "trsvcid": "47454" 00:21:12.513 }, 00:21:12.513 "auth": { 00:21:12.513 "state": "completed", 00:21:12.513 "digest": "sha256", 00:21:12.513 "dhgroup": "ffdhe8192" 00:21:12.513 } 00:21:12.513 } 00:21:12.513 ]' 00:21:12.513 10:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:12.513 10:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:12.513 10:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:12.513 10:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:12.513 10:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:12.513 10:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:12.513 10:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:12.513 10:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:12.772 10:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YjRiNjRjN2I5MDQwYjhkNTFkYTQ2ZGVkZWU4MzJjMDUr2Uou: --dhchap-ctrl-secret DHHC-1:02:MzA4NGQyMDY4OTY4NTM0MmRkNTBhYWExNTljZDhiNjczMDc2NjEwOWM4YzRlNGYxv2n4bA==: 00:21:13.340 10:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:13.340 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:13.340 10:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:13.340 10:29:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.340 10:29:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.340 10:29:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.340 10:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:21:13.340 10:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:13.340 10:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:13.599 10:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:21:13.599 10:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:13.599 10:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:13.599 10:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:13.599 10:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:13.599 10:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:13.599 10:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:13.599 10:29:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.599 10:29:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.599 10:29:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.599 10:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:13.599 10:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:13.857 00:21:13.857 10:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:13.857 10:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:13.857 10:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:14.116 10:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.116 10:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:14.116 10:29:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.116 10:29:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.116 10:29:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.116 10:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:14.116 { 00:21:14.116 "cntlid": 45, 00:21:14.116 "qid": 0, 00:21:14.116 "state": "enabled", 00:21:14.116 "thread": "nvmf_tgt_poll_group_000", 00:21:14.116 "listen_address": { 00:21:14.116 "trtype": "TCP", 00:21:14.116 "adrfam": "IPv4", 00:21:14.116 "traddr": "10.0.0.2", 00:21:14.116 "trsvcid": "4420" 
00:21:14.116 }, 00:21:14.116 "peer_address": { 00:21:14.116 "trtype": "TCP", 00:21:14.116 "adrfam": "IPv4", 00:21:14.116 "traddr": "10.0.0.1", 00:21:14.116 "trsvcid": "47478" 00:21:14.116 }, 00:21:14.116 "auth": { 00:21:14.116 "state": "completed", 00:21:14.116 "digest": "sha256", 00:21:14.116 "dhgroup": "ffdhe8192" 00:21:14.116 } 00:21:14.116 } 00:21:14.116 ]' 00:21:14.116 10:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:14.116 10:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:14.116 10:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:14.376 10:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:14.376 10:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:14.376 10:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:14.376 10:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:14.376 10:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:14.376 10:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:M2YyYTNiYjZkOTg4OTBmOGI2NjU3OWFhM2ZmNmQ5OTUzZGE1N2NmNjJjYjc0MDhk96Cs+Q==: --dhchap-ctrl-secret DHHC-1:01:Y2YxM2VmZTk5MTIyYWUxMDliYjRlODcwYmYzYTc3OGb2bxF+: 00:21:14.944 10:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:14.944 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:14.944 10:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:14.944 10:29:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.944 10:29:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.203 10:29:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.203 10:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:15.203 10:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:15.203 10:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:15.203 10:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:21:15.203 10:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:15.203 10:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:15.203 10:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:15.203 10:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:15.203 10:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:15.204 10:30:00 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:21:15.204 10:30:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.204 10:30:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.204 10:30:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.204 10:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:15.204 10:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:15.772 00:21:15.772 10:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:15.772 10:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:15.772 10:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.031 10:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.031 10:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:16.031 10:30:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.031 10:30:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.031 10:30:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.031 10:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:16.031 { 00:21:16.031 "cntlid": 47, 00:21:16.031 "qid": 0, 00:21:16.031 "state": "enabled", 00:21:16.031 "thread": "nvmf_tgt_poll_group_000", 00:21:16.031 "listen_address": { 00:21:16.031 "trtype": "TCP", 00:21:16.031 "adrfam": "IPv4", 00:21:16.031 "traddr": "10.0.0.2", 00:21:16.031 "trsvcid": "4420" 00:21:16.031 }, 00:21:16.031 "peer_address": { 00:21:16.031 "trtype": "TCP", 00:21:16.031 "adrfam": "IPv4", 00:21:16.031 "traddr": "10.0.0.1", 00:21:16.031 "trsvcid": "47486" 00:21:16.031 }, 00:21:16.031 "auth": { 00:21:16.031 "state": "completed", 00:21:16.031 "digest": "sha256", 00:21:16.031 "dhgroup": "ffdhe8192" 00:21:16.031 } 00:21:16.031 } 00:21:16.031 ]' 00:21:16.031 10:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:16.031 10:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:16.031 10:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:16.031 10:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:16.031 10:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:16.031 10:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:16.031 10:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.031 
10:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:16.290 10:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZmVmOTA3YWVmYzBhYTEyYjUzY2E1YTY4N2FkYzE2Y2ZmYzdmNmY2ODk0ZTQxMmVlNzI0NzdiYmQ4YmI0NWNkM2xcN3c=: 00:21:16.859 10:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:16.859 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:16.859 10:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:16.859 10:30:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.859 10:30:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.859 10:30:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.859 10:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:21:16.859 10:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:16.859 10:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:16.859 10:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:16.859 10:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:17.118 10:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:21:17.118 10:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:17.118 10:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:17.118 10:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:17.118 10:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:17.118 10:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.118 10:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:17.119 10:30:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.119 10:30:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.119 10:30:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.119 10:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:17.119 10:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:17.376 00:21:17.376 10:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:17.376 10:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:17.376 10:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:17.376 10:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.376 10:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:17.376 10:30:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.376 10:30:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.376 10:30:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.376 10:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:17.376 { 00:21:17.376 "cntlid": 49, 00:21:17.376 "qid": 0, 00:21:17.376 "state": "enabled", 00:21:17.376 "thread": "nvmf_tgt_poll_group_000", 00:21:17.376 "listen_address": { 00:21:17.376 "trtype": "TCP", 00:21:17.376 "adrfam": "IPv4", 00:21:17.376 "traddr": "10.0.0.2", 00:21:17.376 "trsvcid": "4420" 00:21:17.376 }, 00:21:17.376 "peer_address": { 00:21:17.376 "trtype": "TCP", 00:21:17.376 "adrfam": "IPv4", 00:21:17.376 "traddr": "10.0.0.1", 00:21:17.376 "trsvcid": "47506" 00:21:17.376 }, 00:21:17.376 "auth": { 00:21:17.376 "state": "completed", 00:21:17.376 "digest": "sha384", 00:21:17.376 "dhgroup": "null" 00:21:17.376 } 00:21:17.376 } 00:21:17.376 ]' 00:21:17.376 10:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:17.634 10:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:17.634 10:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:17.634 10:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:17.634 10:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:17.634 10:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:17.634 10:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:17.634 10:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:17.893 10:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MjE4YWMxNzZmOWVhNDdmMGIyNzA4NmIzZjY1MGIzZDVhMWUwZjU2ODVkMzc1NTNhiq/5aw==: --dhchap-ctrl-secret DHHC-1:03:ODgxOWM2M2JkYjJiOTA5N2ViZmNlZGJhOTllNzU5ZTk4ZGRiNzQ3NGI1M2JlYzY4MGIwNDM2NzA4NjU4ZDYwOLHYE6U=: 00:21:18.459 10:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:18.459 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:18.459 10:30:03 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:18.459 10:30:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.459 10:30:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.459 10:30:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.459 10:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:18.459 10:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:18.459 10:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:18.459 10:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:21:18.459 10:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:18.459 10:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:18.459 10:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:18.459 10:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:18.459 10:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:18.459 10:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:18.459 10:30:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.459 10:30:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.459 10:30:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.459 10:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:18.459 10:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:18.716 00:21:18.716 10:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:18.716 10:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:18.716 10:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:18.975 10:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.975 10:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:18.975 10:30:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.975 10:30:03 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:18.975 10:30:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.975 10:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:18.975 { 00:21:18.975 "cntlid": 51, 00:21:18.975 "qid": 0, 00:21:18.975 "state": "enabled", 00:21:18.975 "thread": "nvmf_tgt_poll_group_000", 00:21:18.975 "listen_address": { 00:21:18.975 "trtype": "TCP", 00:21:18.975 "adrfam": "IPv4", 00:21:18.975 "traddr": "10.0.0.2", 00:21:18.975 "trsvcid": "4420" 00:21:18.975 }, 00:21:18.975 "peer_address": { 00:21:18.975 "trtype": "TCP", 00:21:18.975 "adrfam": "IPv4", 00:21:18.975 "traddr": "10.0.0.1", 00:21:18.975 "trsvcid": "47544" 00:21:18.975 }, 00:21:18.975 "auth": { 00:21:18.975 "state": "completed", 00:21:18.975 "digest": "sha384", 00:21:18.975 "dhgroup": "null" 00:21:18.975 } 00:21:18.975 } 00:21:18.975 ]' 00:21:18.975 10:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:18.975 10:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:18.975 10:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:18.975 10:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:18.975 10:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:19.234 10:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:19.234 10:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:19.234 10:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:19.234 10:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YjRiNjRjN2I5MDQwYjhkNTFkYTQ2ZGVkZWU4MzJjMDUr2Uou: --dhchap-ctrl-secret DHHC-1:02:MzA4NGQyMDY4OTY4NTM0MmRkNTBhYWExNTljZDhiNjczMDc2NjEwOWM4YzRlNGYxv2n4bA==: 00:21:19.805 10:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:19.805 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:19.805 10:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:19.805 10:30:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.805 10:30:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.805 10:30:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.805 10:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:19.805 10:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:19.805 10:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:20.064 10:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:21:20.064 10:30:04 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:20.064 10:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:20.064 10:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:20.064 10:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:20.064 10:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:20.064 10:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:20.064 10:30:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.064 10:30:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.064 10:30:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.064 10:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:20.064 10:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:20.323 00:21:20.323 10:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:20.323 10:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:20.323 10:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:20.582 10:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:20.582 10:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:20.582 10:30:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.582 10:30:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.582 10:30:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.582 10:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:20.582 { 00:21:20.582 "cntlid": 53, 00:21:20.582 "qid": 0, 00:21:20.582 "state": "enabled", 00:21:20.582 "thread": "nvmf_tgt_poll_group_000", 00:21:20.582 "listen_address": { 00:21:20.582 "trtype": "TCP", 00:21:20.582 "adrfam": "IPv4", 00:21:20.582 "traddr": "10.0.0.2", 00:21:20.582 "trsvcid": "4420" 00:21:20.582 }, 00:21:20.582 "peer_address": { 00:21:20.582 "trtype": "TCP", 00:21:20.582 "adrfam": "IPv4", 00:21:20.582 "traddr": "10.0.0.1", 00:21:20.582 "trsvcid": "47572" 00:21:20.582 }, 00:21:20.582 "auth": { 00:21:20.582 "state": "completed", 00:21:20.582 "digest": "sha384", 00:21:20.582 "dhgroup": "null" 00:21:20.582 } 00:21:20.582 } 00:21:20.582 ]' 00:21:20.582 10:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:20.582 10:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:21:20.582 10:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:20.582 10:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:20.582 10:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:20.582 10:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:20.582 10:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:20.582 10:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:20.841 10:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:M2YyYTNiYjZkOTg4OTBmOGI2NjU3OWFhM2ZmNmQ5OTUzZGE1N2NmNjJjYjc0MDhk96Cs+Q==: --dhchap-ctrl-secret DHHC-1:01:Y2YxM2VmZTk5MTIyYWUxMDliYjRlODcwYmYzYTc3OGb2bxF+: 00:21:21.409 10:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:21.409 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:21.409 10:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:21.409 10:30:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.409 10:30:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.409 10:30:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.409 10:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:21.409 10:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:21.409 10:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:21.668 10:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:21:21.668 10:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:21.668 10:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:21.668 10:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:21.668 10:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:21.668 10:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:21.668 10:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:21:21.668 10:30:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.668 10:30:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.668 10:30:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.668 10:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:21.668 10:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:21.938 00:21:21.938 10:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:21.938 10:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:21.938 10:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.938 10:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.938 10:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:21.938 10:30:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.938 10:30:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.938 10:30:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.220 10:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:22.220 { 00:21:22.220 "cntlid": 55, 00:21:22.220 "qid": 0, 00:21:22.220 "state": "enabled", 00:21:22.220 "thread": "nvmf_tgt_poll_group_000", 00:21:22.220 "listen_address": { 00:21:22.220 "trtype": "TCP", 00:21:22.220 "adrfam": "IPv4", 00:21:22.220 "traddr": "10.0.0.2", 00:21:22.220 "trsvcid": "4420" 00:21:22.220 }, 00:21:22.220 "peer_address": { 00:21:22.220 "trtype": "TCP", 00:21:22.220 "adrfam": "IPv4", 00:21:22.220 "traddr": "10.0.0.1", 00:21:22.220 "trsvcid": "47612" 00:21:22.220 }, 00:21:22.220 "auth": { 00:21:22.220 "state": "completed", 00:21:22.220 "digest": "sha384", 00:21:22.220 "dhgroup": "null" 00:21:22.220 } 00:21:22.220 } 00:21:22.220 ]' 00:21:22.220 10:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:22.220 10:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:22.220 10:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:22.220 10:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:22.220 10:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:22.220 10:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:22.220 10:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:22.220 10:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:22.220 10:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZmVmOTA3YWVmYzBhYTEyYjUzY2E1YTY4N2FkYzE2Y2ZmYzdmNmY2ODk0ZTQxMmVlNzI0NzdiYmQ4YmI0NWNkM2xcN3c=: 00:21:22.787 10:30:07 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:22.787 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:22.787 10:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:22.787 10:30:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.787 10:30:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.787 10:30:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.787 10:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:22.787 10:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:22.787 10:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:22.787 10:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:23.047 10:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:21:23.047 10:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:23.047 10:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:23.047 10:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:23.047 10:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:23.047 10:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:23.047 10:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:23.047 10:30:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.047 10:30:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.047 10:30:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.047 10:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:23.047 10:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:23.305 00:21:23.305 10:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:23.305 10:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:23.305 10:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:23.564 10:30:08 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.564 10:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:23.564 10:30:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.564 10:30:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.564 10:30:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.564 10:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:23.564 { 00:21:23.564 "cntlid": 57, 00:21:23.564 "qid": 0, 00:21:23.564 "state": "enabled", 00:21:23.564 "thread": "nvmf_tgt_poll_group_000", 00:21:23.564 "listen_address": { 00:21:23.564 "trtype": "TCP", 00:21:23.564 "adrfam": "IPv4", 00:21:23.564 "traddr": "10.0.0.2", 00:21:23.564 "trsvcid": "4420" 00:21:23.564 }, 00:21:23.564 "peer_address": { 00:21:23.564 "trtype": "TCP", 00:21:23.564 "adrfam": "IPv4", 00:21:23.564 "traddr": "10.0.0.1", 00:21:23.564 "trsvcid": "43568" 00:21:23.564 }, 00:21:23.564 "auth": { 00:21:23.564 "state": "completed", 00:21:23.564 "digest": "sha384", 00:21:23.564 "dhgroup": "ffdhe2048" 00:21:23.564 } 00:21:23.564 } 00:21:23.564 ]' 00:21:23.564 10:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:23.564 10:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:23.564 10:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:23.564 10:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:23.564 10:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:23.564 10:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:23.564 10:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:23.564 10:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:23.823 10:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MjE4YWMxNzZmOWVhNDdmMGIyNzA4NmIzZjY1MGIzZDVhMWUwZjU2ODVkMzc1NTNhiq/5aw==: --dhchap-ctrl-secret DHHC-1:03:ODgxOWM2M2JkYjJiOTA5N2ViZmNlZGJhOTllNzU5ZTk4ZGRiNzQ3NGI1M2JlYzY4MGIwNDM2NzA4NjU4ZDYwOLHYE6U=: 00:21:24.390 10:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:24.390 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:24.390 10:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:24.390 10:30:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.390 10:30:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.390 10:30:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.390 10:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:24.390 10:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:24.390 10:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:24.649 10:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:21:24.649 10:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:24.649 10:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:24.649 10:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:24.649 10:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:24.649 10:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:24.649 10:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:24.649 10:30:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.649 10:30:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.649 10:30:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.649 10:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:24.649 10:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:24.908 00:21:24.908 10:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:24.908 10:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:24.908 10:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:25.167 10:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.167 10:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:25.167 10:30:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.167 10:30:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.167 10:30:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.167 10:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:25.167 { 00:21:25.168 "cntlid": 59, 00:21:25.168 "qid": 0, 00:21:25.168 "state": "enabled", 00:21:25.168 "thread": "nvmf_tgt_poll_group_000", 00:21:25.168 "listen_address": { 00:21:25.168 "trtype": "TCP", 00:21:25.168 "adrfam": "IPv4", 00:21:25.168 "traddr": "10.0.0.2", 00:21:25.168 "trsvcid": "4420" 00:21:25.168 }, 00:21:25.168 "peer_address": { 00:21:25.168 "trtype": "TCP", 00:21:25.168 "adrfam": "IPv4", 00:21:25.168 
"traddr": "10.0.0.1", 00:21:25.168 "trsvcid": "43596" 00:21:25.168 }, 00:21:25.168 "auth": { 00:21:25.168 "state": "completed", 00:21:25.168 "digest": "sha384", 00:21:25.168 "dhgroup": "ffdhe2048" 00:21:25.168 } 00:21:25.168 } 00:21:25.168 ]' 00:21:25.168 10:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:25.168 10:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:25.168 10:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:25.168 10:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:25.168 10:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:25.168 10:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:25.168 10:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:25.168 10:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.427 10:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YjRiNjRjN2I5MDQwYjhkNTFkYTQ2ZGVkZWU4MzJjMDUr2Uou: --dhchap-ctrl-secret DHHC-1:02:MzA4NGQyMDY4OTY4NTM0MmRkNTBhYWExNTljZDhiNjczMDc2NjEwOWM4YzRlNGYxv2n4bA==: 00:21:25.994 10:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:25.994 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:25.994 10:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:25.994 10:30:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.994 10:30:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.994 10:30:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.994 10:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:25.994 10:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:25.994 10:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:26.253 10:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:21:26.253 10:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:26.253 10:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:26.253 10:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:26.253 10:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:26.253 10:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:26.253 10:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:26.253 10:30:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.253 10:30:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.253 10:30:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.253 10:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:26.253 10:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:26.253 00:21:26.514 10:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:26.514 10:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:26.514 10:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.514 10:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.514 10:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:26.515 10:30:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.515 10:30:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.515 10:30:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.515 10:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:26.515 { 00:21:26.515 "cntlid": 61, 00:21:26.515 "qid": 0, 00:21:26.515 "state": "enabled", 00:21:26.515 "thread": "nvmf_tgt_poll_group_000", 00:21:26.515 "listen_address": { 00:21:26.515 "trtype": "TCP", 00:21:26.515 "adrfam": "IPv4", 00:21:26.515 "traddr": "10.0.0.2", 00:21:26.515 "trsvcid": "4420" 00:21:26.515 }, 00:21:26.515 "peer_address": { 00:21:26.515 "trtype": "TCP", 00:21:26.515 "adrfam": "IPv4", 00:21:26.515 "traddr": "10.0.0.1", 00:21:26.515 "trsvcid": "43626" 00:21:26.515 }, 00:21:26.515 "auth": { 00:21:26.515 "state": "completed", 00:21:26.515 "digest": "sha384", 00:21:26.515 "dhgroup": "ffdhe2048" 00:21:26.515 } 00:21:26.515 } 00:21:26.515 ]' 00:21:26.515 10:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:26.515 10:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:26.515 10:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:26.775 10:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:26.775 10:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:26.775 10:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:26.775 10:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:26.775 10:30:11 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:26.775 10:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:M2YyYTNiYjZkOTg4OTBmOGI2NjU3OWFhM2ZmNmQ5OTUzZGE1N2NmNjJjYjc0MDhk96Cs+Q==: --dhchap-ctrl-secret DHHC-1:01:Y2YxM2VmZTk5MTIyYWUxMDliYjRlODcwYmYzYTc3OGb2bxF+: 00:21:27.344 10:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:27.344 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:27.344 10:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:27.344 10:30:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.344 10:30:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.344 10:30:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.344 10:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:27.344 10:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:27.344 10:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:27.603 10:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:21:27.603 10:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:27.603 10:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:27.603 10:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:27.603 10:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:27.603 10:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:27.603 10:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:21:27.603 10:30:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.603 10:30:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.603 10:30:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.603 10:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:27.603 10:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:27.862 00:21:27.862 10:30:12 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:27.862 10:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:27.862 10:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:28.121 10:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:28.122 10:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:28.122 10:30:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.122 10:30:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.122 10:30:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.122 10:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:28.122 { 00:21:28.122 "cntlid": 63, 00:21:28.122 "qid": 0, 00:21:28.122 "state": "enabled", 00:21:28.122 "thread": "nvmf_tgt_poll_group_000", 00:21:28.122 "listen_address": { 00:21:28.122 "trtype": "TCP", 00:21:28.122 "adrfam": "IPv4", 00:21:28.122 "traddr": "10.0.0.2", 00:21:28.122 "trsvcid": "4420" 00:21:28.122 }, 00:21:28.122 "peer_address": { 00:21:28.122 "trtype": "TCP", 00:21:28.122 "adrfam": "IPv4", 00:21:28.122 "traddr": "10.0.0.1", 00:21:28.122 "trsvcid": "43652" 00:21:28.122 }, 00:21:28.122 "auth": { 00:21:28.122 "state": "completed", 00:21:28.122 "digest": "sha384", 00:21:28.122 "dhgroup": "ffdhe2048" 00:21:28.122 } 00:21:28.122 } 00:21:28.122 ]' 00:21:28.122 10:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:28.122 10:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:28.122 10:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:28.122 10:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:28.122 10:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:28.122 10:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:28.122 10:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:28.122 10:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:28.385 10:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZmVmOTA3YWVmYzBhYTEyYjUzY2E1YTY4N2FkYzE2Y2ZmYzdmNmY2ODk0ZTQxMmVlNzI0NzdiYmQ4YmI0NWNkM2xcN3c=: 00:21:28.952 10:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.952 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.952 10:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:28.952 10:30:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.952 10:30:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:21:28.952 10:30:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.952 10:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:28.952 10:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:28.952 10:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:28.952 10:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:29.211 10:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:21:29.211 10:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:29.211 10:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:29.211 10:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:29.211 10:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:29.211 10:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:29.211 10:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:29.211 10:30:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.211 10:30:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.211 10:30:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.211 10:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:29.211 10:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:29.469 00:21:29.469 10:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:29.469 10:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:29.469 10:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.469 10:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.469 10:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.469 10:30:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.469 10:30:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.728 10:30:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.728 10:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:29.728 { 
00:21:29.728 "cntlid": 65, 00:21:29.728 "qid": 0, 00:21:29.728 "state": "enabled", 00:21:29.728 "thread": "nvmf_tgt_poll_group_000", 00:21:29.728 "listen_address": { 00:21:29.728 "trtype": "TCP", 00:21:29.728 "adrfam": "IPv4", 00:21:29.728 "traddr": "10.0.0.2", 00:21:29.728 "trsvcid": "4420" 00:21:29.728 }, 00:21:29.728 "peer_address": { 00:21:29.728 "trtype": "TCP", 00:21:29.728 "adrfam": "IPv4", 00:21:29.728 "traddr": "10.0.0.1", 00:21:29.728 "trsvcid": "43676" 00:21:29.728 }, 00:21:29.728 "auth": { 00:21:29.728 "state": "completed", 00:21:29.728 "digest": "sha384", 00:21:29.728 "dhgroup": "ffdhe3072" 00:21:29.728 } 00:21:29.728 } 00:21:29.728 ]' 00:21:29.728 10:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:29.728 10:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:29.728 10:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:29.728 10:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:29.728 10:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:29.728 10:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.728 10:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.728 10:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:29.986 10:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MjE4YWMxNzZmOWVhNDdmMGIyNzA4NmIzZjY1MGIzZDVhMWUwZjU2ODVkMzc1NTNhiq/5aw==: --dhchap-ctrl-secret DHHC-1:03:ODgxOWM2M2JkYjJiOTA5N2ViZmNlZGJhOTllNzU5ZTk4ZGRiNzQ3NGI1M2JlYzY4MGIwNDM2NzA4NjU4ZDYwOLHYE6U=: 00:21:30.554 10:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.554 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.554 10:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:30.554 10:30:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.554 10:30:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.554 10:30:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.554 10:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:30.554 10:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:30.554 10:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:30.554 10:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:21:30.554 10:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:30.554 10:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:21:30.554 10:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:30.554 10:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:30.554 10:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:30.554 10:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:30.554 10:30:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.554 10:30:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.554 10:30:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.554 10:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:30.554 10:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:30.814 00:21:30.814 10:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:30.814 10:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:30.814 10:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.072 10:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.072 10:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.073 10:30:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.073 10:30:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.073 10:30:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.073 10:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:31.073 { 00:21:31.073 "cntlid": 67, 00:21:31.073 "qid": 0, 00:21:31.073 "state": "enabled", 00:21:31.073 "thread": "nvmf_tgt_poll_group_000", 00:21:31.073 "listen_address": { 00:21:31.073 "trtype": "TCP", 00:21:31.073 "adrfam": "IPv4", 00:21:31.073 "traddr": "10.0.0.2", 00:21:31.073 "trsvcid": "4420" 00:21:31.073 }, 00:21:31.073 "peer_address": { 00:21:31.073 "trtype": "TCP", 00:21:31.073 "adrfam": "IPv4", 00:21:31.073 "traddr": "10.0.0.1", 00:21:31.073 "trsvcid": "43694" 00:21:31.073 }, 00:21:31.073 "auth": { 00:21:31.073 "state": "completed", 00:21:31.073 "digest": "sha384", 00:21:31.073 "dhgroup": "ffdhe3072" 00:21:31.073 } 00:21:31.073 } 00:21:31.073 ]' 00:21:31.073 10:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:31.073 10:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:31.073 10:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:31.332 10:30:16 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:31.332 10:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:31.332 10:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:31.332 10:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.332 10:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:31.332 10:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YjRiNjRjN2I5MDQwYjhkNTFkYTQ2ZGVkZWU4MzJjMDUr2Uou: --dhchap-ctrl-secret DHHC-1:02:MzA4NGQyMDY4OTY4NTM0MmRkNTBhYWExNTljZDhiNjczMDc2NjEwOWM4YzRlNGYxv2n4bA==: 00:21:31.900 10:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:31.900 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:31.900 10:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:31.900 10:30:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.900 10:30:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.900 10:30:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.900 10:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:31.900 10:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:31.900 10:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:32.159 10:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:21:32.159 10:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:32.159 10:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:32.159 10:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:32.159 10:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:32.159 10:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:32.159 10:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:32.159 10:30:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.159 10:30:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.159 10:30:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.159 10:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:32.160 10:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:32.418 00:21:32.418 10:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:32.418 10:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:32.418 10:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:32.676 10:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.676 10:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:32.676 10:30:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.676 10:30:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.676 10:30:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.676 10:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:32.676 { 00:21:32.676 "cntlid": 69, 00:21:32.676 "qid": 0, 00:21:32.676 "state": "enabled", 00:21:32.676 "thread": "nvmf_tgt_poll_group_000", 00:21:32.676 "listen_address": { 00:21:32.676 "trtype": "TCP", 00:21:32.676 "adrfam": "IPv4", 00:21:32.676 "traddr": "10.0.0.2", 00:21:32.676 "trsvcid": "4420" 00:21:32.676 }, 00:21:32.676 "peer_address": { 00:21:32.676 "trtype": "TCP", 00:21:32.676 "adrfam": "IPv4", 00:21:32.676 "traddr": "10.0.0.1", 00:21:32.676 "trsvcid": "57834" 00:21:32.676 }, 00:21:32.676 "auth": { 00:21:32.676 "state": "completed", 00:21:32.676 "digest": "sha384", 00:21:32.676 "dhgroup": "ffdhe3072" 00:21:32.676 } 00:21:32.676 } 00:21:32.676 ]' 00:21:32.676 10:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:32.676 10:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:32.676 10:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:32.676 10:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:32.676 10:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:32.676 10:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:32.676 10:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:32.676 10:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:32.934 10:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:M2YyYTNiYjZkOTg4OTBmOGI2NjU3OWFhM2ZmNmQ5OTUzZGE1N2NmNjJjYjc0MDhk96Cs+Q==: --dhchap-ctrl-secret 
DHHC-1:01:Y2YxM2VmZTk5MTIyYWUxMDliYjRlODcwYmYzYTc3OGb2bxF+: 00:21:33.502 10:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:33.502 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:33.502 10:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:33.502 10:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.502 10:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.502 10:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.502 10:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:33.502 10:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:33.502 10:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:33.761 10:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:21:33.761 10:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:33.761 10:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:33.761 10:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:33.761 10:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:33.761 10:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:33.761 10:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:21:33.761 10:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.761 10:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.761 10:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.761 10:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:33.761 10:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:34.020 00:21:34.020 10:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:34.020 10:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:34.020 10:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:34.278 10:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.278 10:30:19 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:34.278 10:30:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.278 10:30:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.278 10:30:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.278 10:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:34.278 { 00:21:34.278 "cntlid": 71, 00:21:34.278 "qid": 0, 00:21:34.278 "state": "enabled", 00:21:34.278 "thread": "nvmf_tgt_poll_group_000", 00:21:34.278 "listen_address": { 00:21:34.278 "trtype": "TCP", 00:21:34.278 "adrfam": "IPv4", 00:21:34.278 "traddr": "10.0.0.2", 00:21:34.278 "trsvcid": "4420" 00:21:34.278 }, 00:21:34.278 "peer_address": { 00:21:34.278 "trtype": "TCP", 00:21:34.278 "adrfam": "IPv4", 00:21:34.278 "traddr": "10.0.0.1", 00:21:34.278 "trsvcid": "57852" 00:21:34.278 }, 00:21:34.278 "auth": { 00:21:34.278 "state": "completed", 00:21:34.278 "digest": "sha384", 00:21:34.278 "dhgroup": "ffdhe3072" 00:21:34.278 } 00:21:34.278 } 00:21:34.278 ]' 00:21:34.278 10:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:34.278 10:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:34.278 10:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:34.278 10:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:34.278 10:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:34.278 10:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:34.278 10:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:34.278 10:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:34.536 10:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZmVmOTA3YWVmYzBhYTEyYjUzY2E1YTY4N2FkYzE2Y2ZmYzdmNmY2ODk0ZTQxMmVlNzI0NzdiYmQ4YmI0NWNkM2xcN3c=: 00:21:35.104 10:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:35.104 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:35.104 10:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:35.104 10:30:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.104 10:30:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.104 10:30:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.104 10:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:35.104 10:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:35.104 10:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:35.104 10:30:19 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:35.362 10:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:21:35.362 10:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:35.362 10:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:35.362 10:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:35.362 10:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:35.362 10:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:35.362 10:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:35.362 10:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.362 10:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.362 10:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.362 10:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:35.362 10:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:35.621 00:21:35.621 10:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:35.621 10:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:35.621 10:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:35.621 10:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.621 10:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:35.621 10:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.621 10:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.621 10:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.621 10:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:35.621 { 00:21:35.621 "cntlid": 73, 00:21:35.621 "qid": 0, 00:21:35.621 "state": "enabled", 00:21:35.621 "thread": "nvmf_tgt_poll_group_000", 00:21:35.621 "listen_address": { 00:21:35.621 "trtype": "TCP", 00:21:35.621 "adrfam": "IPv4", 00:21:35.621 "traddr": "10.0.0.2", 00:21:35.621 "trsvcid": "4420" 00:21:35.621 }, 00:21:35.621 "peer_address": { 00:21:35.621 "trtype": "TCP", 00:21:35.621 "adrfam": "IPv4", 00:21:35.621 "traddr": "10.0.0.1", 00:21:35.621 "trsvcid": "57868" 00:21:35.621 }, 00:21:35.621 "auth": { 00:21:35.621 
"state": "completed", 00:21:35.621 "digest": "sha384", 00:21:35.621 "dhgroup": "ffdhe4096" 00:21:35.621 } 00:21:35.621 } 00:21:35.621 ]' 00:21:35.621 10:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:35.880 10:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:35.880 10:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:35.880 10:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:35.880 10:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:35.880 10:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:35.880 10:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:35.880 10:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:36.148 10:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MjE4YWMxNzZmOWVhNDdmMGIyNzA4NmIzZjY1MGIzZDVhMWUwZjU2ODVkMzc1NTNhiq/5aw==: --dhchap-ctrl-secret DHHC-1:03:ODgxOWM2M2JkYjJiOTA5N2ViZmNlZGJhOTllNzU5ZTk4ZGRiNzQ3NGI1M2JlYzY4MGIwNDM2NzA4NjU4ZDYwOLHYE6U=: 00:21:36.454 10:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:36.713 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:36.713 10:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:36.713 10:30:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.713 10:30:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.713 10:30:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.713 10:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:36.713 10:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:36.713 10:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:36.713 10:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:21:36.713 10:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:36.713 10:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:36.713 10:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:36.713 10:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:36.713 10:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:36.713 10:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:36.713 10:30:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.713 10:30:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.713 10:30:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.713 10:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:36.713 10:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:36.974 00:21:36.974 10:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:36.974 10:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:36.974 10:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:37.233 10:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.233 10:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:37.233 10:30:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.233 10:30:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.233 10:30:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.233 10:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:37.233 { 00:21:37.233 "cntlid": 75, 00:21:37.233 "qid": 0, 00:21:37.233 "state": "enabled", 00:21:37.233 "thread": "nvmf_tgt_poll_group_000", 00:21:37.233 "listen_address": { 00:21:37.233 "trtype": "TCP", 00:21:37.233 "adrfam": "IPv4", 00:21:37.233 "traddr": "10.0.0.2", 00:21:37.233 "trsvcid": "4420" 00:21:37.233 }, 00:21:37.233 "peer_address": { 00:21:37.233 "trtype": "TCP", 00:21:37.233 "adrfam": "IPv4", 00:21:37.233 "traddr": "10.0.0.1", 00:21:37.233 "trsvcid": "57898" 00:21:37.233 }, 00:21:37.233 "auth": { 00:21:37.233 "state": "completed", 00:21:37.233 "digest": "sha384", 00:21:37.233 "dhgroup": "ffdhe4096" 00:21:37.233 } 00:21:37.233 } 00:21:37.233 ]' 00:21:37.233 10:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:37.233 10:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:37.233 10:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:37.233 10:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:37.233 10:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:37.494 10:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:37.494 10:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:37.494 10:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:37.494 10:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YjRiNjRjN2I5MDQwYjhkNTFkYTQ2ZGVkZWU4MzJjMDUr2Uou: --dhchap-ctrl-secret DHHC-1:02:MzA4NGQyMDY4OTY4NTM0MmRkNTBhYWExNTljZDhiNjczMDc2NjEwOWM4YzRlNGYxv2n4bA==: 00:21:38.061 10:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:38.061 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:38.061 10:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:38.061 10:30:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.061 10:30:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.061 10:30:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.061 10:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:38.061 10:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:38.061 10:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:38.321 10:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:21:38.321 10:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:38.321 10:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:38.321 10:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:38.321 10:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:38.321 10:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:38.321 10:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:38.321 10:30:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.321 10:30:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.321 10:30:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.321 10:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:38.321 10:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:21:38.579 00:21:38.579 10:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:38.579 10:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:38.579 10:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.838 10:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.838 10:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:38.838 10:30:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.838 10:30:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.838 10:30:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.838 10:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:38.838 { 00:21:38.838 "cntlid": 77, 00:21:38.838 "qid": 0, 00:21:38.838 "state": "enabled", 00:21:38.838 "thread": "nvmf_tgt_poll_group_000", 00:21:38.838 "listen_address": { 00:21:38.838 "trtype": "TCP", 00:21:38.838 "adrfam": "IPv4", 00:21:38.838 "traddr": "10.0.0.2", 00:21:38.838 "trsvcid": "4420" 00:21:38.838 }, 00:21:38.838 "peer_address": { 00:21:38.838 "trtype": "TCP", 00:21:38.838 "adrfam": "IPv4", 00:21:38.838 "traddr": "10.0.0.1", 00:21:38.838 "trsvcid": "57936" 00:21:38.838 }, 00:21:38.838 "auth": { 00:21:38.838 "state": "completed", 00:21:38.838 "digest": "sha384", 00:21:38.838 "dhgroup": "ffdhe4096" 00:21:38.838 } 00:21:38.838 } 00:21:38.838 ]' 00:21:38.838 10:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:38.838 10:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:38.838 10:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:38.838 10:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:38.838 10:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:38.838 10:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:38.838 10:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.838 10:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:39.096 10:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:M2YyYTNiYjZkOTg4OTBmOGI2NjU3OWFhM2ZmNmQ5OTUzZGE1N2NmNjJjYjc0MDhk96Cs+Q==: --dhchap-ctrl-secret DHHC-1:01:Y2YxM2VmZTk5MTIyYWUxMDliYjRlODcwYmYzYTc3OGb2bxF+: 00:21:39.662 10:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:39.662 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:39.662 10:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:39.662 10:30:24 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.662 10:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.662 10:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.662 10:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:39.662 10:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:39.662 10:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:39.922 10:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:21:39.922 10:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:39.922 10:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:39.922 10:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:39.922 10:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:39.922 10:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:39.922 10:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:21:39.922 10:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.922 10:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.922 10:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.922 10:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:39.922 10:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:40.181 00:21:40.181 10:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:40.181 10:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:40.181 10:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:40.439 10:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.439 10:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:40.439 10:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.439 10:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.439 10:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.439 10:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:40.439 { 00:21:40.439 "cntlid": 79, 00:21:40.439 "qid": 
0, 00:21:40.439 "state": "enabled", 00:21:40.439 "thread": "nvmf_tgt_poll_group_000", 00:21:40.439 "listen_address": { 00:21:40.439 "trtype": "TCP", 00:21:40.439 "adrfam": "IPv4", 00:21:40.439 "traddr": "10.0.0.2", 00:21:40.439 "trsvcid": "4420" 00:21:40.439 }, 00:21:40.439 "peer_address": { 00:21:40.439 "trtype": "TCP", 00:21:40.439 "adrfam": "IPv4", 00:21:40.440 "traddr": "10.0.0.1", 00:21:40.440 "trsvcid": "57958" 00:21:40.440 }, 00:21:40.440 "auth": { 00:21:40.440 "state": "completed", 00:21:40.440 "digest": "sha384", 00:21:40.440 "dhgroup": "ffdhe4096" 00:21:40.440 } 00:21:40.440 } 00:21:40.440 ]' 00:21:40.440 10:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:40.440 10:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:40.440 10:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:40.440 10:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:40.440 10:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:40.440 10:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:40.440 10:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:40.440 10:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:40.697 10:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZmVmOTA3YWVmYzBhYTEyYjUzY2E1YTY4N2FkYzE2Y2ZmYzdmNmY2ODk0ZTQxMmVlNzI0NzdiYmQ4YmI0NWNkM2xcN3c=: 00:21:41.264 10:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:41.264 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:41.264 10:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:41.264 10:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.264 10:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.264 10:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.264 10:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:41.264 10:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:41.264 10:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:41.264 10:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:41.523 10:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:21:41.523 10:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:41.523 10:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:41.523 10:30:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:41.523 10:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:41.523 10:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:41.523 10:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.523 10:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.523 10:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.523 10:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.523 10:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.523 10:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.782 00:21:41.782 10:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:41.782 10:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:41.782 10:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:42.041 10:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.041 10:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:42.041 10:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.041 10:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.041 10:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.041 10:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:42.041 { 00:21:42.041 "cntlid": 81, 00:21:42.041 "qid": 0, 00:21:42.041 "state": "enabled", 00:21:42.041 "thread": "nvmf_tgt_poll_group_000", 00:21:42.041 "listen_address": { 00:21:42.041 "trtype": "TCP", 00:21:42.041 "adrfam": "IPv4", 00:21:42.041 "traddr": "10.0.0.2", 00:21:42.041 "trsvcid": "4420" 00:21:42.041 }, 00:21:42.041 "peer_address": { 00:21:42.041 "trtype": "TCP", 00:21:42.041 "adrfam": "IPv4", 00:21:42.041 "traddr": "10.0.0.1", 00:21:42.041 "trsvcid": "57974" 00:21:42.041 }, 00:21:42.041 "auth": { 00:21:42.041 "state": "completed", 00:21:42.041 "digest": "sha384", 00:21:42.041 "dhgroup": "ffdhe6144" 00:21:42.041 } 00:21:42.041 } 00:21:42.041 ]' 00:21:42.041 10:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:42.041 10:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:42.041 10:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:42.041 10:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:42.041 10:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:42.041 10:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:42.042 10:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:42.042 10:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:42.299 10:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MjE4YWMxNzZmOWVhNDdmMGIyNzA4NmIzZjY1MGIzZDVhMWUwZjU2ODVkMzc1NTNhiq/5aw==: --dhchap-ctrl-secret DHHC-1:03:ODgxOWM2M2JkYjJiOTA5N2ViZmNlZGJhOTllNzU5ZTk4ZGRiNzQ3NGI1M2JlYzY4MGIwNDM2NzA4NjU4ZDYwOLHYE6U=: 00:21:42.865 10:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:42.865 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:42.865 10:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:42.865 10:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.865 10:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.865 10:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.865 10:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:42.865 10:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:42.865 10:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:43.123 10:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:21:43.123 10:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:43.123 10:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:43.123 10:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:43.123 10:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:43.123 10:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:43.123 10:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.123 10:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.124 10:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.124 10:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.124 10:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.124 10:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.381 00:21:43.382 10:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:43.382 10:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:43.382 10:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:43.639 10:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.639 10:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:43.639 10:30:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.639 10:30:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.639 10:30:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.639 10:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:43.639 { 00:21:43.639 "cntlid": 83, 00:21:43.639 "qid": 0, 00:21:43.639 "state": "enabled", 00:21:43.639 "thread": "nvmf_tgt_poll_group_000", 00:21:43.639 "listen_address": { 00:21:43.639 "trtype": "TCP", 00:21:43.639 "adrfam": "IPv4", 00:21:43.639 "traddr": "10.0.0.2", 00:21:43.639 "trsvcid": "4420" 00:21:43.639 }, 00:21:43.639 "peer_address": { 00:21:43.639 "trtype": "TCP", 00:21:43.639 "adrfam": "IPv4", 00:21:43.639 "traddr": "10.0.0.1", 00:21:43.639 "trsvcid": "51290" 00:21:43.639 }, 00:21:43.639 "auth": { 00:21:43.639 "state": "completed", 00:21:43.639 "digest": "sha384", 00:21:43.639 "dhgroup": "ffdhe6144" 00:21:43.639 } 00:21:43.639 } 00:21:43.639 ]' 00:21:43.639 10:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:43.639 10:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:43.639 10:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:43.639 10:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:43.639 10:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:43.639 10:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:43.639 10:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:43.639 10:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:43.898 10:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YjRiNjRjN2I5MDQwYjhkNTFkYTQ2ZGVkZWU4MzJjMDUr2Uou: --dhchap-ctrl-secret 
DHHC-1:02:MzA4NGQyMDY4OTY4NTM0MmRkNTBhYWExNTljZDhiNjczMDc2NjEwOWM4YzRlNGYxv2n4bA==: 00:21:44.465 10:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:44.465 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:44.465 10:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:44.465 10:30:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.465 10:30:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.465 10:30:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.465 10:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:44.465 10:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:44.465 10:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:44.723 10:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:21:44.723 10:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:44.723 10:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:44.723 10:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:44.723 10:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:44.723 10:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:44.723 10:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:44.723 10:30:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.723 10:30:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.723 10:30:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.723 10:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:44.723 10:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:44.981 00:21:44.981 10:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:44.981 10:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:44.982 10:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:45.240 10:30:30 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.240 10:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:45.240 10:30:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.240 10:30:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.240 10:30:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.240 10:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:45.240 { 00:21:45.240 "cntlid": 85, 00:21:45.240 "qid": 0, 00:21:45.240 "state": "enabled", 00:21:45.240 "thread": "nvmf_tgt_poll_group_000", 00:21:45.240 "listen_address": { 00:21:45.240 "trtype": "TCP", 00:21:45.240 "adrfam": "IPv4", 00:21:45.240 "traddr": "10.0.0.2", 00:21:45.240 "trsvcid": "4420" 00:21:45.240 }, 00:21:45.240 "peer_address": { 00:21:45.240 "trtype": "TCP", 00:21:45.240 "adrfam": "IPv4", 00:21:45.240 "traddr": "10.0.0.1", 00:21:45.240 "trsvcid": "51316" 00:21:45.240 }, 00:21:45.240 "auth": { 00:21:45.240 "state": "completed", 00:21:45.240 "digest": "sha384", 00:21:45.240 "dhgroup": "ffdhe6144" 00:21:45.240 } 00:21:45.240 } 00:21:45.240 ]' 00:21:45.240 10:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:45.240 10:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:45.240 10:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:45.240 10:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:45.240 10:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:45.240 10:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:45.240 10:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:45.240 10:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:45.499 10:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:M2YyYTNiYjZkOTg4OTBmOGI2NjU3OWFhM2ZmNmQ5OTUzZGE1N2NmNjJjYjc0MDhk96Cs+Q==: --dhchap-ctrl-secret DHHC-1:01:Y2YxM2VmZTk5MTIyYWUxMDliYjRlODcwYmYzYTc3OGb2bxF+: 00:21:46.067 10:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:46.067 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:46.067 10:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:46.067 10:30:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.067 10:30:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.067 10:30:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.067 10:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:46.067 10:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
00:21:46.067 10:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:46.327 10:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:21:46.327 10:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:46.327 10:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:46.327 10:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:46.327 10:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:46.327 10:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:46.327 10:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:21:46.327 10:30:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.327 10:30:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.327 10:30:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.327 10:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:46.327 10:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:46.586 00:21:46.586 10:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:46.586 10:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:46.586 10:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:46.845 10:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:46.845 10:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:46.845 10:30:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.845 10:30:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.845 10:30:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.845 10:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:46.845 { 00:21:46.845 "cntlid": 87, 00:21:46.845 "qid": 0, 00:21:46.845 "state": "enabled", 00:21:46.845 "thread": "nvmf_tgt_poll_group_000", 00:21:46.845 "listen_address": { 00:21:46.845 "trtype": "TCP", 00:21:46.845 "adrfam": "IPv4", 00:21:46.845 "traddr": "10.0.0.2", 00:21:46.846 "trsvcid": "4420" 00:21:46.846 }, 00:21:46.846 "peer_address": { 00:21:46.846 "trtype": "TCP", 00:21:46.846 "adrfam": "IPv4", 00:21:46.846 "traddr": "10.0.0.1", 00:21:46.846 "trsvcid": "51348" 00:21:46.846 }, 00:21:46.846 "auth": { 00:21:46.846 "state": "completed", 
00:21:46.846 "digest": "sha384", 00:21:46.846 "dhgroup": "ffdhe6144" 00:21:46.846 } 00:21:46.846 } 00:21:46.846 ]' 00:21:46.846 10:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:46.846 10:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:46.846 10:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:46.846 10:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:46.846 10:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:46.846 10:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:46.846 10:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:46.846 10:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:47.105 10:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZmVmOTA3YWVmYzBhYTEyYjUzY2E1YTY4N2FkYzE2Y2ZmYzdmNmY2ODk0ZTQxMmVlNzI0NzdiYmQ4YmI0NWNkM2xcN3c=: 00:21:47.673 10:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:47.673 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:47.673 10:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:47.673 10:30:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.673 10:30:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.673 10:30:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.673 10:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:47.673 10:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:47.673 10:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:47.673 10:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:47.932 10:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:21:47.932 10:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:47.932 10:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:47.932 10:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:47.932 10:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:47.932 10:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:47.932 10:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:21:47.932 10:30:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.932 10:30:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.932 10:30:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.933 10:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:47.933 10:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:48.499 00:21:48.499 10:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:48.499 10:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:48.499 10:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:48.499 10:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:48.499 10:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:48.499 10:30:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.499 10:30:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.499 10:30:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.499 10:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:48.499 { 00:21:48.499 "cntlid": 89, 00:21:48.499 "qid": 0, 00:21:48.499 "state": "enabled", 00:21:48.499 "thread": "nvmf_tgt_poll_group_000", 00:21:48.499 "listen_address": { 00:21:48.499 "trtype": "TCP", 00:21:48.499 "adrfam": "IPv4", 00:21:48.499 "traddr": "10.0.0.2", 00:21:48.499 "trsvcid": "4420" 00:21:48.499 }, 00:21:48.499 "peer_address": { 00:21:48.499 "trtype": "TCP", 00:21:48.499 "adrfam": "IPv4", 00:21:48.499 "traddr": "10.0.0.1", 00:21:48.499 "trsvcid": "51374" 00:21:48.499 }, 00:21:48.499 "auth": { 00:21:48.499 "state": "completed", 00:21:48.499 "digest": "sha384", 00:21:48.499 "dhgroup": "ffdhe8192" 00:21:48.499 } 00:21:48.499 } 00:21:48.499 ]' 00:21:48.500 10:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:48.757 10:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:48.757 10:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:48.757 10:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:48.757 10:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:48.757 10:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:48.757 10:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:48.757 10:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:49.016 10:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MjE4YWMxNzZmOWVhNDdmMGIyNzA4NmIzZjY1MGIzZDVhMWUwZjU2ODVkMzc1NTNhiq/5aw==: --dhchap-ctrl-secret DHHC-1:03:ODgxOWM2M2JkYjJiOTA5N2ViZmNlZGJhOTllNzU5ZTk4ZGRiNzQ3NGI1M2JlYzY4MGIwNDM2NzA4NjU4ZDYwOLHYE6U=: 00:21:49.584 10:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:49.584 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:49.584 10:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:49.584 10:30:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.584 10:30:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.584 10:30:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.584 10:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:49.584 10:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:49.584 10:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:49.584 10:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:21:49.584 10:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:49.584 10:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:49.584 10:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:49.584 10:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:49.584 10:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:49.584 10:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.584 10:30:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.584 10:30:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.584 10:30:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.584 10:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.584 10:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
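[Editor's note, not part of the captured console output: the log above and below repeats one round of a connect/authenticate loop for each digest, DH group, and key index. The sketch below condenses a single round (sha384 / ffdhe8192 / key1) using only commands that appear verbatim in this output. The RPC paths, addresses, NQNs, and the DHHC-1 secrets are copied from the log; it assumes the keyring entries key0..key3 / ckey0..ckey3 were registered earlier in the test and that the target's RPC server listens on the default socket.]

#!/usr/bin/env bash
set -e

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostsock=/var/tmp/host.sock
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
# key1/ckey1 test secrets as shown in the log output
secret='DHHC-1:01:YjRiNjRjN2I5MDQwYjhkNTFkYTQ2ZGVkZWU4MzJjMDUr2Uou:'
ctrl_secret='DHHC-1:02:MzA4NGQyMDY4OTY4NTM0MmRkNTBhYWExNTljZDhiNjczMDc2NjEwOWM4YzRlNGYxv2n4bA==:'

# Restrict the host-side NVMe driver to the digest and DH group under test.
"$rpc" -s "$hostsock" bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192

# Register the host on the target subsystem with this round's key pair.
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Attach a controller through the host RPC server, then confirm the qpair
# negotiated the expected digest/dhgroup and reached the "completed" state.
"$rpc" -s "$hostsock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
"$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'
"$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0

# Repeat the handshake with the kernel initiator, passing the secrets directly.
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
        --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 \
        --dhchap-secret "$secret" --dhchap-ctrl-secret "$ctrl_secret"
nvme disconnect -n "$subnqn"

# Clean up before the next digest/dhgroup/key combination.
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

[End of editor's note; the captured log continues below.]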
00:21:50.153 00:21:50.153 10:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:50.153 10:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:50.153 10:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:50.446 10:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.446 10:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:50.446 10:30:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.446 10:30:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.446 10:30:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.446 10:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:50.446 { 00:21:50.446 "cntlid": 91, 00:21:50.446 "qid": 0, 00:21:50.446 "state": "enabled", 00:21:50.446 "thread": "nvmf_tgt_poll_group_000", 00:21:50.446 "listen_address": { 00:21:50.446 "trtype": "TCP", 00:21:50.446 "adrfam": "IPv4", 00:21:50.446 "traddr": "10.0.0.2", 00:21:50.446 "trsvcid": "4420" 00:21:50.446 }, 00:21:50.446 "peer_address": { 00:21:50.446 "trtype": "TCP", 00:21:50.446 "adrfam": "IPv4", 00:21:50.446 "traddr": "10.0.0.1", 00:21:50.446 "trsvcid": "51398" 00:21:50.446 }, 00:21:50.446 "auth": { 00:21:50.446 "state": "completed", 00:21:50.446 "digest": "sha384", 00:21:50.446 "dhgroup": "ffdhe8192" 00:21:50.446 } 00:21:50.447 } 00:21:50.447 ]' 00:21:50.447 10:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:50.447 10:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:50.447 10:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:50.447 10:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:50.447 10:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:50.447 10:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:50.447 10:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:50.447 10:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:50.706 10:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YjRiNjRjN2I5MDQwYjhkNTFkYTQ2ZGVkZWU4MzJjMDUr2Uou: --dhchap-ctrl-secret DHHC-1:02:MzA4NGQyMDY4OTY4NTM0MmRkNTBhYWExNTljZDhiNjczMDc2NjEwOWM4YzRlNGYxv2n4bA==: 00:21:51.275 10:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:51.275 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:51.275 10:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:51.275 10:30:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:21:51.275 10:30:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.275 10:30:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.275 10:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:51.275 10:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:51.275 10:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:51.534 10:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:21:51.535 10:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:51.535 10:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:51.535 10:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:51.535 10:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:51.535 10:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:51.535 10:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:51.535 10:30:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.535 10:30:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.535 10:30:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.535 10:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:51.535 10:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:51.794 00:21:51.794 10:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:51.794 10:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:51.794 10:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.053 10:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.053 10:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:52.053 10:30:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.053 10:30:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.053 10:30:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.053 10:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:52.053 { 
00:21:52.053 "cntlid": 93, 00:21:52.053 "qid": 0, 00:21:52.053 "state": "enabled", 00:21:52.053 "thread": "nvmf_tgt_poll_group_000", 00:21:52.053 "listen_address": { 00:21:52.053 "trtype": "TCP", 00:21:52.053 "adrfam": "IPv4", 00:21:52.053 "traddr": "10.0.0.2", 00:21:52.053 "trsvcid": "4420" 00:21:52.053 }, 00:21:52.053 "peer_address": { 00:21:52.053 "trtype": "TCP", 00:21:52.053 "adrfam": "IPv4", 00:21:52.053 "traddr": "10.0.0.1", 00:21:52.053 "trsvcid": "51420" 00:21:52.054 }, 00:21:52.054 "auth": { 00:21:52.054 "state": "completed", 00:21:52.054 "digest": "sha384", 00:21:52.054 "dhgroup": "ffdhe8192" 00:21:52.054 } 00:21:52.054 } 00:21:52.054 ]' 00:21:52.054 10:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:52.054 10:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:52.054 10:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:52.312 10:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:52.312 10:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:52.312 10:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:52.312 10:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:52.312 10:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:52.312 10:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:M2YyYTNiYjZkOTg4OTBmOGI2NjU3OWFhM2ZmNmQ5OTUzZGE1N2NmNjJjYjc0MDhk96Cs+Q==: --dhchap-ctrl-secret DHHC-1:01:Y2YxM2VmZTk5MTIyYWUxMDliYjRlODcwYmYzYTc3OGb2bxF+: 00:21:52.880 10:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:52.880 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:52.880 10:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:52.880 10:30:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.880 10:30:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.880 10:30:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.880 10:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:52.880 10:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:52.880 10:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:53.139 10:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:21:53.139 10:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:53.139 10:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:53.139 10:30:38 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:53.139 10:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:53.139 10:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:53.139 10:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:21:53.139 10:30:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.139 10:30:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.139 10:30:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.139 10:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:53.139 10:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:53.708 00:21:53.708 10:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:53.708 10:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:53.708 10:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:53.967 10:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:53.967 10:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:53.967 10:30:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.967 10:30:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.967 10:30:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.967 10:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:53.967 { 00:21:53.967 "cntlid": 95, 00:21:53.967 "qid": 0, 00:21:53.967 "state": "enabled", 00:21:53.967 "thread": "nvmf_tgt_poll_group_000", 00:21:53.967 "listen_address": { 00:21:53.967 "trtype": "TCP", 00:21:53.967 "adrfam": "IPv4", 00:21:53.967 "traddr": "10.0.0.2", 00:21:53.967 "trsvcid": "4420" 00:21:53.967 }, 00:21:53.967 "peer_address": { 00:21:53.967 "trtype": "TCP", 00:21:53.967 "adrfam": "IPv4", 00:21:53.967 "traddr": "10.0.0.1", 00:21:53.967 "trsvcid": "48000" 00:21:53.967 }, 00:21:53.967 "auth": { 00:21:53.967 "state": "completed", 00:21:53.967 "digest": "sha384", 00:21:53.967 "dhgroup": "ffdhe8192" 00:21:53.967 } 00:21:53.967 } 00:21:53.967 ]' 00:21:53.967 10:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:53.967 10:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:53.967 10:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:53.967 10:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:53.967 10:30:38 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:53.967 10:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:53.967 10:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:53.967 10:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:54.226 10:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZmVmOTA3YWVmYzBhYTEyYjUzY2E1YTY4N2FkYzE2Y2ZmYzdmNmY2ODk0ZTQxMmVlNzI0NzdiYmQ4YmI0NWNkM2xcN3c=: 00:21:54.795 10:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:54.795 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:54.795 10:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:54.795 10:30:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.795 10:30:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.795 10:30:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.795 10:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:21:54.795 10:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:54.795 10:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:54.795 10:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:54.795 10:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:55.054 10:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:21:55.054 10:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:55.054 10:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:55.054 10:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:55.054 10:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:55.054 10:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:55.054 10:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:55.054 10:30:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.054 10:30:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.054 10:30:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.054 10:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:55.054 10:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:55.312 00:21:55.312 10:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:55.312 10:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:55.312 10:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:55.312 10:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:55.312 10:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:55.312 10:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.312 10:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.312 10:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.312 10:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:55.312 { 00:21:55.312 "cntlid": 97, 00:21:55.312 "qid": 0, 00:21:55.312 "state": "enabled", 00:21:55.312 "thread": "nvmf_tgt_poll_group_000", 00:21:55.312 "listen_address": { 00:21:55.312 "trtype": "TCP", 00:21:55.312 "adrfam": "IPv4", 00:21:55.312 "traddr": "10.0.0.2", 00:21:55.312 "trsvcid": "4420" 00:21:55.312 }, 00:21:55.312 "peer_address": { 00:21:55.312 "trtype": "TCP", 00:21:55.312 "adrfam": "IPv4", 00:21:55.312 "traddr": "10.0.0.1", 00:21:55.312 "trsvcid": "48026" 00:21:55.312 }, 00:21:55.312 "auth": { 00:21:55.312 "state": "completed", 00:21:55.312 "digest": "sha512", 00:21:55.312 "dhgroup": "null" 00:21:55.312 } 00:21:55.312 } 00:21:55.312 ]' 00:21:55.312 10:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:55.571 10:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:55.571 10:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:55.571 10:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:55.571 10:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:55.571 10:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:55.571 10:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:55.571 10:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:55.830 10:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MjE4YWMxNzZmOWVhNDdmMGIyNzA4NmIzZjY1MGIzZDVhMWUwZjU2ODVkMzc1NTNhiq/5aw==: --dhchap-ctrl-secret 
DHHC-1:03:ODgxOWM2M2JkYjJiOTA5N2ViZmNlZGJhOTllNzU5ZTk4ZGRiNzQ3NGI1M2JlYzY4MGIwNDM2NzA4NjU4ZDYwOLHYE6U=: 00:21:56.398 10:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:56.398 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:56.398 10:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:56.398 10:30:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.398 10:30:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.398 10:30:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.398 10:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:56.398 10:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:56.398 10:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:56.398 10:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:21:56.398 10:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:56.398 10:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:56.398 10:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:56.398 10:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:56.398 10:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:56.398 10:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:56.398 10:30:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.398 10:30:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.398 10:30:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.398 10:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:56.398 10:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:56.658 00:21:56.658 10:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:56.658 10:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:56.658 10:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:56.917 10:30:41 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.917 10:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:56.917 10:30:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.917 10:30:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.917 10:30:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.917 10:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:56.917 { 00:21:56.917 "cntlid": 99, 00:21:56.917 "qid": 0, 00:21:56.917 "state": "enabled", 00:21:56.917 "thread": "nvmf_tgt_poll_group_000", 00:21:56.917 "listen_address": { 00:21:56.917 "trtype": "TCP", 00:21:56.917 "adrfam": "IPv4", 00:21:56.917 "traddr": "10.0.0.2", 00:21:56.917 "trsvcid": "4420" 00:21:56.917 }, 00:21:56.917 "peer_address": { 00:21:56.917 "trtype": "TCP", 00:21:56.917 "adrfam": "IPv4", 00:21:56.917 "traddr": "10.0.0.1", 00:21:56.917 "trsvcid": "48050" 00:21:56.917 }, 00:21:56.917 "auth": { 00:21:56.917 "state": "completed", 00:21:56.917 "digest": "sha512", 00:21:56.917 "dhgroup": "null" 00:21:56.917 } 00:21:56.917 } 00:21:56.917 ]' 00:21:56.917 10:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:56.917 10:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:56.917 10:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:56.917 10:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:56.917 10:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:57.176 10:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:57.176 10:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:57.176 10:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:57.176 10:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YjRiNjRjN2I5MDQwYjhkNTFkYTQ2ZGVkZWU4MzJjMDUr2Uou: --dhchap-ctrl-secret DHHC-1:02:MzA4NGQyMDY4OTY4NTM0MmRkNTBhYWExNTljZDhiNjczMDc2NjEwOWM4YzRlNGYxv2n4bA==: 00:21:57.743 10:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:57.743 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:57.743 10:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:57.743 10:30:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.743 10:30:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.743 10:30:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.743 10:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:57.743 10:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:57.743 10:30:42 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:58.002 10:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:21:58.002 10:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:58.002 10:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:58.002 10:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:58.002 10:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:58.002 10:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:58.002 10:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:58.002 10:30:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.002 10:30:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.002 10:30:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.002 10:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:58.002 10:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:58.261 00:21:58.261 10:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:58.261 10:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:58.261 10:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:58.520 10:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.520 10:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:58.520 10:30:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.520 10:30:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.520 10:30:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.520 10:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:58.520 { 00:21:58.520 "cntlid": 101, 00:21:58.520 "qid": 0, 00:21:58.520 "state": "enabled", 00:21:58.520 "thread": "nvmf_tgt_poll_group_000", 00:21:58.520 "listen_address": { 00:21:58.520 "trtype": "TCP", 00:21:58.520 "adrfam": "IPv4", 00:21:58.520 "traddr": "10.0.0.2", 00:21:58.520 "trsvcid": "4420" 00:21:58.520 }, 00:21:58.520 "peer_address": { 00:21:58.520 "trtype": "TCP", 00:21:58.520 "adrfam": "IPv4", 00:21:58.520 "traddr": "10.0.0.1", 00:21:58.520 "trsvcid": "48058" 00:21:58.520 }, 00:21:58.520 "auth": 
{ 00:21:58.520 "state": "completed", 00:21:58.520 "digest": "sha512", 00:21:58.520 "dhgroup": "null" 00:21:58.520 } 00:21:58.520 } 00:21:58.520 ]' 00:21:58.520 10:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:58.520 10:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:58.520 10:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:58.520 10:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:58.520 10:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:58.520 10:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:58.520 10:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:58.520 10:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:58.778 10:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:M2YyYTNiYjZkOTg4OTBmOGI2NjU3OWFhM2ZmNmQ5OTUzZGE1N2NmNjJjYjc0MDhk96Cs+Q==: --dhchap-ctrl-secret DHHC-1:01:Y2YxM2VmZTk5MTIyYWUxMDliYjRlODcwYmYzYTc3OGb2bxF+: 00:21:59.347 10:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:59.347 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:59.347 10:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:59.347 10:30:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.347 10:30:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.347 10:30:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.347 10:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:59.347 10:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:59.347 10:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:59.347 10:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:21:59.347 10:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:59.347 10:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:59.347 10:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:59.347 10:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:59.347 10:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:59.347 10:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:21:59.347 10:30:44 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.347 10:30:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.347 10:30:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.347 10:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:59.347 10:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:59.606 00:21:59.607 10:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:59.607 10:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:59.607 10:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:59.865 10:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.865 10:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:59.865 10:30:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.865 10:30:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.865 10:30:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.865 10:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:59.865 { 00:21:59.865 "cntlid": 103, 00:21:59.865 "qid": 0, 00:21:59.865 "state": "enabled", 00:21:59.865 "thread": "nvmf_tgt_poll_group_000", 00:21:59.865 "listen_address": { 00:21:59.865 "trtype": "TCP", 00:21:59.865 "adrfam": "IPv4", 00:21:59.865 "traddr": "10.0.0.2", 00:21:59.865 "trsvcid": "4420" 00:21:59.865 }, 00:21:59.865 "peer_address": { 00:21:59.865 "trtype": "TCP", 00:21:59.865 "adrfam": "IPv4", 00:21:59.865 "traddr": "10.0.0.1", 00:21:59.865 "trsvcid": "48086" 00:21:59.865 }, 00:21:59.865 "auth": { 00:21:59.865 "state": "completed", 00:21:59.865 "digest": "sha512", 00:21:59.865 "dhgroup": "null" 00:21:59.865 } 00:21:59.865 } 00:21:59.865 ]' 00:21:59.865 10:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:59.865 10:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:59.865 10:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:59.865 10:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:59.865 10:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:59.865 10:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:59.865 10:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:59.865 10:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:00.125 10:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZmVmOTA3YWVmYzBhYTEyYjUzY2E1YTY4N2FkYzE2Y2ZmYzdmNmY2ODk0ZTQxMmVlNzI0NzdiYmQ4YmI0NWNkM2xcN3c=: 00:22:00.704 10:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:00.704 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:00.704 10:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:00.704 10:30:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.704 10:30:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.704 10:30:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.704 10:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:00.704 10:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:00.704 10:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:00.704 10:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:00.963 10:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:22:00.963 10:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:00.963 10:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:00.963 10:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:00.963 10:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:00.963 10:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:00.963 10:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:00.963 10:30:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.963 10:30:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.963 10:30:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.963 10:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:00.963 10:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:01.221 00:22:01.221 10:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:01.221 10:30:46 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:01.221 10:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:01.481 10:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:01.481 10:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:01.481 10:30:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.481 10:30:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.481 10:30:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.481 10:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:01.481 { 00:22:01.481 "cntlid": 105, 00:22:01.481 "qid": 0, 00:22:01.481 "state": "enabled", 00:22:01.481 "thread": "nvmf_tgt_poll_group_000", 00:22:01.481 "listen_address": { 00:22:01.481 "trtype": "TCP", 00:22:01.481 "adrfam": "IPv4", 00:22:01.481 "traddr": "10.0.0.2", 00:22:01.481 "trsvcid": "4420" 00:22:01.481 }, 00:22:01.481 "peer_address": { 00:22:01.481 "trtype": "TCP", 00:22:01.481 "adrfam": "IPv4", 00:22:01.481 "traddr": "10.0.0.1", 00:22:01.481 "trsvcid": "48118" 00:22:01.481 }, 00:22:01.481 "auth": { 00:22:01.481 "state": "completed", 00:22:01.481 "digest": "sha512", 00:22:01.481 "dhgroup": "ffdhe2048" 00:22:01.481 } 00:22:01.481 } 00:22:01.481 ]' 00:22:01.481 10:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:01.481 10:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:01.481 10:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:01.481 10:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:01.481 10:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:01.481 10:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:01.481 10:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:01.481 10:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:01.739 10:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MjE4YWMxNzZmOWVhNDdmMGIyNzA4NmIzZjY1MGIzZDVhMWUwZjU2ODVkMzc1NTNhiq/5aw==: --dhchap-ctrl-secret DHHC-1:03:ODgxOWM2M2JkYjJiOTA5N2ViZmNlZGJhOTllNzU5ZTk4ZGRiNzQ3NGI1M2JlYzY4MGIwNDM2NzA4NjU4ZDYwOLHYE6U=: 00:22:02.306 10:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:02.306 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:02.306 10:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:02.306 10:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.306 10:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:22:02.306 10:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.306 10:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:02.306 10:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:02.306 10:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:02.566 10:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:22:02.566 10:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:02.566 10:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:02.566 10:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:02.566 10:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:02.566 10:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:02.566 10:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:02.566 10:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.566 10:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.566 10:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.566 10:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:02.566 10:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:02.825 00:22:02.825 10:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:02.825 10:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:02.825 10:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:02.825 10:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:02.825 10:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:02.825 10:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.825 10:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.825 10:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.825 10:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:02.825 { 00:22:02.825 "cntlid": 107, 00:22:02.825 "qid": 0, 00:22:02.825 "state": "enabled", 00:22:02.825 "thread": 
"nvmf_tgt_poll_group_000", 00:22:02.825 "listen_address": { 00:22:02.825 "trtype": "TCP", 00:22:02.825 "adrfam": "IPv4", 00:22:02.825 "traddr": "10.0.0.2", 00:22:02.825 "trsvcid": "4420" 00:22:02.825 }, 00:22:02.825 "peer_address": { 00:22:02.825 "trtype": "TCP", 00:22:02.825 "adrfam": "IPv4", 00:22:02.825 "traddr": "10.0.0.1", 00:22:02.825 "trsvcid": "57998" 00:22:02.825 }, 00:22:02.825 "auth": { 00:22:02.825 "state": "completed", 00:22:02.825 "digest": "sha512", 00:22:02.825 "dhgroup": "ffdhe2048" 00:22:02.825 } 00:22:02.825 } 00:22:02.825 ]' 00:22:02.825 10:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:03.084 10:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:03.084 10:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:03.084 10:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:03.084 10:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:03.084 10:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:03.084 10:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:03.084 10:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:03.343 10:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YjRiNjRjN2I5MDQwYjhkNTFkYTQ2ZGVkZWU4MzJjMDUr2Uou: --dhchap-ctrl-secret DHHC-1:02:MzA4NGQyMDY4OTY4NTM0MmRkNTBhYWExNTljZDhiNjczMDc2NjEwOWM4YzRlNGYxv2n4bA==: 00:22:03.911 10:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:03.911 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:03.911 10:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:03.911 10:30:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.911 10:30:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.911 10:30:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.911 10:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:03.911 10:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:03.911 10:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:03.911 10:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:22:03.911 10:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:03.911 10:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:03.911 10:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:03.911 10:30:48 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:03.911 10:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:03.911 10:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:03.911 10:30:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.911 10:30:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.911 10:30:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.911 10:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:03.911 10:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:04.170 00:22:04.170 10:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:04.170 10:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:04.170 10:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:04.429 10:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:04.429 10:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:04.429 10:30:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.429 10:30:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.429 10:30:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.429 10:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:04.429 { 00:22:04.429 "cntlid": 109, 00:22:04.429 "qid": 0, 00:22:04.429 "state": "enabled", 00:22:04.429 "thread": "nvmf_tgt_poll_group_000", 00:22:04.429 "listen_address": { 00:22:04.429 "trtype": "TCP", 00:22:04.429 "adrfam": "IPv4", 00:22:04.429 "traddr": "10.0.0.2", 00:22:04.429 "trsvcid": "4420" 00:22:04.429 }, 00:22:04.429 "peer_address": { 00:22:04.429 "trtype": "TCP", 00:22:04.429 "adrfam": "IPv4", 00:22:04.429 "traddr": "10.0.0.1", 00:22:04.429 "trsvcid": "58016" 00:22:04.429 }, 00:22:04.429 "auth": { 00:22:04.429 "state": "completed", 00:22:04.429 "digest": "sha512", 00:22:04.429 "dhgroup": "ffdhe2048" 00:22:04.429 } 00:22:04.429 } 00:22:04.429 ]' 00:22:04.429 10:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:04.429 10:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:04.429 10:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:04.429 10:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:04.429 10:30:49 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:04.690 10:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:04.690 10:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:04.690 10:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:04.690 10:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:M2YyYTNiYjZkOTg4OTBmOGI2NjU3OWFhM2ZmNmQ5OTUzZGE1N2NmNjJjYjc0MDhk96Cs+Q==: --dhchap-ctrl-secret DHHC-1:01:Y2YxM2VmZTk5MTIyYWUxMDliYjRlODcwYmYzYTc3OGb2bxF+: 00:22:05.258 10:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:05.258 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:05.258 10:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:05.258 10:30:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.258 10:30:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.258 10:30:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.258 10:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:05.258 10:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:05.258 10:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:05.517 10:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:22:05.517 10:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:05.517 10:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:05.517 10:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:05.517 10:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:05.517 10:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:05.517 10:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:22:05.517 10:30:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.517 10:30:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.517 10:30:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.517 10:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:05.517 10:30:50 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:05.776 00:22:05.776 10:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:05.776 10:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:05.776 10:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:06.035 10:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.035 10:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:06.035 10:30:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.035 10:30:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.035 10:30:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.035 10:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:06.035 { 00:22:06.035 "cntlid": 111, 00:22:06.035 "qid": 0, 00:22:06.035 "state": "enabled", 00:22:06.035 "thread": "nvmf_tgt_poll_group_000", 00:22:06.035 "listen_address": { 00:22:06.035 "trtype": "TCP", 00:22:06.035 "adrfam": "IPv4", 00:22:06.035 "traddr": "10.0.0.2", 00:22:06.035 "trsvcid": "4420" 00:22:06.035 }, 00:22:06.035 "peer_address": { 00:22:06.035 "trtype": "TCP", 00:22:06.035 "adrfam": "IPv4", 00:22:06.035 "traddr": "10.0.0.1", 00:22:06.035 "trsvcid": "58044" 00:22:06.035 }, 00:22:06.035 "auth": { 00:22:06.035 "state": "completed", 00:22:06.035 "digest": "sha512", 00:22:06.035 "dhgroup": "ffdhe2048" 00:22:06.035 } 00:22:06.035 } 00:22:06.035 ]' 00:22:06.035 10:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:06.035 10:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:06.035 10:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:06.035 10:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:06.035 10:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:06.035 10:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:06.035 10:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:06.035 10:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:06.294 10:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZmVmOTA3YWVmYzBhYTEyYjUzY2E1YTY4N2FkYzE2Y2ZmYzdmNmY2ODk0ZTQxMmVlNzI0NzdiYmQ4YmI0NWNkM2xcN3c=: 00:22:06.862 10:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:06.862 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:06.862 10:30:51 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:06.862 10:30:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.862 10:30:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.862 10:30:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.862 10:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:06.862 10:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:06.862 10:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:06.862 10:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:07.121 10:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:22:07.121 10:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:07.121 10:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:07.121 10:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:07.121 10:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:07.121 10:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:07.121 10:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:07.121 10:30:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.121 10:30:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.121 10:30:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.121 10:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:07.121 10:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:07.381 00:22:07.381 10:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:07.381 10:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:07.381 10:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:07.381 10:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.381 10:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:07.381 10:30:52 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.381 10:30:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.381 10:30:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.381 10:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:07.381 { 00:22:07.381 "cntlid": 113, 00:22:07.381 "qid": 0, 00:22:07.381 "state": "enabled", 00:22:07.381 "thread": "nvmf_tgt_poll_group_000", 00:22:07.381 "listen_address": { 00:22:07.381 "trtype": "TCP", 00:22:07.381 "adrfam": "IPv4", 00:22:07.381 "traddr": "10.0.0.2", 00:22:07.381 "trsvcid": "4420" 00:22:07.381 }, 00:22:07.381 "peer_address": { 00:22:07.381 "trtype": "TCP", 00:22:07.381 "adrfam": "IPv4", 00:22:07.381 "traddr": "10.0.0.1", 00:22:07.381 "trsvcid": "58076" 00:22:07.381 }, 00:22:07.381 "auth": { 00:22:07.381 "state": "completed", 00:22:07.381 "digest": "sha512", 00:22:07.381 "dhgroup": "ffdhe3072" 00:22:07.381 } 00:22:07.381 } 00:22:07.381 ]' 00:22:07.381 10:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:07.640 10:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:07.640 10:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:07.640 10:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:07.640 10:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:07.640 10:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:07.640 10:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:07.640 10:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:07.899 10:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MjE4YWMxNzZmOWVhNDdmMGIyNzA4NmIzZjY1MGIzZDVhMWUwZjU2ODVkMzc1NTNhiq/5aw==: --dhchap-ctrl-secret DHHC-1:03:ODgxOWM2M2JkYjJiOTA5N2ViZmNlZGJhOTllNzU5ZTk4ZGRiNzQ3NGI1M2JlYzY4MGIwNDM2NzA4NjU4ZDYwOLHYE6U=: 00:22:08.466 10:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:08.466 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:08.466 10:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:08.466 10:30:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.466 10:30:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.466 10:30:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.466 10:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:08.466 10:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:08.466 10:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:08.466 10:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:22:08.466 10:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:08.466 10:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:08.466 10:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:08.466 10:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:08.466 10:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:08.466 10:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:08.466 10:30:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.466 10:30:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.466 10:30:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.466 10:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:08.466 10:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:08.724 00:22:08.724 10:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:08.724 10:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:08.724 10:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:08.983 10:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.983 10:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:08.983 10:30:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.983 10:30:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.983 10:30:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.983 10:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:08.983 { 00:22:08.983 "cntlid": 115, 00:22:08.983 "qid": 0, 00:22:08.983 "state": "enabled", 00:22:08.983 "thread": "nvmf_tgt_poll_group_000", 00:22:08.983 "listen_address": { 00:22:08.983 "trtype": "TCP", 00:22:08.983 "adrfam": "IPv4", 00:22:08.983 "traddr": "10.0.0.2", 00:22:08.983 "trsvcid": "4420" 00:22:08.983 }, 00:22:08.983 "peer_address": { 00:22:08.983 "trtype": "TCP", 00:22:08.983 "adrfam": "IPv4", 00:22:08.983 "traddr": "10.0.0.1", 00:22:08.983 "trsvcid": "58114" 00:22:08.983 }, 00:22:08.983 "auth": { 00:22:08.983 "state": "completed", 00:22:08.983 "digest": "sha512", 00:22:08.983 "dhgroup": "ffdhe3072" 00:22:08.983 } 00:22:08.983 } 
00:22:08.983 ]' 00:22:08.983 10:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:08.983 10:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:08.983 10:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:08.983 10:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:08.983 10:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:09.242 10:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:09.242 10:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:09.242 10:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:09.242 10:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YjRiNjRjN2I5MDQwYjhkNTFkYTQ2ZGVkZWU4MzJjMDUr2Uou: --dhchap-ctrl-secret DHHC-1:02:MzA4NGQyMDY4OTY4NTM0MmRkNTBhYWExNTljZDhiNjczMDc2NjEwOWM4YzRlNGYxv2n4bA==: 00:22:09.810 10:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:09.810 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:09.810 10:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:09.810 10:30:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.810 10:30:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.810 10:30:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.810 10:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:09.810 10:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:09.810 10:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:10.069 10:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:22:10.069 10:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:10.069 10:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:10.069 10:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:10.069 10:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:10.069 10:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:10.069 10:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:10.069 10:30:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.069 10:30:54 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.069 10:30:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.070 10:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:10.070 10:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:10.349 00:22:10.349 10:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:10.349 10:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:10.349 10:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:10.608 10:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:10.608 10:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:10.608 10:30:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.608 10:30:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.608 10:30:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.608 10:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:10.608 { 00:22:10.608 "cntlid": 117, 00:22:10.608 "qid": 0, 00:22:10.608 "state": "enabled", 00:22:10.608 "thread": "nvmf_tgt_poll_group_000", 00:22:10.608 "listen_address": { 00:22:10.608 "trtype": "TCP", 00:22:10.608 "adrfam": "IPv4", 00:22:10.608 "traddr": "10.0.0.2", 00:22:10.608 "trsvcid": "4420" 00:22:10.608 }, 00:22:10.608 "peer_address": { 00:22:10.608 "trtype": "TCP", 00:22:10.608 "adrfam": "IPv4", 00:22:10.608 "traddr": "10.0.0.1", 00:22:10.608 "trsvcid": "58146" 00:22:10.608 }, 00:22:10.608 "auth": { 00:22:10.608 "state": "completed", 00:22:10.608 "digest": "sha512", 00:22:10.608 "dhgroup": "ffdhe3072" 00:22:10.608 } 00:22:10.608 } 00:22:10.608 ]' 00:22:10.608 10:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:10.608 10:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:10.608 10:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:10.608 10:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:10.608 10:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:10.608 10:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:10.608 10:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:10.608 10:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:10.868 10:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:M2YyYTNiYjZkOTg4OTBmOGI2NjU3OWFhM2ZmNmQ5OTUzZGE1N2NmNjJjYjc0MDhk96Cs+Q==: --dhchap-ctrl-secret DHHC-1:01:Y2YxM2VmZTk5MTIyYWUxMDliYjRlODcwYmYzYTc3OGb2bxF+: 00:22:11.436 10:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:11.436 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:11.436 10:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:11.436 10:30:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.436 10:30:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.436 10:30:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.436 10:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:11.436 10:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:11.436 10:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:11.695 10:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:22:11.695 10:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:11.695 10:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:11.695 10:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:11.695 10:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:11.695 10:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:11.695 10:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:22:11.695 10:30:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.695 10:30:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.695 10:30:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.695 10:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:11.695 10:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:11.954 00:22:11.954 10:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:11.954 10:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:22:11.954 10:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:11.954 10:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.954 10:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:11.954 10:30:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.954 10:30:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.954 10:30:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.954 10:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:11.954 { 00:22:11.954 "cntlid": 119, 00:22:11.954 "qid": 0, 00:22:11.954 "state": "enabled", 00:22:11.954 "thread": "nvmf_tgt_poll_group_000", 00:22:11.954 "listen_address": { 00:22:11.954 "trtype": "TCP", 00:22:11.954 "adrfam": "IPv4", 00:22:11.954 "traddr": "10.0.0.2", 00:22:11.954 "trsvcid": "4420" 00:22:11.954 }, 00:22:11.954 "peer_address": { 00:22:11.954 "trtype": "TCP", 00:22:11.954 "adrfam": "IPv4", 00:22:11.954 "traddr": "10.0.0.1", 00:22:11.954 "trsvcid": "58166" 00:22:11.954 }, 00:22:11.954 "auth": { 00:22:11.954 "state": "completed", 00:22:11.954 "digest": "sha512", 00:22:11.954 "dhgroup": "ffdhe3072" 00:22:11.954 } 00:22:11.954 } 00:22:11.954 ]' 00:22:11.954 10:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:12.213 10:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:12.213 10:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:12.213 10:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:12.213 10:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:12.213 10:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:12.213 10:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:12.213 10:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:12.472 10:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZmVmOTA3YWVmYzBhYTEyYjUzY2E1YTY4N2FkYzE2Y2ZmYzdmNmY2ODk0ZTQxMmVlNzI0NzdiYmQ4YmI0NWNkM2xcN3c=: 00:22:13.041 10:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:13.041 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:13.041 10:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:13.041 10:30:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.041 10:30:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.041 10:30:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.041 10:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:13.041 10:30:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:13.041 10:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:13.041 10:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:13.041 10:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:22:13.041 10:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:13.041 10:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:13.041 10:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:13.041 10:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:13.041 10:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:13.041 10:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:13.041 10:30:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.041 10:30:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.041 10:30:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.041 10:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:13.041 10:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:13.300 00:22:13.300 10:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:13.300 10:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:13.300 10:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:13.559 10:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.559 10:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:13.559 10:30:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.559 10:30:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.559 10:30:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.559 10:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:13.559 { 00:22:13.559 "cntlid": 121, 00:22:13.559 "qid": 0, 00:22:13.559 "state": "enabled", 00:22:13.559 "thread": "nvmf_tgt_poll_group_000", 00:22:13.559 "listen_address": { 00:22:13.559 "trtype": "TCP", 00:22:13.559 "adrfam": "IPv4", 
00:22:13.559 "traddr": "10.0.0.2", 00:22:13.559 "trsvcid": "4420" 00:22:13.559 }, 00:22:13.559 "peer_address": { 00:22:13.559 "trtype": "TCP", 00:22:13.559 "adrfam": "IPv4", 00:22:13.559 "traddr": "10.0.0.1", 00:22:13.559 "trsvcid": "58012" 00:22:13.559 }, 00:22:13.560 "auth": { 00:22:13.560 "state": "completed", 00:22:13.560 "digest": "sha512", 00:22:13.560 "dhgroup": "ffdhe4096" 00:22:13.560 } 00:22:13.560 } 00:22:13.560 ]' 00:22:13.560 10:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:13.560 10:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:13.560 10:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:13.818 10:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:13.818 10:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:13.818 10:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:13.818 10:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:13.818 10:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:13.818 10:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MjE4YWMxNzZmOWVhNDdmMGIyNzA4NmIzZjY1MGIzZDVhMWUwZjU2ODVkMzc1NTNhiq/5aw==: --dhchap-ctrl-secret DHHC-1:03:ODgxOWM2M2JkYjJiOTA5N2ViZmNlZGJhOTllNzU5ZTk4ZGRiNzQ3NGI1M2JlYzY4MGIwNDM2NzA4NjU4ZDYwOLHYE6U=: 00:22:14.385 10:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:14.386 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:14.386 10:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:14.386 10:30:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.386 10:30:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.386 10:30:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.386 10:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:14.386 10:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:14.386 10:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:14.644 10:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:22:14.644 10:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:14.644 10:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:14.644 10:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:14.644 10:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:14.644 10:30:59 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:14.644 10:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:14.644 10:30:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.644 10:30:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.644 10:30:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.644 10:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:14.644 10:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:14.902 00:22:14.902 10:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:14.902 10:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:14.902 10:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:15.161 10:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:15.161 10:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:15.161 10:31:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.161 10:31:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.161 10:31:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.161 10:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:15.161 { 00:22:15.161 "cntlid": 123, 00:22:15.161 "qid": 0, 00:22:15.161 "state": "enabled", 00:22:15.161 "thread": "nvmf_tgt_poll_group_000", 00:22:15.161 "listen_address": { 00:22:15.161 "trtype": "TCP", 00:22:15.161 "adrfam": "IPv4", 00:22:15.161 "traddr": "10.0.0.2", 00:22:15.161 "trsvcid": "4420" 00:22:15.161 }, 00:22:15.161 "peer_address": { 00:22:15.161 "trtype": "TCP", 00:22:15.161 "adrfam": "IPv4", 00:22:15.161 "traddr": "10.0.0.1", 00:22:15.161 "trsvcid": "58046" 00:22:15.161 }, 00:22:15.161 "auth": { 00:22:15.161 "state": "completed", 00:22:15.161 "digest": "sha512", 00:22:15.161 "dhgroup": "ffdhe4096" 00:22:15.161 } 00:22:15.161 } 00:22:15.161 ]' 00:22:15.161 10:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:15.161 10:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:15.161 10:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:15.161 10:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:15.161 10:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:15.419 10:31:00 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:15.419 10:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:15.419 10:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:15.419 10:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YjRiNjRjN2I5MDQwYjhkNTFkYTQ2ZGVkZWU4MzJjMDUr2Uou: --dhchap-ctrl-secret DHHC-1:02:MzA4NGQyMDY4OTY4NTM0MmRkNTBhYWExNTljZDhiNjczMDc2NjEwOWM4YzRlNGYxv2n4bA==: 00:22:15.986 10:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:15.986 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:15.986 10:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:15.986 10:31:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.986 10:31:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.986 10:31:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.986 10:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:15.986 10:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:15.986 10:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:16.245 10:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:22:16.245 10:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:16.245 10:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:16.245 10:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:16.245 10:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:16.245 10:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:16.245 10:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:16.245 10:31:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.245 10:31:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.245 10:31:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.245 10:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:16.245 10:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:16.503 00:22:16.503 10:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:16.503 10:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:16.503 10:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:16.762 10:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:16.762 10:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:16.762 10:31:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.762 10:31:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.762 10:31:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.762 10:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:16.762 { 00:22:16.762 "cntlid": 125, 00:22:16.762 "qid": 0, 00:22:16.762 "state": "enabled", 00:22:16.762 "thread": "nvmf_tgt_poll_group_000", 00:22:16.762 "listen_address": { 00:22:16.762 "trtype": "TCP", 00:22:16.762 "adrfam": "IPv4", 00:22:16.762 "traddr": "10.0.0.2", 00:22:16.762 "trsvcid": "4420" 00:22:16.762 }, 00:22:16.762 "peer_address": { 00:22:16.762 "trtype": "TCP", 00:22:16.762 "adrfam": "IPv4", 00:22:16.762 "traddr": "10.0.0.1", 00:22:16.762 "trsvcid": "58072" 00:22:16.762 }, 00:22:16.762 "auth": { 00:22:16.762 "state": "completed", 00:22:16.762 "digest": "sha512", 00:22:16.762 "dhgroup": "ffdhe4096" 00:22:16.762 } 00:22:16.762 } 00:22:16.762 ]' 00:22:16.762 10:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:16.763 10:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:16.763 10:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:16.763 10:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:16.763 10:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:16.763 10:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:16.763 10:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:16.763 10:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:17.022 10:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:M2YyYTNiYjZkOTg4OTBmOGI2NjU3OWFhM2ZmNmQ5OTUzZGE1N2NmNjJjYjc0MDhk96Cs+Q==: --dhchap-ctrl-secret DHHC-1:01:Y2YxM2VmZTk5MTIyYWUxMDliYjRlODcwYmYzYTc3OGb2bxF+: 00:22:17.589 10:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:17.589 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
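
Each round also exercises the kernel initiator, as in the connect just traced: nvme-cli is handed the same credentials as literal DHHC-1 strings rather than keyring names. A condensed sketch with the flags copied from the invocation above; the secret values are placeholders standing in for the test's throwaway DHHC-1 blobs printed in the log, not real credentials.

# Kernel-initiator leg of a round; <host secret>/<ctrl secret> stand in for the DHHC-1 strings above.
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
  -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
  --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 \
  --dhchap-secret 'DHHC-1:02:<host secret>:' \
  --dhchap-ctrl-secret 'DHHC-1:01:<ctrl secret>:'
# A successful connect implies the DH-HMAC-CHAP exchange completed; tear it back down.
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
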
00:22:17.589 10:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:17.589 10:31:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.589 10:31:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.589 10:31:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.589 10:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:17.589 10:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:17.589 10:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:17.847 10:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:22:17.847 10:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:17.847 10:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:17.847 10:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:17.847 10:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:17.847 10:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:17.848 10:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:22:17.848 10:31:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.848 10:31:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.848 10:31:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.848 10:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:17.848 10:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:18.105 00:22:18.105 10:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:18.105 10:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:18.105 10:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:18.364 10:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:18.364 10:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:18.364 10:31:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.364 10:31:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:22:18.364 10:31:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.364 10:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:18.364 { 00:22:18.365 "cntlid": 127, 00:22:18.365 "qid": 0, 00:22:18.365 "state": "enabled", 00:22:18.365 "thread": "nvmf_tgt_poll_group_000", 00:22:18.365 "listen_address": { 00:22:18.365 "trtype": "TCP", 00:22:18.365 "adrfam": "IPv4", 00:22:18.365 "traddr": "10.0.0.2", 00:22:18.365 "trsvcid": "4420" 00:22:18.365 }, 00:22:18.365 "peer_address": { 00:22:18.365 "trtype": "TCP", 00:22:18.365 "adrfam": "IPv4", 00:22:18.365 "traddr": "10.0.0.1", 00:22:18.365 "trsvcid": "58104" 00:22:18.365 }, 00:22:18.365 "auth": { 00:22:18.365 "state": "completed", 00:22:18.365 "digest": "sha512", 00:22:18.365 "dhgroup": "ffdhe4096" 00:22:18.365 } 00:22:18.365 } 00:22:18.365 ]' 00:22:18.365 10:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:18.365 10:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:18.365 10:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:18.365 10:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:18.365 10:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:18.365 10:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:18.365 10:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:18.365 10:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:18.623 10:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZmVmOTA3YWVmYzBhYTEyYjUzY2E1YTY4N2FkYzE2Y2ZmYzdmNmY2ODk0ZTQxMmVlNzI0NzdiYmQ4YmI0NWNkM2xcN3c=: 00:22:19.245 10:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:19.245 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:19.245 10:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:19.245 10:31:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.245 10:31:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.245 10:31:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.245 10:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:19.245 10:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:19.245 10:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:19.245 10:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:19.246 10:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe6144 0 00:22:19.246 10:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:19.246 10:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:19.246 10:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:19.246 10:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:19.246 10:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:19.246 10:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:19.246 10:31:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.246 10:31:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.246 10:31:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.246 10:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:19.246 10:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:19.812 00:22:19.812 10:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:19.812 10:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:19.812 10:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:19.812 10:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:19.812 10:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:19.812 10:31:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.812 10:31:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.812 10:31:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.812 10:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:19.812 { 00:22:19.812 "cntlid": 129, 00:22:19.812 "qid": 0, 00:22:19.812 "state": "enabled", 00:22:19.812 "thread": "nvmf_tgt_poll_group_000", 00:22:19.812 "listen_address": { 00:22:19.812 "trtype": "TCP", 00:22:19.812 "adrfam": "IPv4", 00:22:19.812 "traddr": "10.0.0.2", 00:22:19.812 "trsvcid": "4420" 00:22:19.812 }, 00:22:19.812 "peer_address": { 00:22:19.812 "trtype": "TCP", 00:22:19.812 "adrfam": "IPv4", 00:22:19.812 "traddr": "10.0.0.1", 00:22:19.812 "trsvcid": "58136" 00:22:19.812 }, 00:22:19.812 "auth": { 00:22:19.812 "state": "completed", 00:22:19.812 "digest": "sha512", 00:22:19.812 "dhgroup": "ffdhe6144" 00:22:19.812 } 00:22:19.812 } 00:22:19.812 ]' 00:22:19.812 10:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:19.812 10:31:04 
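
What the trace is doing here, after every attach, is the same three-field inspection of the subsystem's admin queue pair: the negotiated digest and DH group must match what this round configured, and the authentication state must read completed. A minimal sketch of that verification, reusing the jq filters from the trace (rpc path and subsystem NQN as in this run):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
# Negotiated parameters must be the ones this round allowed (nonzero exit here fails the test) ...
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
# ... and the DH-HMAC-CHAP transaction on the qid 0 queue pair must have finished.
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
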
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:19.812 10:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:20.071 10:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:20.071 10:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:20.071 10:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:20.071 10:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:20.071 10:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:20.071 10:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MjE4YWMxNzZmOWVhNDdmMGIyNzA4NmIzZjY1MGIzZDVhMWUwZjU2ODVkMzc1NTNhiq/5aw==: --dhchap-ctrl-secret DHHC-1:03:ODgxOWM2M2JkYjJiOTA5N2ViZmNlZGJhOTllNzU5ZTk4ZGRiNzQ3NGI1M2JlYzY4MGIwNDM2NzA4NjU4ZDYwOLHYE6U=: 00:22:20.636 10:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:20.636 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:20.636 10:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:20.636 10:31:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.636 10:31:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.636 10:31:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.636 10:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:20.636 10:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:20.636 10:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:20.894 10:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:22:20.894 10:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:20.894 10:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:20.895 10:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:20.895 10:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:20.895 10:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:20.895 10:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:20.895 10:31:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.895 10:31:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.895 10:31:05 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.895 10:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:20.895 10:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:21.153 00:22:21.153 10:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:21.153 10:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:21.153 10:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:21.410 10:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.410 10:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:21.410 10:31:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.410 10:31:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.411 10:31:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.411 10:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:21.411 { 00:22:21.411 "cntlid": 131, 00:22:21.411 "qid": 0, 00:22:21.411 "state": "enabled", 00:22:21.411 "thread": "nvmf_tgt_poll_group_000", 00:22:21.411 "listen_address": { 00:22:21.411 "trtype": "TCP", 00:22:21.411 "adrfam": "IPv4", 00:22:21.411 "traddr": "10.0.0.2", 00:22:21.411 "trsvcid": "4420" 00:22:21.411 }, 00:22:21.411 "peer_address": { 00:22:21.411 "trtype": "TCP", 00:22:21.411 "adrfam": "IPv4", 00:22:21.411 "traddr": "10.0.0.1", 00:22:21.411 "trsvcid": "58158" 00:22:21.411 }, 00:22:21.411 "auth": { 00:22:21.411 "state": "completed", 00:22:21.411 "digest": "sha512", 00:22:21.411 "dhgroup": "ffdhe6144" 00:22:21.411 } 00:22:21.411 } 00:22:21.411 ]' 00:22:21.411 10:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:21.411 10:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:21.411 10:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:21.668 10:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:21.668 10:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:21.668 10:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:21.668 10:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:21.668 10:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:21.668 10:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YjRiNjRjN2I5MDQwYjhkNTFkYTQ2ZGVkZWU4MzJjMDUr2Uou: --dhchap-ctrl-secret DHHC-1:02:MzA4NGQyMDY4OTY4NTM0MmRkNTBhYWExNTljZDhiNjczMDc2NjEwOWM4YzRlNGYxv2n4bA==: 00:22:22.233 10:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:22.490 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:22.490 10:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:22.490 10:31:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.490 10:31:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.490 10:31:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.490 10:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:22.490 10:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:22.490 10:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:22.490 10:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:22:22.490 10:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:22.491 10:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:22.491 10:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:22.491 10:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:22.491 10:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:22.491 10:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:22.491 10:31:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.491 10:31:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.491 10:31:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.491 10:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:22.491 10:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:23.056 00:22:23.056 10:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:23.056 10:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:23.056 10:31:07 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:23.056 10:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.056 10:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:23.056 10:31:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.056 10:31:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.056 10:31:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.056 10:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:23.056 { 00:22:23.056 "cntlid": 133, 00:22:23.056 "qid": 0, 00:22:23.056 "state": "enabled", 00:22:23.056 "thread": "nvmf_tgt_poll_group_000", 00:22:23.056 "listen_address": { 00:22:23.056 "trtype": "TCP", 00:22:23.056 "adrfam": "IPv4", 00:22:23.056 "traddr": "10.0.0.2", 00:22:23.056 "trsvcid": "4420" 00:22:23.056 }, 00:22:23.056 "peer_address": { 00:22:23.056 "trtype": "TCP", 00:22:23.056 "adrfam": "IPv4", 00:22:23.056 "traddr": "10.0.0.1", 00:22:23.056 "trsvcid": "59504" 00:22:23.056 }, 00:22:23.056 "auth": { 00:22:23.056 "state": "completed", 00:22:23.056 "digest": "sha512", 00:22:23.056 "dhgroup": "ffdhe6144" 00:22:23.056 } 00:22:23.056 } 00:22:23.056 ]' 00:22:23.056 10:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:23.056 10:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:23.056 10:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:23.056 10:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:23.056 10:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:23.313 10:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:23.313 10:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:23.313 10:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:23.313 10:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:M2YyYTNiYjZkOTg4OTBmOGI2NjU3OWFhM2ZmNmQ5OTUzZGE1N2NmNjJjYjc0MDhk96Cs+Q==: --dhchap-ctrl-secret DHHC-1:01:Y2YxM2VmZTk5MTIyYWUxMDliYjRlODcwYmYzYTc3OGb2bxF+: 00:22:23.880 10:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:23.880 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:23.880 10:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:23.880 10:31:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.880 10:31:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.880 10:31:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.880 10:31:08 
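
The passes that follow repeat this pattern; going by the @92/@93/@94 markers, the script is sweeping every key index over every DH group at this digest, roughly as below (restricted to what is visible in this part of the log, and paraphrased rather than copied from the script):

# Shape of the sweep; the body is the add_host / attach / verify / nvme-connect round sketched earlier.
for dhgroup in ffdhe4096 ffdhe6144 ffdhe8192; do
  for keyid in 0 1 2 3; do
    : # one connect_authenticate round with sha512, $dhgroup and key$keyid
  done
done
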
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:23.880 10:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:23.880 10:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:24.140 10:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:22:24.140 10:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:24.140 10:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:24.140 10:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:24.140 10:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:24.140 10:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:24.140 10:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:22:24.140 10:31:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.140 10:31:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.140 10:31:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.140 10:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:24.140 10:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:24.399 00:22:24.399 10:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:24.399 10:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:24.399 10:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:24.658 10:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:24.658 10:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:24.658 10:31:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.658 10:31:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.658 10:31:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.658 10:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:24.658 { 00:22:24.658 "cntlid": 135, 00:22:24.658 "qid": 0, 00:22:24.658 "state": "enabled", 00:22:24.658 "thread": "nvmf_tgt_poll_group_000", 00:22:24.658 "listen_address": { 00:22:24.658 "trtype": "TCP", 00:22:24.658 "adrfam": "IPv4", 00:22:24.658 "traddr": "10.0.0.2", 00:22:24.658 "trsvcid": "4420" 00:22:24.658 }, 
00:22:24.658 "peer_address": { 00:22:24.658 "trtype": "TCP", 00:22:24.658 "adrfam": "IPv4", 00:22:24.658 "traddr": "10.0.0.1", 00:22:24.658 "trsvcid": "59532" 00:22:24.658 }, 00:22:24.658 "auth": { 00:22:24.658 "state": "completed", 00:22:24.658 "digest": "sha512", 00:22:24.658 "dhgroup": "ffdhe6144" 00:22:24.658 } 00:22:24.658 } 00:22:24.658 ]' 00:22:24.658 10:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:24.658 10:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:24.658 10:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:24.917 10:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:24.917 10:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:24.917 10:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:24.917 10:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:24.917 10:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:24.917 10:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZmVmOTA3YWVmYzBhYTEyYjUzY2E1YTY4N2FkYzE2Y2ZmYzdmNmY2ODk0ZTQxMmVlNzI0NzdiYmQ4YmI0NWNkM2xcN3c=: 00:22:25.485 10:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:25.485 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:25.485 10:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:25.485 10:31:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.486 10:31:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.486 10:31:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.486 10:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:25.486 10:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:25.486 10:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:25.486 10:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:25.745 10:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:22:25.745 10:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:25.745 10:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:25.745 10:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:25.745 10:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:25.745 10:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:22:25.745 10:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:25.745 10:31:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.745 10:31:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.745 10:31:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.745 10:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:25.745 10:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:26.313 00:22:26.313 10:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:26.313 10:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:26.313 10:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:26.573 10:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:26.573 10:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:26.573 10:31:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.573 10:31:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.573 10:31:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.573 10:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:26.573 { 00:22:26.573 "cntlid": 137, 00:22:26.573 "qid": 0, 00:22:26.573 "state": "enabled", 00:22:26.573 "thread": "nvmf_tgt_poll_group_000", 00:22:26.573 "listen_address": { 00:22:26.573 "trtype": "TCP", 00:22:26.573 "adrfam": "IPv4", 00:22:26.573 "traddr": "10.0.0.2", 00:22:26.573 "trsvcid": "4420" 00:22:26.573 }, 00:22:26.573 "peer_address": { 00:22:26.573 "trtype": "TCP", 00:22:26.573 "adrfam": "IPv4", 00:22:26.573 "traddr": "10.0.0.1", 00:22:26.573 "trsvcid": "59572" 00:22:26.573 }, 00:22:26.573 "auth": { 00:22:26.573 "state": "completed", 00:22:26.573 "digest": "sha512", 00:22:26.573 "dhgroup": "ffdhe8192" 00:22:26.573 } 00:22:26.573 } 00:22:26.573 ]' 00:22:26.573 10:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:26.573 10:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:26.573 10:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:26.573 10:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:26.573 10:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:26.573 10:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:26.573 10:31:11 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:26.573 10:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:26.832 10:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MjE4YWMxNzZmOWVhNDdmMGIyNzA4NmIzZjY1MGIzZDVhMWUwZjU2ODVkMzc1NTNhiq/5aw==: --dhchap-ctrl-secret DHHC-1:03:ODgxOWM2M2JkYjJiOTA5N2ViZmNlZGJhOTllNzU5ZTk4ZGRiNzQ3NGI1M2JlYzY4MGIwNDM2NzA4NjU4ZDYwOLHYE6U=: 00:22:27.399 10:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:27.399 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:27.399 10:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:27.399 10:31:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.399 10:31:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.399 10:31:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.399 10:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:27.399 10:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:27.399 10:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:27.658 10:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:22:27.658 10:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:27.658 10:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:27.658 10:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:27.658 10:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:27.658 10:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:27.658 10:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:27.658 10:31:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.658 10:31:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.658 10:31:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.658 10:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:27.658 10:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:27.917 00:22:27.917 10:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:27.917 10:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:27.917 10:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:28.200 10:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.200 10:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:28.200 10:31:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.200 10:31:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.200 10:31:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.200 10:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:28.200 { 00:22:28.200 "cntlid": 139, 00:22:28.200 "qid": 0, 00:22:28.200 "state": "enabled", 00:22:28.200 "thread": "nvmf_tgt_poll_group_000", 00:22:28.200 "listen_address": { 00:22:28.200 "trtype": "TCP", 00:22:28.200 "adrfam": "IPv4", 00:22:28.200 "traddr": "10.0.0.2", 00:22:28.200 "trsvcid": "4420" 00:22:28.200 }, 00:22:28.200 "peer_address": { 00:22:28.200 "trtype": "TCP", 00:22:28.200 "adrfam": "IPv4", 00:22:28.200 "traddr": "10.0.0.1", 00:22:28.200 "trsvcid": "59606" 00:22:28.200 }, 00:22:28.200 "auth": { 00:22:28.200 "state": "completed", 00:22:28.200 "digest": "sha512", 00:22:28.200 "dhgroup": "ffdhe8192" 00:22:28.200 } 00:22:28.200 } 00:22:28.200 ]' 00:22:28.200 10:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:28.200 10:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:28.200 10:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:28.200 10:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:28.200 10:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:28.459 10:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:28.459 10:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:28.459 10:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:28.459 10:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YjRiNjRjN2I5MDQwYjhkNTFkYTQ2ZGVkZWU4MzJjMDUr2Uou: --dhchap-ctrl-secret DHHC-1:02:MzA4NGQyMDY4OTY4NTM0MmRkNTBhYWExNTljZDhiNjczMDc2NjEwOWM4YzRlNGYxv2n4bA==: 00:22:29.028 10:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:29.028 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:29.028 10:31:13 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:29.028 10:31:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.028 10:31:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.028 10:31:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.028 10:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:29.028 10:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:29.028 10:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:29.287 10:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:22:29.287 10:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:29.287 10:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:29.287 10:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:29.287 10:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:29.287 10:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:29.287 10:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:29.287 10:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.287 10:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.287 10:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.287 10:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:29.287 10:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:29.856 00:22:29.856 10:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:29.856 10:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:29.856 10:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:29.856 10:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:29.856 10:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:29.856 10:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.856 10:31:14 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:30.116 10:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.116 10:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:30.116 { 00:22:30.116 "cntlid": 141, 00:22:30.116 "qid": 0, 00:22:30.116 "state": "enabled", 00:22:30.116 "thread": "nvmf_tgt_poll_group_000", 00:22:30.116 "listen_address": { 00:22:30.116 "trtype": "TCP", 00:22:30.116 "adrfam": "IPv4", 00:22:30.116 "traddr": "10.0.0.2", 00:22:30.116 "trsvcid": "4420" 00:22:30.116 }, 00:22:30.116 "peer_address": { 00:22:30.116 "trtype": "TCP", 00:22:30.116 "adrfam": "IPv4", 00:22:30.116 "traddr": "10.0.0.1", 00:22:30.116 "trsvcid": "59628" 00:22:30.116 }, 00:22:30.116 "auth": { 00:22:30.116 "state": "completed", 00:22:30.116 "digest": "sha512", 00:22:30.116 "dhgroup": "ffdhe8192" 00:22:30.116 } 00:22:30.116 } 00:22:30.116 ]' 00:22:30.116 10:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:30.116 10:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:30.116 10:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:30.116 10:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:30.116 10:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:30.116 10:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:30.116 10:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:30.116 10:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:30.375 10:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:M2YyYTNiYjZkOTg4OTBmOGI2NjU3OWFhM2ZmNmQ5OTUzZGE1N2NmNjJjYjc0MDhk96Cs+Q==: --dhchap-ctrl-secret DHHC-1:01:Y2YxM2VmZTk5MTIyYWUxMDliYjRlODcwYmYzYTc3OGb2bxF+: 00:22:30.943 10:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:30.943 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:30.943 10:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:30.943 10:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.943 10:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.943 10:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.944 10:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:30.944 10:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:30.944 10:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:30.944 10:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate 
sha512 ffdhe8192 3 00:22:30.944 10:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:30.944 10:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:30.944 10:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:30.944 10:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:30.944 10:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:30.944 10:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:22:30.944 10:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.944 10:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.944 10:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.944 10:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:30.944 10:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:31.512 00:22:31.512 10:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:31.512 10:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:31.512 10:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:31.771 10:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:31.771 10:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:31.771 10:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.771 10:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.771 10:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.771 10:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:31.771 { 00:22:31.771 "cntlid": 143, 00:22:31.771 "qid": 0, 00:22:31.771 "state": "enabled", 00:22:31.771 "thread": "nvmf_tgt_poll_group_000", 00:22:31.771 "listen_address": { 00:22:31.771 "trtype": "TCP", 00:22:31.771 "adrfam": "IPv4", 00:22:31.771 "traddr": "10.0.0.2", 00:22:31.771 "trsvcid": "4420" 00:22:31.771 }, 00:22:31.771 "peer_address": { 00:22:31.771 "trtype": "TCP", 00:22:31.771 "adrfam": "IPv4", 00:22:31.771 "traddr": "10.0.0.1", 00:22:31.771 "trsvcid": "59642" 00:22:31.771 }, 00:22:31.771 "auth": { 00:22:31.771 "state": "completed", 00:22:31.771 "digest": "sha512", 00:22:31.771 "dhgroup": "ffdhe8192" 00:22:31.771 } 00:22:31.771 } 00:22:31.771 ]' 00:22:31.771 10:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:31.771 10:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:31.771 
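
This key3 round, like the earlier ones at index 3, is the unidirectional case: the script has no controller key at that index, so the ${ckeys[$3]:+...} expansion above contributed nothing and both add_host and the attach were issued with the host key only; the nvme connect below likewise carries just --dhchap-secret. A sketch of the difference against the bidirectional rounds (same paths, addresses and NQNs as this run):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py; hostsock=/var/tmp/host.sock
# key3 rounds: host key only, so the controller is not authenticated back to the host.
$rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
  nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3
$rpc -s $hostsock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
  -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
  -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3   # note: no --dhchap-ctrlr-key
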
10:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:31.771 10:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:31.771 10:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:31.771 10:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:31.771 10:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:31.771 10:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:32.030 10:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZmVmOTA3YWVmYzBhYTEyYjUzY2E1YTY4N2FkYzE2Y2ZmYzdmNmY2ODk0ZTQxMmVlNzI0NzdiYmQ4YmI0NWNkM2xcN3c=: 00:22:32.599 10:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:32.599 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:32.599 10:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:32.599 10:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.599 10:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.599 10:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.599 10:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:22:32.599 10:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:22:32.599 10:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:22:32.599 10:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:32.599 10:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:32.599 10:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:32.863 10:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:22:32.863 10:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:32.863 10:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:32.863 10:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:32.863 10:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:32.863 10:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:32.863 10:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:22:32.863 10:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.863 10:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.863 10:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.863 10:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:32.863 10:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:33.121 00:22:33.121 10:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:33.121 10:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:33.121 10:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:33.381 10:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.381 10:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:33.381 10:31:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.381 10:31:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.381 10:31:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.381 10:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:33.381 { 00:22:33.381 "cntlid": 145, 00:22:33.381 "qid": 0, 00:22:33.381 "state": "enabled", 00:22:33.381 "thread": "nvmf_tgt_poll_group_000", 00:22:33.381 "listen_address": { 00:22:33.381 "trtype": "TCP", 00:22:33.381 "adrfam": "IPv4", 00:22:33.381 "traddr": "10.0.0.2", 00:22:33.381 "trsvcid": "4420" 00:22:33.381 }, 00:22:33.381 "peer_address": { 00:22:33.381 "trtype": "TCP", 00:22:33.381 "adrfam": "IPv4", 00:22:33.381 "traddr": "10.0.0.1", 00:22:33.381 "trsvcid": "56102" 00:22:33.381 }, 00:22:33.381 "auth": { 00:22:33.381 "state": "completed", 00:22:33.381 "digest": "sha512", 00:22:33.381 "dhgroup": "ffdhe8192" 00:22:33.381 } 00:22:33.381 } 00:22:33.381 ]' 00:22:33.381 10:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:33.381 10:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:33.381 10:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:33.671 10:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:33.671 10:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:33.671 10:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:33.671 10:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:33.671 10:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:33.671 10:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MjE4YWMxNzZmOWVhNDdmMGIyNzA4NmIzZjY1MGIzZDVhMWUwZjU2ODVkMzc1NTNhiq/5aw==: --dhchap-ctrl-secret DHHC-1:03:ODgxOWM2M2JkYjJiOTA5N2ViZmNlZGJhOTllNzU5ZTk4ZGRiNzQ3NGI1M2JlYzY4MGIwNDM2NzA4NjU4ZDYwOLHYE6U=: 00:22:34.239 10:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:34.239 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:34.239 10:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:34.239 10:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.239 10:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.239 10:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.239 10:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:22:34.239 10:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.239 10:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.239 10:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.239 10:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:34.239 10:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:34.239 10:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:34.239 10:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:34.239 10:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:34.239 10:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:34.239 10:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:34.239 10:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:34.239 10:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key2 00:22:34.807 request: 00:22:34.808 { 00:22:34.808 "name": "nvme0", 00:22:34.808 "trtype": "tcp", 00:22:34.808 "traddr": "10.0.0.2", 00:22:34.808 "adrfam": "ipv4", 00:22:34.808 "trsvcid": "4420", 00:22:34.808 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:34.808 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:34.808 "prchk_reftag": false, 00:22:34.808 "prchk_guard": false, 00:22:34.808 "hdgst": false, 00:22:34.808 "ddgst": false, 00:22:34.808 "dhchap_key": "key2", 00:22:34.808 "method": "bdev_nvme_attach_controller", 00:22:34.808 "req_id": 1 00:22:34.808 } 00:22:34.808 Got JSON-RPC error response 00:22:34.808 response: 00:22:34.808 { 00:22:34.808 "code": -5, 00:22:34.808 "message": "Input/output error" 00:22:34.808 } 00:22:34.808 10:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:34.808 10:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:34.808 10:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:34.808 10:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:34.808 10:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:34.808 10:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.808 10:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.808 10:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.808 10:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:34.808 10:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.808 10:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.808 10:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.808 10:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:34.808 10:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:34.808 10:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:34.808 10:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:34.808 10:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:34.808 10:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:34.808 10:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:34.808 10:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:34.808 10:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:35.376 request: 00:22:35.376 { 00:22:35.376 "name": "nvme0", 00:22:35.376 "trtype": "tcp", 00:22:35.376 "traddr": "10.0.0.2", 00:22:35.376 "adrfam": "ipv4", 00:22:35.376 "trsvcid": "4420", 00:22:35.376 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:35.376 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:35.376 "prchk_reftag": false, 00:22:35.376 "prchk_guard": false, 00:22:35.376 "hdgst": false, 00:22:35.376 "ddgst": false, 00:22:35.376 "dhchap_key": "key1", 00:22:35.376 "dhchap_ctrlr_key": "ckey2", 00:22:35.376 "method": "bdev_nvme_attach_controller", 00:22:35.376 "req_id": 1 00:22:35.376 } 00:22:35.376 Got JSON-RPC error response 00:22:35.376 response: 00:22:35.376 { 00:22:35.376 "code": -5, 00:22:35.376 "message": "Input/output error" 00:22:35.376 } 00:22:35.376 10:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:35.376 10:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:35.376 10:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:35.376 10:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:35.376 10:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:35.376 10:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.376 10:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.376 10:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.376 10:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:22:35.376 10:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.376 10:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.376 10:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.376 10:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:35.376 10:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:35.376 10:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:35.376 10:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local 
arg=hostrpc 00:22:35.376 10:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:35.376 10:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:35.376 10:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:35.376 10:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:35.376 10:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:35.635 request: 00:22:35.635 { 00:22:35.635 "name": "nvme0", 00:22:35.635 "trtype": "tcp", 00:22:35.635 "traddr": "10.0.0.2", 00:22:35.635 "adrfam": "ipv4", 00:22:35.635 "trsvcid": "4420", 00:22:35.635 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:35.635 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:35.635 "prchk_reftag": false, 00:22:35.635 "prchk_guard": false, 00:22:35.635 "hdgst": false, 00:22:35.635 "ddgst": false, 00:22:35.635 "dhchap_key": "key1", 00:22:35.635 "dhchap_ctrlr_key": "ckey1", 00:22:35.635 "method": "bdev_nvme_attach_controller", 00:22:35.635 "req_id": 1 00:22:35.635 } 00:22:35.635 Got JSON-RPC error response 00:22:35.635 response: 00:22:35.635 { 00:22:35.635 "code": -5, 00:22:35.635 "message": "Input/output error" 00:22:35.635 } 00:22:35.635 10:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:35.635 10:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:35.635 10:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:35.635 10:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:35.635 10:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:35.635 10:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.635 10:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.635 10:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.635 10:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 2418525 00:22:35.635 10:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 2418525 ']' 00:22:35.635 10:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 2418525 00:22:35.635 10:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:22:35.635 10:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:35.635 10:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2418525 00:22:35.635 10:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:35.635 10:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' 
reactor_0 = sudo ']' 00:22:35.635 10:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2418525' 00:22:35.635 killing process with pid 2418525 00:22:35.635 10:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 2418525 00:22:35.635 10:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 2418525 00:22:35.895 10:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:35.895 10:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:35.895 10:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:35.895 10:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.895 10:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2439626 00:22:35.895 10:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2439626 00:22:35.895 10:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:35.895 10:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2439626 ']' 00:22:35.895 10:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:35.895 10:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:35.895 10:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:35.895 10:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:35.895 10:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.155 10:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:36.155 10:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:22:36.155 10:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:36.155 10:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:36.155 10:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.155 10:31:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:36.155 10:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:36.155 10:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 2439626 00:22:36.155 10:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2439626 ']' 00:22:36.155 10:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:36.155 10:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:36.155 10:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:36.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:36.155 10:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:36.155 10:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.414 10:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:36.414 10:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:22:36.414 10:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:22:36.414 10:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.414 10:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.414 10:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.414 10:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:22:36.414 10:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:36.414 10:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:36.414 10:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:36.414 10:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:36.414 10:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:36.414 10:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:22:36.414 10:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.414 10:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.414 10:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.414 10:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:36.414 10:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:36.983 00:22:36.983 10:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:36.983 10:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:36.983 10:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:37.242 10:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:37.242 10:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:37.242 10:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.242 10:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.242 10:31:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.242 10:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:37.242 { 00:22:37.242 
"cntlid": 1, 00:22:37.242 "qid": 0, 00:22:37.242 "state": "enabled", 00:22:37.242 "thread": "nvmf_tgt_poll_group_000", 00:22:37.242 "listen_address": { 00:22:37.242 "trtype": "TCP", 00:22:37.242 "adrfam": "IPv4", 00:22:37.242 "traddr": "10.0.0.2", 00:22:37.242 "trsvcid": "4420" 00:22:37.242 }, 00:22:37.242 "peer_address": { 00:22:37.242 "trtype": "TCP", 00:22:37.242 "adrfam": "IPv4", 00:22:37.242 "traddr": "10.0.0.1", 00:22:37.242 "trsvcid": "56174" 00:22:37.242 }, 00:22:37.242 "auth": { 00:22:37.242 "state": "completed", 00:22:37.242 "digest": "sha512", 00:22:37.242 "dhgroup": "ffdhe8192" 00:22:37.242 } 00:22:37.242 } 00:22:37.242 ]' 00:22:37.242 10:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:37.242 10:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:37.242 10:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:37.242 10:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:37.242 10:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:37.242 10:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:37.242 10:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:37.242 10:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:37.501 10:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZmVmOTA3YWVmYzBhYTEyYjUzY2E1YTY4N2FkYzE2Y2ZmYzdmNmY2ODk0ZTQxMmVlNzI0NzdiYmQ4YmI0NWNkM2xcN3c=: 00:22:38.070 10:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:38.070 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:38.070 10:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:38.070 10:31:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.070 10:31:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.070 10:31:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.070 10:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:22:38.070 10:31:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.070 10:31:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.070 10:31:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.070 10:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:38.070 10:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:38.329 10:31:23 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:38.329 10:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:38.329 10:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:38.329 10:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:38.329 10:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:38.329 10:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:38.329 10:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:38.329 10:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:38.329 10:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:38.329 request: 00:22:38.329 { 00:22:38.329 "name": "nvme0", 00:22:38.329 "trtype": "tcp", 00:22:38.329 "traddr": "10.0.0.2", 00:22:38.329 "adrfam": "ipv4", 00:22:38.329 "trsvcid": "4420", 00:22:38.329 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:38.329 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:38.329 "prchk_reftag": false, 00:22:38.329 "prchk_guard": false, 00:22:38.329 "hdgst": false, 00:22:38.329 "ddgst": false, 00:22:38.329 "dhchap_key": "key3", 00:22:38.329 "method": "bdev_nvme_attach_controller", 00:22:38.329 "req_id": 1 00:22:38.329 } 00:22:38.329 Got JSON-RPC error response 00:22:38.329 response: 00:22:38.329 { 00:22:38.329 "code": -5, 00:22:38.329 "message": "Input/output error" 00:22:38.329 } 00:22:38.329 10:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:38.329 10:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:38.329 10:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:38.329 10:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:38.329 10:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:22:38.329 10:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:22:38.329 10:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:38.329 10:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:38.588 10:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:38.588 10:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:38.588 10:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:38.588 10:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:38.588 10:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:38.588 10:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:38.588 10:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:38.588 10:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:38.588 10:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:38.846 request: 00:22:38.846 { 00:22:38.846 "name": "nvme0", 00:22:38.846 "trtype": "tcp", 00:22:38.846 "traddr": "10.0.0.2", 00:22:38.846 "adrfam": "ipv4", 00:22:38.846 "trsvcid": "4420", 00:22:38.846 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:38.846 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:38.846 "prchk_reftag": false, 00:22:38.846 "prchk_guard": false, 00:22:38.846 "hdgst": false, 00:22:38.846 "ddgst": false, 00:22:38.846 "dhchap_key": "key3", 00:22:38.846 "method": "bdev_nvme_attach_controller", 00:22:38.846 "req_id": 1 00:22:38.846 } 00:22:38.846 Got JSON-RPC error response 00:22:38.846 response: 00:22:38.846 { 00:22:38.846 "code": -5, 00:22:38.846 "message": "Input/output error" 00:22:38.846 } 00:22:38.846 10:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:38.846 10:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:38.846 10:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:38.846 10:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:38.846 10:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:22:38.846 10:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:22:38.846 10:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:22:38.846 10:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:38.846 10:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:38.846 10:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:39.104 10:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:39.104 10:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.104 10:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.104 10:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.104 10:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:39.104 10:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.104 10:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.104 10:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.104 10:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:39.104 10:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:39.104 10:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:39.104 10:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:39.104 10:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:39.104 10:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:39.104 10:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:39.104 10:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:39.104 10:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:39.104 request: 00:22:39.105 { 00:22:39.105 "name": "nvme0", 00:22:39.105 "trtype": "tcp", 00:22:39.105 "traddr": "10.0.0.2", 00:22:39.105 "adrfam": "ipv4", 00:22:39.105 "trsvcid": "4420", 00:22:39.105 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:39.105 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:39.105 "prchk_reftag": false, 00:22:39.105 "prchk_guard": false, 00:22:39.105 "hdgst": false, 00:22:39.105 "ddgst": false, 00:22:39.105 
"dhchap_key": "key0", 00:22:39.105 "dhchap_ctrlr_key": "key1", 00:22:39.105 "method": "bdev_nvme_attach_controller", 00:22:39.105 "req_id": 1 00:22:39.105 } 00:22:39.105 Got JSON-RPC error response 00:22:39.105 response: 00:22:39.105 { 00:22:39.105 "code": -5, 00:22:39.105 "message": "Input/output error" 00:22:39.105 } 00:22:39.105 10:31:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:39.105 10:31:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:39.105 10:31:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:39.105 10:31:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:39.105 10:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:39.105 10:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:39.362 00:22:39.362 10:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:22:39.362 10:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:22:39.362 10:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:39.620 10:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:39.620 10:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:39.620 10:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:39.879 10:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:22:39.879 10:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:22:39.879 10:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2418545 00:22:39.879 10:31:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 2418545 ']' 00:22:39.879 10:31:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 2418545 00:22:39.879 10:31:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:22:39.879 10:31:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:39.879 10:31:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2418545 00:22:39.879 10:31:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:39.879 10:31:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:39.879 10:31:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2418545' 00:22:39.879 killing process with pid 2418545 00:22:39.879 10:31:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 2418545 00:22:39.879 10:31:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 2418545 
00:22:40.138 10:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:40.138 10:31:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:40.138 10:31:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:22:40.138 10:31:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:40.138 10:31:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:22:40.138 10:31:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:40.138 10:31:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:40.138 rmmod nvme_tcp 00:22:40.138 rmmod nvme_fabrics 00:22:40.138 rmmod nvme_keyring 00:22:40.138 10:31:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:40.138 10:31:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:22:40.138 10:31:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:22:40.138 10:31:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 2439626 ']' 00:22:40.138 10:31:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 2439626 00:22:40.138 10:31:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 2439626 ']' 00:22:40.138 10:31:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 2439626 00:22:40.138 10:31:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:22:40.138 10:31:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:40.138 10:31:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2439626 00:22:40.138 10:31:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:40.138 10:31:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:40.138 10:31:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2439626' 00:22:40.138 killing process with pid 2439626 00:22:40.138 10:31:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 2439626 00:22:40.138 10:31:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 2439626 00:22:40.397 10:31:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:40.397 10:31:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:40.397 10:31:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:40.397 10:31:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:40.397 10:31:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:40.397 10:31:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:40.397 10:31:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:40.397 10:31:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:42.932 10:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:42.932 10:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.Adg /tmp/spdk.key-sha256.sxG /tmp/spdk.key-sha384.0Kv /tmp/spdk.key-sha512.EHg /tmp/spdk.key-sha512.OJ9 /tmp/spdk.key-sha384.1I0 /tmp/spdk.key-sha256.RYD '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:42.932 00:22:42.932 real 2m11.953s 00:22:42.932 user 5m3.350s 00:22:42.932 sys 0m21.124s 00:22:42.932 10:31:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:42.932 10:31:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.932 ************************************ 00:22:42.932 END TEST nvmf_auth_target 00:22:42.932 ************************************ 00:22:42.932 10:31:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:42.932 10:31:27 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:22:42.932 10:31:27 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:42.932 10:31:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:22:42.932 10:31:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:42.932 10:31:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:42.932 ************************************ 00:22:42.932 START TEST nvmf_bdevio_no_huge 00:22:42.932 ************************************ 00:22:42.932 10:31:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:42.932 * Looking for test storage... 00:22:42.932 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:42.932 10:31:27 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:42.932 10:31:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:42.932 10:31:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:42.932 10:31:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:42.932 10:31:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:42.932 10:31:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:42.932 10:31:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:42.932 10:31:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:42.932 10:31:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:42.932 10:31:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:42.932 10:31:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:42.932 10:31:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:42.932 10:31:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:42.932 10:31:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:42.932 10:31:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:42.932 10:31:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:42.932 10:31:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:42.932 10:31:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:22:42.932 10:31:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:42.932 10:31:27 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:42.932 10:31:27 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:42.932 10:31:27 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:42.932 10:31:27 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.932 10:31:27 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.932 10:31:27 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.932 10:31:27 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:42.932 10:31:27 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.932 10:31:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:22:42.932 10:31:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:42.932 10:31:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:42.932 10:31:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:42.932 10:31:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:42.932 10:31:27 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:42.932 10:31:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:42.932 10:31:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:42.932 10:31:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:42.932 10:31:27 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:42.932 10:31:27 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:42.932 10:31:27 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:42.932 10:31:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:42.932 10:31:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:42.932 10:31:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:42.932 10:31:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:42.932 10:31:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:42.932 10:31:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:42.932 10:31:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:42.932 10:31:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:42.932 10:31:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:42.932 10:31:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:42.932 10:31:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:22:42.932 10:31:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:48.207 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:48.207 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:48.207 
10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:48.207 Found net devices under 0000:86:00.0: cvl_0_0 00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:48.207 Found net devices under 0000:86:00.1: cvl_0_1 00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:48.207 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:48.208 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:48.208 10:31:32 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:48.208 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:48.208 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:48.208 10:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:48.208 10:31:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:48.208 10:31:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:48.208 10:31:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:48.208 10:31:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:48.466 10:31:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:48.466 10:31:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:48.466 10:31:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:48.466 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:48.466 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:22:48.466 00:22:48.466 --- 10.0.0.2 ping statistics --- 00:22:48.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:48.466 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:22:48.466 10:31:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:48.466 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:48.466 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:22:48.466 00:22:48.466 --- 10.0.0.1 ping statistics --- 00:22:48.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:48.466 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:22:48.466 10:31:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:48.466 10:31:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:22:48.466 10:31:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:48.466 10:31:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:48.466 10:31:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:48.466 10:31:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:48.466 10:31:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:48.466 10:31:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:48.466 10:31:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:48.466 10:31:33 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:48.466 10:31:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:48.466 10:31:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:48.466 10:31:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:48.466 10:31:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=2443832 00:22:48.466 10:31:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 
2443832 00:22:48.466 10:31:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:48.466 10:31:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 2443832 ']' 00:22:48.466 10:31:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:48.466 10:31:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:48.466 10:31:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:48.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:48.466 10:31:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:48.466 10:31:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:48.466 [2024-07-14 10:31:33.322834] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:22:48.466 [2024-07-14 10:31:33.322886] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:48.466 [2024-07-14 10:31:33.385962] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:48.724 [2024-07-14 10:31:33.452014] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:48.724 [2024-07-14 10:31:33.452049] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:48.724 [2024-07-14 10:31:33.452056] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:48.724 [2024-07-14 10:31:33.452062] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:48.724 [2024-07-14 10:31:33.452068] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
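Note: the entries above cover starting the NVMe-oF target for the no-huge bdevio test. A minimal sketch of that launch pattern, with the flags taken from this trace (the relative binary path is illustrative; the run itself uses the absolute workspace path shown above):

  # run nvmf_tgt inside the test namespace without hugepages:
  #   --no-huge -s 1024  use 1024 MB of ordinary memory instead of hugepages
  #   -m 0x78            pin reactors to cores 3-6 (matching the reactor messages above)
  #   -i 0 -e 0xFFFF     shared-memory id and tracepoint group mask
  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &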
00:22:48.724 [2024-07-14 10:31:33.452131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:22:48.724 [2024-07-14 10:31:33.452254] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:22:48.724 [2024-07-14 10:31:33.452361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:48.724 [2024-07-14 10:31:33.452363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:22:49.291 10:31:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:49.291 10:31:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:22:49.291 10:31:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:49.291 10:31:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:49.291 10:31:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:49.291 10:31:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:49.291 10:31:34 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:49.291 10:31:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.291 10:31:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:49.291 [2024-07-14 10:31:34.180145] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:49.291 10:31:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.291 10:31:34 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:49.291 10:31:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.291 10:31:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:49.291 Malloc0 00:22:49.291 10:31:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.291 10:31:34 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:49.291 10:31:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.291 10:31:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:49.291 10:31:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.291 10:31:34 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:49.291 10:31:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.291 10:31:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:49.291 10:31:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.291 10:31:34 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:49.291 10:31:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.291 10:31:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:49.291 [2024-07-14 10:31:34.224392] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:49.291 10:31:34 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.291 10:31:34 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:49.291 10:31:34 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:49.291 10:31:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:22:49.291 10:31:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:22:49.291 10:31:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:49.291 10:31:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:49.291 { 00:22:49.291 "params": { 00:22:49.291 "name": "Nvme$subsystem", 00:22:49.291 "trtype": "$TEST_TRANSPORT", 00:22:49.291 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:49.291 "adrfam": "ipv4", 00:22:49.291 "trsvcid": "$NVMF_PORT", 00:22:49.291 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:49.291 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:49.291 "hdgst": ${hdgst:-false}, 00:22:49.291 "ddgst": ${ddgst:-false} 00:22:49.291 }, 00:22:49.291 "method": "bdev_nvme_attach_controller" 00:22:49.291 } 00:22:49.291 EOF 00:22:49.291 )") 00:22:49.291 10:31:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:22:49.291 10:31:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:22:49.291 10:31:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:22:49.291 10:31:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:49.291 "params": { 00:22:49.291 "name": "Nvme1", 00:22:49.291 "trtype": "tcp", 00:22:49.291 "traddr": "10.0.0.2", 00:22:49.291 "adrfam": "ipv4", 00:22:49.291 "trsvcid": "4420", 00:22:49.291 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:49.291 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:49.291 "hdgst": false, 00:22:49.291 "ddgst": false 00:22:49.291 }, 00:22:49.291 "method": "bdev_nvme_attach_controller" 00:22:49.291 }' 00:22:49.550 [2024-07-14 10:31:34.273276] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
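Note: before bdevio is launched, the target is configured through rpc_cmd, which is equivalent to calling scripts/rpc.py against the default /var/tmp/spdk.sock socket. The same sequence in plain rpc.py form (commands copied from the trace above; only the socket path is assumed to be the default):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevio is then pointed at that subsystem through the bdev_nvme_attach_controller JSON fed over /dev/fd/62, as printed above.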
00:22:49.550 [2024-07-14 10:31:34.273325] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2443916 ] 00:22:49.550 [2024-07-14 10:31:34.340264] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:49.550 [2024-07-14 10:31:34.406445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:49.550 [2024-07-14 10:31:34.406554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:49.550 [2024-07-14 10:31:34.406555] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:49.809 I/O targets: 00:22:49.809 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:49.809 00:22:49.809 00:22:49.809 CUnit - A unit testing framework for C - Version 2.1-3 00:22:49.809 http://cunit.sourceforge.net/ 00:22:49.809 00:22:49.809 00:22:49.809 Suite: bdevio tests on: Nvme1n1 00:22:49.809 Test: blockdev write read block ...passed 00:22:49.809 Test: blockdev write zeroes read block ...passed 00:22:49.809 Test: blockdev write zeroes read no split ...passed 00:22:49.809 Test: blockdev write zeroes read split ...passed 00:22:50.067 Test: blockdev write zeroes read split partial ...passed 00:22:50.067 Test: blockdev reset ...[2024-07-14 10:31:34.830044] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:50.067 [2024-07-14 10:31:34.830108] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b1e80 (9): Bad file descriptor 00:22:50.068 [2024-07-14 10:31:34.932822] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:50.068 passed 00:22:50.068 Test: blockdev write read 8 blocks ...passed 00:22:50.068 Test: blockdev write read size > 128k ...passed 00:22:50.068 Test: blockdev write read invalid size ...passed 00:22:50.068 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:50.068 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:50.068 Test: blockdev write read max offset ...passed 00:22:50.327 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:50.327 Test: blockdev writev readv 8 blocks ...passed 00:22:50.327 Test: blockdev writev readv 30 x 1block ...passed 00:22:50.327 Test: blockdev writev readv block ...passed 00:22:50.327 Test: blockdev writev readv size > 128k ...passed 00:22:50.327 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:50.327 Test: blockdev comparev and writev ...[2024-07-14 10:31:35.147977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:50.327 [2024-07-14 10:31:35.148004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.327 [2024-07-14 10:31:35.148018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:50.327 [2024-07-14 10:31:35.148026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.327 [2024-07-14 10:31:35.148270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:50.327 [2024-07-14 10:31:35.148281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:50.327 [2024-07-14 10:31:35.148293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:50.327 [2024-07-14 10:31:35.148300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:50.327 [2024-07-14 10:31:35.148534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:50.327 [2024-07-14 10:31:35.148543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:50.327 [2024-07-14 10:31:35.148554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:50.327 [2024-07-14 10:31:35.148561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:50.327 [2024-07-14 10:31:35.148789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:50.327 [2024-07-14 10:31:35.148799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:50.327 [2024-07-14 10:31:35.148813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:50.327 [2024-07-14 10:31:35.148820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:50.327 passed 00:22:50.327 Test: blockdev nvme passthru rw ...passed 00:22:50.327 Test: blockdev nvme passthru vendor specific ...[2024-07-14 10:31:35.231600] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:50.327 [2024-07-14 10:31:35.231614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:50.327 [2024-07-14 10:31:35.231728] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:50.327 [2024-07-14 10:31:35.231737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:50.327 [2024-07-14 10:31:35.231846] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:50.327 [2024-07-14 10:31:35.231855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:50.327 [2024-07-14 10:31:35.231972] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:50.327 [2024-07-14 10:31:35.231981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:50.327 passed 00:22:50.327 Test: blockdev nvme admin passthru ...passed 00:22:50.327 Test: blockdev copy ...passed 00:22:50.327 00:22:50.327 Run Summary: Type Total Ran Passed Failed Inactive 00:22:50.327 suites 1 1 n/a 0 0 00:22:50.327 tests 23 23 23 0 0 00:22:50.327 asserts 152 152 152 0 n/a 00:22:50.327 00:22:50.327 Elapsed time = 1.265 seconds 00:22:50.586 10:31:35 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:50.586 10:31:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.586 10:31:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:50.845 10:31:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.845 10:31:35 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:50.845 10:31:35 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:50.845 10:31:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:50.845 10:31:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:22:50.845 10:31:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:50.845 10:31:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:22:50.845 10:31:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:50.845 10:31:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:50.845 rmmod nvme_tcp 00:22:50.845 rmmod nvme_fabrics 00:22:50.845 rmmod nvme_keyring 00:22:50.845 10:31:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:50.845 10:31:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:22:50.845 10:31:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:22:50.845 10:31:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 2443832 ']' 00:22:50.845 10:31:35 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 2443832 00:22:50.845 10:31:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 2443832 ']' 00:22:50.845 10:31:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 2443832 00:22:50.845 10:31:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:22:50.845 10:31:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:50.845 10:31:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2443832 00:22:50.845 10:31:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:22:50.845 10:31:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:22:50.845 10:31:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2443832' 00:22:50.845 killing process with pid 2443832 00:22:50.845 10:31:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 2443832 00:22:50.845 10:31:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 2443832 00:22:51.104 10:31:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:51.104 10:31:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:51.104 10:31:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:51.104 10:31:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:51.104 10:31:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:51.104 10:31:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:51.104 10:31:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:51.104 10:31:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:53.082 10:31:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:53.082 00:22:53.082 real 0m10.637s 00:22:53.082 user 0m13.863s 00:22:53.082 sys 0m5.210s 00:22:53.082 10:31:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:53.082 10:31:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:53.082 ************************************ 00:22:53.082 END TEST nvmf_bdevio_no_huge 00:22:53.082 ************************************ 00:22:53.343 10:31:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:53.343 10:31:38 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:53.343 10:31:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:53.343 10:31:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:53.343 10:31:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:53.343 ************************************ 00:22:53.343 START TEST nvmf_tls 00:22:53.343 ************************************ 00:22:53.343 10:31:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:53.343 * Looking for test storage... 
00:22:53.343 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:53.343 10:31:38 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:53.343 10:31:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:53.343 10:31:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:53.343 10:31:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:53.343 10:31:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:53.343 10:31:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:53.343 10:31:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:53.343 10:31:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:53.343 10:31:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:53.343 10:31:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:53.343 10:31:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:53.343 10:31:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:53.343 10:31:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:53.343 10:31:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:53.343 10:31:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:53.343 10:31:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:53.343 10:31:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:53.343 10:31:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:53.343 10:31:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:53.343 10:31:38 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:53.343 10:31:38 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:53.343 10:31:38 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:53.343 10:31:38 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.343 10:31:38 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.343 10:31:38 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.343 10:31:38 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:53.343 10:31:38 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.343 10:31:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:22:53.343 10:31:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:53.343 10:31:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:53.343 10:31:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:53.343 10:31:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:53.343 10:31:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:53.343 10:31:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:53.343 10:31:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:53.343 10:31:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:53.343 10:31:38 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:53.343 10:31:38 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:22:53.343 10:31:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:53.343 10:31:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:53.343 10:31:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:53.343 10:31:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:53.343 10:31:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:53.343 10:31:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:53.343 10:31:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:53.343 10:31:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:53.343 10:31:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:53.343 10:31:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:53.343 10:31:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:22:53.343 10:31:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:59.913 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:59.913 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:22:59.913 
10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:59.913 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:59.913 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:59.913 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:59.913 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:59.913 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:22:59.913 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:59.913 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:22:59.913 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:22:59.913 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:22:59.913 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:22:59.913 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:22:59.913 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:22:59.913 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:59.913 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:59.913 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:59.913 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:59.913 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:59.913 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:59.913 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:59.913 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:59.913 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:59.913 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:59.913 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:59.913 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:59.913 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:59.913 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:59.913 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:59.913 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:59.913 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:59.913 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:59.913 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:59.913 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:59.913 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:59.914 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:59.914 Found net devices under 0000:86:00.0: cvl_0_0 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:59.914 Found net devices under 0000:86:00.1: cvl_0_1 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:59.914 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:59.914 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:22:59.914 00:22:59.914 --- 10.0.0.2 ping statistics --- 00:22:59.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:59.914 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:59.914 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:59.914 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:22:59.914 00:22:59.914 --- 10.0.0.1 ping statistics --- 00:22:59.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:59.914 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2447661 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2447661 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2447661 ']' 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:59.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:59.914 10:31:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:59.914 [2024-07-14 10:31:44.038545] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:22:59.914 [2024-07-14 10:31:44.038587] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:59.914 EAL: No free 2048 kB hugepages reported on node 1 00:22:59.914 [2024-07-14 10:31:44.110098] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:59.914 [2024-07-14 10:31:44.149477] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:59.914 [2024-07-14 10:31:44.149516] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:59.914 [2024-07-14 10:31:44.149523] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:59.914 [2024-07-14 10:31:44.149529] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:59.914 [2024-07-14 10:31:44.149534] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:59.914 [2024-07-14 10:31:44.149551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:59.914 10:31:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:59.914 10:31:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:59.914 10:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:59.914 10:31:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:59.914 10:31:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:59.914 10:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:59.914 10:31:44 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:22:59.914 10:31:44 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:59.914 true 00:22:59.914 10:31:44 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:59.914 10:31:44 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:22:59.914 10:31:44 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:22:59.914 10:31:44 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:22:59.915 10:31:44 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:59.915 10:31:44 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:59.915 10:31:44 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:23:00.173 10:31:44 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:23:00.173 10:31:44 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:23:00.173 10:31:44 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:23:00.174 10:31:45 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:00.174 10:31:45 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:23:00.431 10:31:45 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:23:00.431 10:31:45 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:23:00.432 10:31:45 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:23:00.432 10:31:45 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:00.432 10:31:45 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:23:00.432 10:31:45 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:23:00.432 10:31:45 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:23:00.690 10:31:45 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:00.690 10:31:45 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:23:00.949 10:31:45 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:23:00.949 10:31:45 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:23:00.949 10:31:45 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:23:00.949 10:31:45 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:00.949 10:31:45 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:23:01.208 10:31:46 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:23:01.208 10:31:46 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:23:01.208 10:31:46 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:23:01.208 10:31:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:23:01.208 10:31:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:23:01.208 10:31:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:01.208 10:31:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:23:01.208 10:31:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:23:01.208 10:31:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:23:01.208 10:31:46 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:01.208 10:31:46 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:23:01.208 10:31:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:23:01.208 10:31:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:23:01.208 10:31:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:01.208 10:31:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:23:01.208 10:31:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:23:01.208 10:31:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:23:01.208 10:31:46 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:01.208 10:31:46 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:23:01.208 10:31:46 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.TP7ggB6sPb 00:23:01.208 10:31:46 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:23:01.208 10:31:46 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.Msqv4IhFSI 00:23:01.208 10:31:46 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:01.208 10:31:46 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:01.208 10:31:46 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.TP7ggB6sPb 00:23:01.208 10:31:46 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.Msqv4IhFSI 00:23:01.208 10:31:46 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:23:01.467 10:31:46 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:23:01.724 10:31:46 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.TP7ggB6sPb 00:23:01.724 10:31:46 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.TP7ggB6sPb 00:23:01.724 10:31:46 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:01.982 [2024-07-14 10:31:46.720418] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:01.982 10:31:46 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:01.982 10:31:46 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:02.240 [2024-07-14 10:31:47.057274] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:02.240 [2024-07-14 10:31:47.057472] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:02.240 10:31:47 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:02.499 malloc0 00:23:02.499 10:31:47 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:02.499 10:31:47 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.TP7ggB6sPb 00:23:02.757 [2024-07-14 10:31:47.562782] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:02.757 10:31:47 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.TP7ggB6sPb 00:23:02.757 EAL: No free 2048 kB hugepages reported on node 1 00:23:12.732 Initializing NVMe Controllers 00:23:12.732 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:12.732 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:12.732 Initialization complete. Launching workers. 
00:23:12.732 ======================================================== 00:23:12.732 Latency(us) 00:23:12.732 Device Information : IOPS MiB/s Average min max 00:23:12.732 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16590.78 64.81 3857.98 801.23 7128.41 00:23:12.732 ======================================================== 00:23:12.732 Total : 16590.78 64.81 3857.98 801.23 7128.41 00:23:12.732 00:23:12.732 10:31:57 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.TP7ggB6sPb 00:23:12.732 10:31:57 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:12.732 10:31:57 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:12.732 10:31:57 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:12.732 10:31:57 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.TP7ggB6sPb' 00:23:12.732 10:31:57 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:12.732 10:31:57 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2450002 00:23:12.732 10:31:57 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:12.732 10:31:57 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:12.732 10:31:57 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2450002 /var/tmp/bdevperf.sock 00:23:12.732 10:31:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2450002 ']' 00:23:12.732 10:31:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:12.732 10:31:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:12.732 10:31:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:12.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:12.732 10:31:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:12.732 10:31:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:12.991 [2024-07-14 10:31:57.727415] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:23:12.991 [2024-07-14 10:31:57.727463] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2450002 ] 00:23:12.991 EAL: No free 2048 kB hugepages reported on node 1 00:23:12.991 [2024-07-14 10:31:57.795044] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:12.991 [2024-07-14 10:31:57.835254] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:12.991 10:31:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:12.991 10:31:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:12.991 10:31:57 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.TP7ggB6sPb 00:23:13.250 [2024-07-14 10:31:58.070511] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:13.250 [2024-07-14 10:31:58.070581] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:13.250 TLSTESTn1 00:23:13.250 10:31:58 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:13.509 Running I/O for 10 seconds... 00:23:23.489 00:23:23.489 Latency(us) 00:23:23.489 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:23.489 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:23.489 Verification LBA range: start 0x0 length 0x2000 00:23:23.489 TLSTESTn1 : 10.01 5525.31 21.58 0.00 0.00 23129.88 6325.65 27126.21 00:23:23.489 =================================================================================================================== 00:23:23.489 Total : 5525.31 21.58 0.00 0.00 23129.88 6325.65 27126.21 00:23:23.489 0 00:23:23.489 10:32:08 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:23.489 10:32:08 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 2450002 00:23:23.489 10:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2450002 ']' 00:23:23.489 10:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2450002 00:23:23.489 10:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:23.489 10:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:23.489 10:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2450002 00:23:23.489 10:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:23.489 10:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:23.489 10:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2450002' 00:23:23.489 killing process with pid 2450002 00:23:23.489 10:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2450002 00:23:23.489 Received shutdown signal, test time was about 10.000000 seconds 00:23:23.489 00:23:23.489 Latency(us) 00:23:23.489 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:23:23.489 =================================================================================================================== 00:23:23.489 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:23.489 [2024-07-14 10:32:08.362091] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:23.489 10:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2450002 00:23:23.749 10:32:08 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Msqv4IhFSI 00:23:23.749 10:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:23.749 10:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Msqv4IhFSI 00:23:23.749 10:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:23.749 10:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:23.749 10:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:23.749 10:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:23.749 10:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Msqv4IhFSI 00:23:23.749 10:32:08 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:23.749 10:32:08 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:23.749 10:32:08 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:23.749 10:32:08 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Msqv4IhFSI' 00:23:23.749 10:32:08 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:23.749 10:32:08 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2451611 00:23:23.749 10:32:08 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:23.749 10:32:08 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:23.749 10:32:08 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2451611 /var/tmp/bdevperf.sock 00:23:23.749 10:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2451611 ']' 00:23:23.749 10:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:23.749 10:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:23.749 10:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:23.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:23.749 10:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:23.749 10:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:23.749 [2024-07-14 10:32:08.582020] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:23:23.749 [2024-07-14 10:32:08.582069] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2451611 ] 00:23:23.749 EAL: No free 2048 kB hugepages reported on node 1 00:23:23.749 [2024-07-14 10:32:08.641344] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:23.749 [2024-07-14 10:32:08.677999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:24.061 10:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:24.061 10:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:24.061 10:32:08 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Msqv4IhFSI 00:23:24.061 [2024-07-14 10:32:08.929567] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:24.061 [2024-07-14 10:32:08.929645] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:24.061 [2024-07-14 10:32:08.940943] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:24.061 [2024-07-14 10:32:08.941763] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8bff0 (107): Transport endpoint is not connected 00:23:24.061 [2024-07-14 10:32:08.942756] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8bff0 (9): Bad file descriptor 00:23:24.061 [2024-07-14 10:32:08.943758] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:24.061 [2024-07-14 10:32:08.943767] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:24.061 [2024-07-14 10:32:08.943776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
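The NVMeTLSkey-1:01:...: strings used as PSKs throughout this run come from the format_interchange_psk helper invoked earlier in the transcript (nvmf/common.sh, via an inline python heredoc), and the failure logged just above is the intended negative case: the initiator offered the second key, which was never registered on the target. A minimal sketch of how those key strings appear to be built follows; the little-endian CRC-32 suffix and the digest-id middle field are assumptions, not something the log itself states.

format_interchange_psk_sketch() {
    # Sketch only -- not the verbatim nvmf/common.sh helper.
    # Assumption: the key string is suffixed with its little-endian CRC-32
    # before base64 encoding, and the middle field is the digest selector.
    local key=$1 digest=${2:-1}
    python3 - "$key" "$digest" << 'EOF'
import base64, sys, zlib
key, digest = sys.argv[1], int(sys.argv[2])
crc = zlib.crc32(key.encode()).to_bytes(4, "little")  # assumed byte order
print("NVMeTLSkey-1:{:02x}:{}:".format(digest, base64.b64encode(key.encode() + crc).decode()))
EOF
}

# If the assumptions hold, this reproduces the first key seen in the log,
# NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
format_interchange_psk_sketch 00112233445566778899aabbccddeeff 1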
00:23:24.061 request: 00:23:24.061 { 00:23:24.061 "name": "TLSTEST", 00:23:24.061 "trtype": "tcp", 00:23:24.061 "traddr": "10.0.0.2", 00:23:24.061 "adrfam": "ipv4", 00:23:24.061 "trsvcid": "4420", 00:23:24.061 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:24.061 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:24.061 "prchk_reftag": false, 00:23:24.061 "prchk_guard": false, 00:23:24.061 "hdgst": false, 00:23:24.061 "ddgst": false, 00:23:24.061 "psk": "/tmp/tmp.Msqv4IhFSI", 00:23:24.061 "method": "bdev_nvme_attach_controller", 00:23:24.061 "req_id": 1 00:23:24.061 } 00:23:24.061 Got JSON-RPC error response 00:23:24.061 response: 00:23:24.061 { 00:23:24.061 "code": -5, 00:23:24.061 "message": "Input/output error" 00:23:24.061 } 00:23:24.061 10:32:08 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2451611 00:23:24.061 10:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2451611 ']' 00:23:24.061 10:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2451611 00:23:24.061 10:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:24.061 10:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:24.061 10:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2451611 00:23:24.061 10:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:24.061 10:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:24.061 10:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2451611' 00:23:24.061 killing process with pid 2451611 00:23:24.061 10:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2451611 00:23:24.061 Received shutdown signal, test time was about 10.000000 seconds 00:23:24.061 00:23:24.061 Latency(us) 00:23:24.061 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:24.061 =================================================================================================================== 00:23:24.061 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:24.061 [2024-07-14 10:32:09.017390] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:24.061 10:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2451611 00:23:24.320 10:32:09 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:24.320 10:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:24.320 10:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:24.320 10:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:24.320 10:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:24.320 10:32:09 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.TP7ggB6sPb 00:23:24.320 10:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:24.320 10:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.TP7ggB6sPb 00:23:24.320 10:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:24.320 10:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:24.320 10:32:09 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:24.320 10:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:24.320 10:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.TP7ggB6sPb 00:23:24.320 10:32:09 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:24.320 10:32:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:24.320 10:32:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:23:24.320 10:32:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.TP7ggB6sPb' 00:23:24.320 10:32:09 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:24.320 10:32:09 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2451845 00:23:24.320 10:32:09 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:24.320 10:32:09 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:24.320 10:32:09 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2451845 /var/tmp/bdevperf.sock 00:23:24.320 10:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2451845 ']' 00:23:24.320 10:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:24.320 10:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:24.320 10:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:24.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:24.320 10:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:24.320 10:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:24.320 [2024-07-14 10:32:09.230281] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:23:24.320 [2024-07-14 10:32:09.230328] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2451845 ] 00:23:24.320 EAL: No free 2048 kB hugepages reported on node 1 00:23:24.320 [2024-07-14 10:32:09.297680] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:24.579 [2024-07-14 10:32:09.337604] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:24.579 10:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:24.579 10:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:24.579 10:32:09 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.TP7ggB6sPb 00:23:24.838 [2024-07-14 10:32:09.576605] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:24.838 [2024-07-14 10:32:09.576692] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:24.838 [2024-07-14 10:32:09.582531] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:24.839 [2024-07-14 10:32:09.582553] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:24.839 [2024-07-14 10:32:09.582592] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:24.839 [2024-07-14 10:32:09.582891] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x149dff0 (107): Transport endpoint is not connected 00:23:24.839 [2024-07-14 10:32:09.583884] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x149dff0 (9): Bad file descriptor 00:23:24.839 [2024-07-14 10:32:09.584885] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:24.839 [2024-07-14 10:32:09.584895] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:24.839 [2024-07-14 10:32:09.584904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
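The "Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1" errors above are the expected outcome of this negative test: only host1 was added to cnode1 with a PSK (target/tls.sh@58 earlier in the transcript), so the target has no key to offer host2 during the handshake. For contrast, a sketch of the target-side registration that would let this attach succeed, using the same rpc.py call seen elsewhere in the log (the full /var/jenkins/workspace/... path is shortened here):

# Hypothetical fix-up, not part of this test run: register host2 against
# cnode1 with the same PSK file before the initiator connects.
./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.TP7ggB6sPb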
00:23:24.839 request: 00:23:24.839 { 00:23:24.839 "name": "TLSTEST", 00:23:24.839 "trtype": "tcp", 00:23:24.839 "traddr": "10.0.0.2", 00:23:24.839 "adrfam": "ipv4", 00:23:24.839 "trsvcid": "4420", 00:23:24.839 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:24.839 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:24.839 "prchk_reftag": false, 00:23:24.839 "prchk_guard": false, 00:23:24.839 "hdgst": false, 00:23:24.839 "ddgst": false, 00:23:24.839 "psk": "/tmp/tmp.TP7ggB6sPb", 00:23:24.839 "method": "bdev_nvme_attach_controller", 00:23:24.839 "req_id": 1 00:23:24.839 } 00:23:24.839 Got JSON-RPC error response 00:23:24.839 response: 00:23:24.839 { 00:23:24.839 "code": -5, 00:23:24.839 "message": "Input/output error" 00:23:24.839 } 00:23:24.839 10:32:09 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2451845 00:23:24.839 10:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2451845 ']' 00:23:24.839 10:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2451845 00:23:24.839 10:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:24.839 10:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:24.839 10:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2451845 00:23:24.839 10:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:24.839 10:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:24.839 10:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2451845' 00:23:24.839 killing process with pid 2451845 00:23:24.839 10:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2451845 00:23:24.839 Received shutdown signal, test time was about 10.000000 seconds 00:23:24.839 00:23:24.839 Latency(us) 00:23:24.839 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:24.839 =================================================================================================================== 00:23:24.839 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:24.839 [2024-07-14 10:32:09.646483] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:24.839 10:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2451845 00:23:24.839 10:32:09 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:24.839 10:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:24.839 10:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:24.839 10:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:24.839 10:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:24.839 10:32:09 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.TP7ggB6sPb 00:23:24.839 10:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:24.839 10:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.TP7ggB6sPb 00:23:24.839 10:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:24.839 10:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:24.839 10:32:09 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:24.839 10:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:24.839 10:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.TP7ggB6sPb 00:23:24.839 10:32:09 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:24.839 10:32:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:23:24.839 10:32:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:24.839 10:32:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.TP7ggB6sPb' 00:23:24.839 10:32:09 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:24.839 10:32:09 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2451863 00:23:24.839 10:32:09 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:24.839 10:32:09 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:24.839 10:32:09 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2451863 /var/tmp/bdevperf.sock 00:23:24.839 10:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2451863 ']' 00:23:24.839 10:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:24.839 10:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:24.839 10:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:24.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:24.839 10:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:24.839 10:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:25.099 [2024-07-14 10:32:09.857893] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:23:25.099 [2024-07-14 10:32:09.857941] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2451863 ] 00:23:25.099 EAL: No free 2048 kB hugepages reported on node 1 00:23:25.099 [2024-07-14 10:32:09.920746] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:25.099 [2024-07-14 10:32:09.961403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:25.099 10:32:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:25.099 10:32:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:25.099 10:32:10 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.TP7ggB6sPb 00:23:25.358 [2024-07-14 10:32:10.211263] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:25.358 [2024-07-14 10:32:10.211344] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:25.358 [2024-07-14 10:32:10.215652] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:25.358 [2024-07-14 10:32:10.215675] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:25.358 [2024-07-14 10:32:10.215701] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:25.358 [2024-07-14 10:32:10.216440] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19a2ff0 (107): Transport endpoint is not connected 00:23:25.358 [2024-07-14 10:32:10.217432] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19a2ff0 (9): Bad file descriptor 00:23:25.358 [2024-07-14 10:32:10.218433] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:25.358 [2024-07-14 10:32:10.218442] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:25.358 [2024-07-14 10:32:10.218450] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
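Each of these negative cases is driven through the NOT wrapper from common/autotest_common.sh, whose xtrace (local es=0, valid_exec_arg, (( es > 128 )), (( !es == 0 ))) is interleaved above. A simplified sketch of that pattern is below; the real helper also handles signal exits and argument validation, which this sketch omits.

# Simplified sketch of the NOT() negative-assertion pattern visible in the
# xtrace; it succeeds only when the wrapped command fails.
NOT() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))
}

# Usage as in this step of the log: attaching to the non-existent cnode2
# subsystem must fail for the test to pass.
NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.TP7ggB6sPb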
00:23:25.358 request: 00:23:25.358 { 00:23:25.358 "name": "TLSTEST", 00:23:25.358 "trtype": "tcp", 00:23:25.358 "traddr": "10.0.0.2", 00:23:25.358 "adrfam": "ipv4", 00:23:25.358 "trsvcid": "4420", 00:23:25.358 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:25.358 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:25.358 "prchk_reftag": false, 00:23:25.358 "prchk_guard": false, 00:23:25.358 "hdgst": false, 00:23:25.358 "ddgst": false, 00:23:25.358 "psk": "/tmp/tmp.TP7ggB6sPb", 00:23:25.358 "method": "bdev_nvme_attach_controller", 00:23:25.358 "req_id": 1 00:23:25.358 } 00:23:25.358 Got JSON-RPC error response 00:23:25.358 response: 00:23:25.358 { 00:23:25.358 "code": -5, 00:23:25.358 "message": "Input/output error" 00:23:25.358 } 00:23:25.358 10:32:10 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2451863 00:23:25.358 10:32:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2451863 ']' 00:23:25.358 10:32:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2451863 00:23:25.358 10:32:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:25.358 10:32:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:25.359 10:32:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2451863 00:23:25.359 10:32:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:25.359 10:32:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:25.359 10:32:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2451863' 00:23:25.359 killing process with pid 2451863 00:23:25.359 10:32:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2451863 00:23:25.359 Received shutdown signal, test time was about 10.000000 seconds 00:23:25.359 00:23:25.359 Latency(us) 00:23:25.359 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:25.359 =================================================================================================================== 00:23:25.359 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:25.359 [2024-07-14 10:32:10.291365] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:25.359 10:32:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2451863 00:23:25.618 10:32:10 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:25.618 10:32:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:25.618 10:32:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:25.618 10:32:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:25.618 10:32:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:25.618 10:32:10 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:25.618 10:32:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:25.618 10:32:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:25.618 10:32:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:25.618 10:32:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:25.618 10:32:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type 
-t run_bdevperf 00:23:25.618 10:32:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:25.618 10:32:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:25.618 10:32:10 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:25.618 10:32:10 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:25.618 10:32:10 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:25.618 10:32:10 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:23:25.618 10:32:10 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:25.618 10:32:10 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2452018 00:23:25.618 10:32:10 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:25.618 10:32:10 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:25.618 10:32:10 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2452018 /var/tmp/bdevperf.sock 00:23:25.618 10:32:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2452018 ']' 00:23:25.618 10:32:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:25.618 10:32:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:25.618 10:32:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:25.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:25.618 10:32:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:25.618 10:32:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:25.618 [2024-07-14 10:32:10.505387] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:23:25.618 [2024-07-14 10:32:10.505434] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2452018 ] 00:23:25.618 EAL: No free 2048 kB hugepages reported on node 1 00:23:25.618 [2024-07-14 10:32:10.573483] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:25.877 [2024-07-14 10:32:10.614181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:25.877 10:32:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:25.877 10:32:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:25.877 10:32:10 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:25.877 [2024-07-14 10:32:10.855769] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:25.877 [2024-07-14 10:32:10.857508] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18365e0 (9): Bad file descriptor 00:23:25.877 [2024-07-14 10:32:10.858507] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:25.877 [2024-07-14 10:32:10.858516] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:25.877 [2024-07-14 10:32:10.858525] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
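This step drops --psk entirely while the listener was created with nvmf_subsystem_add_listener ... -k, so the connection is torn down before the controller initializes (the errno 107 and Input/output error shown around this point). For reference, the initiator-side RPC that does succeed elsewhere in this run looks like the sketch below; only the workspace path is shortened.

# Working variant used by the positive tests in this log: the --psk file
# must match the key registered for the host on the target side.
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk /tmp/tmp.TP7ggB6sPb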
00:23:26.135 request: 00:23:26.135 { 00:23:26.135 "name": "TLSTEST", 00:23:26.135 "trtype": "tcp", 00:23:26.135 "traddr": "10.0.0.2", 00:23:26.135 "adrfam": "ipv4", 00:23:26.135 "trsvcid": "4420", 00:23:26.135 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:26.135 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:26.135 "prchk_reftag": false, 00:23:26.135 "prchk_guard": false, 00:23:26.135 "hdgst": false, 00:23:26.135 "ddgst": false, 00:23:26.135 "method": "bdev_nvme_attach_controller", 00:23:26.135 "req_id": 1 00:23:26.135 } 00:23:26.135 Got JSON-RPC error response 00:23:26.135 response: 00:23:26.135 { 00:23:26.135 "code": -5, 00:23:26.135 "message": "Input/output error" 00:23:26.135 } 00:23:26.135 10:32:10 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2452018 00:23:26.135 10:32:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2452018 ']' 00:23:26.135 10:32:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2452018 00:23:26.135 10:32:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:26.135 10:32:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:26.135 10:32:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2452018 00:23:26.135 10:32:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:26.135 10:32:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:26.135 10:32:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2452018' 00:23:26.135 killing process with pid 2452018 00:23:26.135 10:32:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2452018 00:23:26.135 Received shutdown signal, test time was about 10.000000 seconds 00:23:26.135 00:23:26.135 Latency(us) 00:23:26.135 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:26.135 =================================================================================================================== 00:23:26.135 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:26.135 10:32:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2452018 00:23:26.135 10:32:11 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:26.135 10:32:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:26.135 10:32:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:26.135 10:32:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:26.135 10:32:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:26.135 10:32:11 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 2447661 00:23:26.135 10:32:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2447661 ']' 00:23:26.135 10:32:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2447661 00:23:26.135 10:32:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:26.135 10:32:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:26.135 10:32:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2447661 00:23:26.393 10:32:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:26.393 10:32:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:26.393 10:32:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2447661' 00:23:26.393 
killing process with pid 2447661 00:23:26.393 10:32:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2447661 00:23:26.393 [2024-07-14 10:32:11.140977] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:26.393 10:32:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2447661 00:23:26.393 10:32:11 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:23:26.393 10:32:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:23:26.393 10:32:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:23:26.393 10:32:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:26.393 10:32:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:23:26.393 10:32:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:23:26.393 10:32:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:23:26.393 10:32:11 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:26.393 10:32:11 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:23:26.393 10:32:11 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.v8w5xsNwG6 00:23:26.393 10:32:11 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:26.393 10:32:11 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.v8w5xsNwG6 00:23:26.393 10:32:11 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:23:26.393 10:32:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:26.393 10:32:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:26.393 10:32:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:26.650 10:32:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2452117 00:23:26.650 10:32:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2452117 00:23:26.650 10:32:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:26.650 10:32:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2452117 ']' 00:23:26.650 10:32:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:26.650 10:32:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:26.650 10:32:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:26.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:26.650 10:32:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:26.650 10:32:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:26.650 [2024-07-14 10:32:11.427196] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:23:26.651 [2024-07-14 10:32:11.427249] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:26.651 EAL: No free 2048 kB hugepages reported on node 1 00:23:26.651 [2024-07-14 10:32:11.494596] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.651 [2024-07-14 10:32:11.533636] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:26.651 [2024-07-14 10:32:11.533674] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:26.651 [2024-07-14 10:32:11.533681] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:26.651 [2024-07-14 10:32:11.533687] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:26.651 [2024-07-14 10:32:11.533693] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:26.651 [2024-07-14 10:32:11.533709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:26.651 10:32:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:26.651 10:32:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:26.651 10:32:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:26.651 10:32:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:26.651 10:32:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:26.909 10:32:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:26.909 10:32:11 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.v8w5xsNwG6 00:23:26.909 10:32:11 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.v8w5xsNwG6 00:23:26.909 10:32:11 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:26.909 [2024-07-14 10:32:11.813648] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:26.909 10:32:11 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:27.168 10:32:12 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:27.427 [2024-07-14 10:32:12.158545] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:27.427 [2024-07-14 10:32:12.158727] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:27.427 10:32:12 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:27.427 malloc0 00:23:27.427 10:32:12 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:27.686 10:32:12 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.v8w5xsNwG6 00:23:27.686 [2024-07-14 10:32:12.652009] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:27.686 10:32:12 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.v8w5xsNwG6 00:23:27.686 10:32:12 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:27.686 10:32:12 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:27.686 10:32:12 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:27.686 10:32:12 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.v8w5xsNwG6' 00:23:27.686 10:32:12 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:27.946 10:32:12 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2452371 00:23:27.946 10:32:12 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:27.946 10:32:12 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:27.946 10:32:12 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2452371 /var/tmp/bdevperf.sock 00:23:27.946 10:32:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2452371 ']' 00:23:27.946 10:32:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:27.946 10:32:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:27.946 10:32:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:27.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:27.946 10:32:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:27.946 10:32:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:27.946 [2024-07-14 10:32:12.696134] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:23:27.946 [2024-07-14 10:32:12.696180] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2452371 ] 00:23:27.946 EAL: No free 2048 kB hugepages reported on node 1 00:23:27.946 [2024-07-14 10:32:12.763538] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.946 [2024-07-14 10:32:12.802396] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:27.946 10:32:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:27.946 10:32:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:27.946 10:32:12 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.v8w5xsNwG6 00:23:28.205 [2024-07-14 10:32:13.053690] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:28.205 [2024-07-14 10:32:13.053761] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:28.205 TLSTESTn1 00:23:28.205 10:32:13 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:28.462 Running I/O for 10 seconds... 00:23:38.568 00:23:38.568 Latency(us) 00:23:38.568 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:38.568 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:38.568 Verification LBA range: start 0x0 length 0x2000 00:23:38.568 TLSTESTn1 : 10.01 5497.02 21.47 0.00 0.00 23250.50 4957.94 27468.13 00:23:38.568 =================================================================================================================== 00:23:38.568 Total : 5497.02 21.47 0.00 0.00 23250.50 4957.94 27468.13 00:23:38.568 0 00:23:38.568 10:32:23 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:38.568 10:32:23 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 2452371 00:23:38.568 10:32:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2452371 ']' 00:23:38.568 10:32:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2452371 00:23:38.568 10:32:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:38.568 10:32:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:38.568 10:32:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2452371 00:23:38.568 10:32:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:38.568 10:32:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:38.568 10:32:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2452371' 00:23:38.568 killing process with pid 2452371 00:23:38.568 10:32:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2452371 00:23:38.568 Received shutdown signal, test time was about 10.000000 seconds 00:23:38.568 00:23:38.568 Latency(us) 00:23:38.568 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:23:38.568 =================================================================================================================== 00:23:38.568 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:38.568 [2024-07-14 10:32:23.328515] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:38.568 10:32:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2452371 00:23:38.568 10:32:23 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.v8w5xsNwG6 00:23:38.568 10:32:23 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.v8w5xsNwG6 00:23:38.568 10:32:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:38.568 10:32:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.v8w5xsNwG6 00:23:38.568 10:32:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:38.568 10:32:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:38.568 10:32:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:38.568 10:32:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:38.568 10:32:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.v8w5xsNwG6 00:23:38.568 10:32:23 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:38.568 10:32:23 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:38.568 10:32:23 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:38.568 10:32:23 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.v8w5xsNwG6' 00:23:38.568 10:32:23 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:38.568 10:32:23 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2454193 00:23:38.568 10:32:23 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:38.568 10:32:23 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:38.568 10:32:23 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2454193 /var/tmp/bdevperf.sock 00:23:38.568 10:32:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2454193 ']' 00:23:38.568 10:32:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:38.568 10:32:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:38.568 10:32:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:38.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:38.568 10:32:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:38.568 10:32:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:38.568 [2024-07-14 10:32:23.549497] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:23:38.568 [2024-07-14 10:32:23.549547] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2454193 ] 00:23:38.826 EAL: No free 2048 kB hugepages reported on node 1 00:23:38.826 [2024-07-14 10:32:23.614453] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:38.826 [2024-07-14 10:32:23.653080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:38.826 10:32:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:38.826 10:32:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:38.826 10:32:23 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.v8w5xsNwG6 00:23:39.083 [2024-07-14 10:32:23.900908] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:39.083 [2024-07-14 10:32:23.900953] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:23:39.083 [2024-07-14 10:32:23.900959] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.v8w5xsNwG6 00:23:39.083 request: 00:23:39.083 { 00:23:39.083 "name": "TLSTEST", 00:23:39.083 "trtype": "tcp", 00:23:39.083 "traddr": "10.0.0.2", 00:23:39.083 "adrfam": "ipv4", 00:23:39.083 "trsvcid": "4420", 00:23:39.083 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:39.083 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:39.083 "prchk_reftag": false, 00:23:39.083 "prchk_guard": false, 00:23:39.083 "hdgst": false, 00:23:39.083 "ddgst": false, 00:23:39.083 "psk": "/tmp/tmp.v8w5xsNwG6", 00:23:39.083 "method": "bdev_nvme_attach_controller", 00:23:39.083 "req_id": 1 00:23:39.083 } 00:23:39.083 Got JSON-RPC error response 00:23:39.083 response: 00:23:39.083 { 00:23:39.083 "code": -1, 00:23:39.084 "message": "Operation not permitted" 00:23:39.084 } 00:23:39.084 10:32:23 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2454193 00:23:39.084 10:32:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2454193 ']' 00:23:39.084 10:32:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2454193 00:23:39.084 10:32:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:39.084 10:32:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:39.084 10:32:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2454193 00:23:39.084 10:32:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:39.084 10:32:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:39.084 10:32:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2454193' 00:23:39.084 killing process with pid 2454193 00:23:39.084 10:32:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2454193 00:23:39.084 Received shutdown signal, test time was about 10.000000 seconds 00:23:39.084 00:23:39.084 Latency(us) 00:23:39.084 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:39.084 
=================================================================================================================== 00:23:39.084 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:39.084 10:32:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2454193 00:23:39.341 10:32:24 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:39.341 10:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:39.341 10:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:39.341 10:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:39.341 10:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:39.341 10:32:24 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 2452117 00:23:39.341 10:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2452117 ']' 00:23:39.341 10:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2452117 00:23:39.341 10:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:39.341 10:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:39.341 10:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2452117 00:23:39.341 10:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:39.341 10:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:39.341 10:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2452117' 00:23:39.341 killing process with pid 2452117 00:23:39.341 10:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2452117 00:23:39.341 [2024-07-14 10:32:24.186946] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:39.341 10:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2452117 00:23:39.599 10:32:24 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:23:39.599 10:32:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:39.599 10:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:39.599 10:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:39.599 10:32:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2454222 00:23:39.599 10:32:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2454222 00:23:39.599 10:32:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:39.599 10:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2454222 ']' 00:23:39.599 10:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:39.599 10:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:39.599 10:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:39.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
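The "Operation not permitted" response above is the intended outcome of the chmod 0666 step: bdev_nvme refuses to load a PSK file that is group- or world-accessible, so the attach RPC fails and the surrounding NOT wrapper treats the non-zero exit as a pass. A minimal way to reproduce the same check against a running bdevperf instance, using the same paths as the log (the KEY and SPDK variables are shorthand added here):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  KEY=/tmp/tmp.v8w5xsNwG6
  chmod 0666 "$KEY"   # too permissive: the PSK loader is expected to reject it
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk "$KEY" || echo "attach rejected as expected"
  chmod 0600 "$KEY"   # owner read/write only satisfies the permission check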
00:23:39.599 10:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:39.599 10:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:39.599 [2024-07-14 10:32:24.418853] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:23:39.599 [2024-07-14 10:32:24.418898] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:39.599 EAL: No free 2048 kB hugepages reported on node 1 00:23:39.599 [2024-07-14 10:32:24.489985] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:39.599 [2024-07-14 10:32:24.528936] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:39.599 [2024-07-14 10:32:24.528975] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:39.599 [2024-07-14 10:32:24.528982] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:39.599 [2024-07-14 10:32:24.528988] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:39.599 [2024-07-14 10:32:24.528993] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:39.599 [2024-07-14 10:32:24.529017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:39.856 10:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:39.856 10:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:39.856 10:32:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:39.856 10:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:39.856 10:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:39.856 10:32:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:39.856 10:32:24 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.v8w5xsNwG6 00:23:39.856 10:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:39.856 10:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.v8w5xsNwG6 00:23:39.856 10:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:23:39.856 10:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:39.856 10:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:23:39.856 10:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:39.856 10:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.v8w5xsNwG6 00:23:39.857 10:32:24 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.v8w5xsNwG6 00:23:39.857 10:32:24 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:39.857 [2024-07-14 10:32:24.817374] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:39.857 10:32:24 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:40.114 
10:32:25 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:40.372 [2024-07-14 10:32:25.162261] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:40.372 [2024-07-14 10:32:25.162453] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:40.372 10:32:25 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:40.372 malloc0 00:23:40.630 10:32:25 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:40.630 10:32:25 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.v8w5xsNwG6 00:23:40.889 [2024-07-14 10:32:25.671828] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:23:40.889 [2024-07-14 10:32:25.671855] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:23:40.889 [2024-07-14 10:32:25.671878] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:40.889 request: 00:23:40.889 { 00:23:40.889 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:40.889 "host": "nqn.2016-06.io.spdk:host1", 00:23:40.889 "psk": "/tmp/tmp.v8w5xsNwG6", 00:23:40.889 "method": "nvmf_subsystem_add_host", 00:23:40.889 "req_id": 1 00:23:40.889 } 00:23:40.889 Got JSON-RPC error response 00:23:40.889 response: 00:23:40.889 { 00:23:40.889 "code": -32603, 00:23:40.889 "message": "Internal error" 00:23:40.889 } 00:23:40.889 10:32:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:40.889 10:32:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:40.889 10:32:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:40.889 10:32:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:40.889 10:32:25 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 2454222 00:23:40.889 10:32:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2454222 ']' 00:23:40.889 10:32:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2454222 00:23:40.889 10:32:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:40.889 10:32:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:40.889 10:32:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2454222 00:23:40.889 10:32:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:40.889 10:32:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:40.889 10:32:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2454222' 00:23:40.889 killing process with pid 2454222 00:23:40.889 10:32:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2454222 00:23:40.889 10:32:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2454222 00:23:41.149 10:32:25 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.v8w5xsNwG6 00:23:41.149 10:32:25 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:23:41.149 
10:32:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:41.149 10:32:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:41.149 10:32:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:41.149 10:32:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2454508 00:23:41.149 10:32:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2454508 00:23:41.149 10:32:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:41.149 10:32:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2454508 ']' 00:23:41.149 10:32:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:41.149 10:32:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:41.149 10:32:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:41.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:41.149 10:32:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:41.149 10:32:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:41.149 [2024-07-14 10:32:25.971971] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:23:41.149 [2024-07-14 10:32:25.972018] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:41.149 EAL: No free 2048 kB hugepages reported on node 1 00:23:41.149 [2024-07-14 10:32:26.044146] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:41.149 [2024-07-14 10:32:26.083654] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:41.149 [2024-07-14 10:32:26.083693] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:41.149 [2024-07-14 10:32:26.083705] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:41.149 [2024-07-14 10:32:26.083711] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:41.149 [2024-07-14 10:32:26.083716] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
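The -32603 "Internal error" above is the target-side counterpart of the same permission rule: nvmf_subsystem_add_host could not retrieve the PSK while the key file was still world-readable. After chmod 0600 the test brings up a fresh target and repeats the setup, which corresponds roughly to the following RPC sequence (paths and arguments are the ones shown in the log; the RPC, KEY, and SPDK variables are shorthand added here):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK/scripts/rpc.py"
  KEY=/tmp/tmp.v8w5xsNwG6
  chmod 0600 "$KEY"                     # add_host rejects group/world-accessible PSK files
  $RPC nvmf_create_transport -t tcp -o  # TCP transport with default options
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  # -k enables the (experimental) TLS listener on the subsystem's TCP port
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  $RPC bdev_malloc_create 32 4096 -b malloc0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$KEY"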
00:23:41.149 [2024-07-14 10:32:26.083734] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:41.408 10:32:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:41.408 10:32:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:41.408 10:32:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:41.408 10:32:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:41.408 10:32:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:41.408 10:32:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:41.408 10:32:26 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.v8w5xsNwG6 00:23:41.408 10:32:26 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.v8w5xsNwG6 00:23:41.408 10:32:26 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:41.408 [2024-07-14 10:32:26.364069] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:41.408 10:32:26 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:41.666 10:32:26 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:41.925 [2024-07-14 10:32:26.696927] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:41.925 [2024-07-14 10:32:26.697108] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:41.925 10:32:26 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:41.925 malloc0 00:23:41.925 10:32:26 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:42.184 10:32:27 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.v8w5xsNwG6 00:23:42.443 [2024-07-14 10:32:27.210441] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:42.443 10:32:27 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=2454738 00:23:42.443 10:32:27 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:42.443 10:32:27 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:42.443 10:32:27 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 2454738 /var/tmp/bdevperf.sock 00:23:42.443 10:32:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2454738 ']' 00:23:42.443 10:32:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:42.443 10:32:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:42.443 10:32:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:42.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:42.443 10:32:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:42.443 10:32:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:42.443 [2024-07-14 10:32:27.269640] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:23:42.443 [2024-07-14 10:32:27.269690] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2454738 ] 00:23:42.443 EAL: No free 2048 kB hugepages reported on node 1 00:23:42.443 [2024-07-14 10:32:27.338429] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:42.443 [2024-07-14 10:32:27.377472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:42.703 10:32:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:42.703 10:32:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:42.703 10:32:27 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.v8w5xsNwG6 00:23:42.703 [2024-07-14 10:32:27.629644] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:42.703 [2024-07-14 10:32:27.629715] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:42.963 TLSTESTn1 00:23:42.963 10:32:27 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:23:43.223 10:32:27 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:23:43.223 "subsystems": [ 00:23:43.223 { 00:23:43.223 "subsystem": "keyring", 00:23:43.223 "config": [] 00:23:43.223 }, 00:23:43.223 { 00:23:43.223 "subsystem": "iobuf", 00:23:43.223 "config": [ 00:23:43.223 { 00:23:43.223 "method": "iobuf_set_options", 00:23:43.223 "params": { 00:23:43.223 "small_pool_count": 8192, 00:23:43.223 "large_pool_count": 1024, 00:23:43.223 "small_bufsize": 8192, 00:23:43.223 "large_bufsize": 135168 00:23:43.223 } 00:23:43.223 } 00:23:43.223 ] 00:23:43.223 }, 00:23:43.223 { 00:23:43.223 "subsystem": "sock", 00:23:43.223 "config": [ 00:23:43.223 { 00:23:43.223 "method": "sock_set_default_impl", 00:23:43.223 "params": { 00:23:43.223 "impl_name": "posix" 00:23:43.223 } 00:23:43.223 }, 00:23:43.223 { 00:23:43.223 "method": "sock_impl_set_options", 00:23:43.223 "params": { 00:23:43.223 "impl_name": "ssl", 00:23:43.223 "recv_buf_size": 4096, 00:23:43.223 "send_buf_size": 4096, 00:23:43.223 "enable_recv_pipe": true, 00:23:43.223 "enable_quickack": false, 00:23:43.223 "enable_placement_id": 0, 00:23:43.223 "enable_zerocopy_send_server": true, 00:23:43.223 "enable_zerocopy_send_client": false, 00:23:43.223 "zerocopy_threshold": 0, 00:23:43.223 "tls_version": 0, 00:23:43.223 "enable_ktls": false 00:23:43.223 } 00:23:43.223 }, 00:23:43.223 { 00:23:43.223 "method": "sock_impl_set_options", 00:23:43.223 "params": { 00:23:43.223 "impl_name": "posix", 00:23:43.223 "recv_buf_size": 2097152, 00:23:43.223 
"send_buf_size": 2097152, 00:23:43.223 "enable_recv_pipe": true, 00:23:43.223 "enable_quickack": false, 00:23:43.223 "enable_placement_id": 0, 00:23:43.223 "enable_zerocopy_send_server": true, 00:23:43.223 "enable_zerocopy_send_client": false, 00:23:43.223 "zerocopy_threshold": 0, 00:23:43.223 "tls_version": 0, 00:23:43.223 "enable_ktls": false 00:23:43.223 } 00:23:43.223 } 00:23:43.223 ] 00:23:43.223 }, 00:23:43.223 { 00:23:43.223 "subsystem": "vmd", 00:23:43.223 "config": [] 00:23:43.223 }, 00:23:43.223 { 00:23:43.223 "subsystem": "accel", 00:23:43.223 "config": [ 00:23:43.223 { 00:23:43.223 "method": "accel_set_options", 00:23:43.223 "params": { 00:23:43.223 "small_cache_size": 128, 00:23:43.223 "large_cache_size": 16, 00:23:43.223 "task_count": 2048, 00:23:43.223 "sequence_count": 2048, 00:23:43.223 "buf_count": 2048 00:23:43.223 } 00:23:43.223 } 00:23:43.223 ] 00:23:43.223 }, 00:23:43.223 { 00:23:43.223 "subsystem": "bdev", 00:23:43.223 "config": [ 00:23:43.223 { 00:23:43.223 "method": "bdev_set_options", 00:23:43.223 "params": { 00:23:43.223 "bdev_io_pool_size": 65535, 00:23:43.223 "bdev_io_cache_size": 256, 00:23:43.223 "bdev_auto_examine": true, 00:23:43.223 "iobuf_small_cache_size": 128, 00:23:43.223 "iobuf_large_cache_size": 16 00:23:43.223 } 00:23:43.223 }, 00:23:43.223 { 00:23:43.223 "method": "bdev_raid_set_options", 00:23:43.223 "params": { 00:23:43.223 "process_window_size_kb": 1024 00:23:43.223 } 00:23:43.223 }, 00:23:43.223 { 00:23:43.223 "method": "bdev_iscsi_set_options", 00:23:43.223 "params": { 00:23:43.223 "timeout_sec": 30 00:23:43.223 } 00:23:43.223 }, 00:23:43.223 { 00:23:43.223 "method": "bdev_nvme_set_options", 00:23:43.223 "params": { 00:23:43.223 "action_on_timeout": "none", 00:23:43.223 "timeout_us": 0, 00:23:43.223 "timeout_admin_us": 0, 00:23:43.223 "keep_alive_timeout_ms": 10000, 00:23:43.223 "arbitration_burst": 0, 00:23:43.223 "low_priority_weight": 0, 00:23:43.223 "medium_priority_weight": 0, 00:23:43.223 "high_priority_weight": 0, 00:23:43.223 "nvme_adminq_poll_period_us": 10000, 00:23:43.223 "nvme_ioq_poll_period_us": 0, 00:23:43.223 "io_queue_requests": 0, 00:23:43.223 "delay_cmd_submit": true, 00:23:43.223 "transport_retry_count": 4, 00:23:43.223 "bdev_retry_count": 3, 00:23:43.223 "transport_ack_timeout": 0, 00:23:43.223 "ctrlr_loss_timeout_sec": 0, 00:23:43.223 "reconnect_delay_sec": 0, 00:23:43.223 "fast_io_fail_timeout_sec": 0, 00:23:43.223 "disable_auto_failback": false, 00:23:43.223 "generate_uuids": false, 00:23:43.223 "transport_tos": 0, 00:23:43.223 "nvme_error_stat": false, 00:23:43.223 "rdma_srq_size": 0, 00:23:43.223 "io_path_stat": false, 00:23:43.223 "allow_accel_sequence": false, 00:23:43.223 "rdma_max_cq_size": 0, 00:23:43.223 "rdma_cm_event_timeout_ms": 0, 00:23:43.223 "dhchap_digests": [ 00:23:43.223 "sha256", 00:23:43.223 "sha384", 00:23:43.223 "sha512" 00:23:43.223 ], 00:23:43.223 "dhchap_dhgroups": [ 00:23:43.223 "null", 00:23:43.223 "ffdhe2048", 00:23:43.223 "ffdhe3072", 00:23:43.223 "ffdhe4096", 00:23:43.223 "ffdhe6144", 00:23:43.223 "ffdhe8192" 00:23:43.223 ] 00:23:43.223 } 00:23:43.223 }, 00:23:43.223 { 00:23:43.224 "method": "bdev_nvme_set_hotplug", 00:23:43.224 "params": { 00:23:43.224 "period_us": 100000, 00:23:43.224 "enable": false 00:23:43.224 } 00:23:43.224 }, 00:23:43.224 { 00:23:43.224 "method": "bdev_malloc_create", 00:23:43.224 "params": { 00:23:43.224 "name": "malloc0", 00:23:43.224 "num_blocks": 8192, 00:23:43.224 "block_size": 4096, 00:23:43.224 "physical_block_size": 4096, 00:23:43.224 "uuid": 
"3c3cc3cb-99ab-40b6-8dbf-9b12808ca772", 00:23:43.224 "optimal_io_boundary": 0 00:23:43.224 } 00:23:43.224 }, 00:23:43.224 { 00:23:43.224 "method": "bdev_wait_for_examine" 00:23:43.224 } 00:23:43.224 ] 00:23:43.224 }, 00:23:43.224 { 00:23:43.224 "subsystem": "nbd", 00:23:43.224 "config": [] 00:23:43.224 }, 00:23:43.224 { 00:23:43.224 "subsystem": "scheduler", 00:23:43.224 "config": [ 00:23:43.224 { 00:23:43.224 "method": "framework_set_scheduler", 00:23:43.224 "params": { 00:23:43.224 "name": "static" 00:23:43.224 } 00:23:43.224 } 00:23:43.224 ] 00:23:43.224 }, 00:23:43.224 { 00:23:43.224 "subsystem": "nvmf", 00:23:43.224 "config": [ 00:23:43.224 { 00:23:43.224 "method": "nvmf_set_config", 00:23:43.224 "params": { 00:23:43.224 "discovery_filter": "match_any", 00:23:43.224 "admin_cmd_passthru": { 00:23:43.224 "identify_ctrlr": false 00:23:43.224 } 00:23:43.224 } 00:23:43.224 }, 00:23:43.224 { 00:23:43.224 "method": "nvmf_set_max_subsystems", 00:23:43.224 "params": { 00:23:43.224 "max_subsystems": 1024 00:23:43.224 } 00:23:43.224 }, 00:23:43.224 { 00:23:43.224 "method": "nvmf_set_crdt", 00:23:43.224 "params": { 00:23:43.224 "crdt1": 0, 00:23:43.224 "crdt2": 0, 00:23:43.224 "crdt3": 0 00:23:43.224 } 00:23:43.224 }, 00:23:43.224 { 00:23:43.224 "method": "nvmf_create_transport", 00:23:43.224 "params": { 00:23:43.224 "trtype": "TCP", 00:23:43.224 "max_queue_depth": 128, 00:23:43.224 "max_io_qpairs_per_ctrlr": 127, 00:23:43.224 "in_capsule_data_size": 4096, 00:23:43.224 "max_io_size": 131072, 00:23:43.224 "io_unit_size": 131072, 00:23:43.224 "max_aq_depth": 128, 00:23:43.224 "num_shared_buffers": 511, 00:23:43.224 "buf_cache_size": 4294967295, 00:23:43.224 "dif_insert_or_strip": false, 00:23:43.224 "zcopy": false, 00:23:43.224 "c2h_success": false, 00:23:43.224 "sock_priority": 0, 00:23:43.224 "abort_timeout_sec": 1, 00:23:43.224 "ack_timeout": 0, 00:23:43.224 "data_wr_pool_size": 0 00:23:43.224 } 00:23:43.224 }, 00:23:43.224 { 00:23:43.224 "method": "nvmf_create_subsystem", 00:23:43.224 "params": { 00:23:43.224 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:43.224 "allow_any_host": false, 00:23:43.224 "serial_number": "SPDK00000000000001", 00:23:43.224 "model_number": "SPDK bdev Controller", 00:23:43.224 "max_namespaces": 10, 00:23:43.224 "min_cntlid": 1, 00:23:43.224 "max_cntlid": 65519, 00:23:43.224 "ana_reporting": false 00:23:43.224 } 00:23:43.224 }, 00:23:43.224 { 00:23:43.224 "method": "nvmf_subsystem_add_host", 00:23:43.224 "params": { 00:23:43.224 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:43.224 "host": "nqn.2016-06.io.spdk:host1", 00:23:43.224 "psk": "/tmp/tmp.v8w5xsNwG6" 00:23:43.224 } 00:23:43.224 }, 00:23:43.224 { 00:23:43.224 "method": "nvmf_subsystem_add_ns", 00:23:43.224 "params": { 00:23:43.224 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:43.224 "namespace": { 00:23:43.224 "nsid": 1, 00:23:43.224 "bdev_name": "malloc0", 00:23:43.224 "nguid": "3C3CC3CB99AB40B68DBF9B12808CA772", 00:23:43.224 "uuid": "3c3cc3cb-99ab-40b6-8dbf-9b12808ca772", 00:23:43.224 "no_auto_visible": false 00:23:43.224 } 00:23:43.224 } 00:23:43.224 }, 00:23:43.224 { 00:23:43.224 "method": "nvmf_subsystem_add_listener", 00:23:43.224 "params": { 00:23:43.224 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:43.224 "listen_address": { 00:23:43.224 "trtype": "TCP", 00:23:43.224 "adrfam": "IPv4", 00:23:43.224 "traddr": "10.0.0.2", 00:23:43.224 "trsvcid": "4420" 00:23:43.224 }, 00:23:43.224 "secure_channel": true 00:23:43.224 } 00:23:43.224 } 00:23:43.224 ] 00:23:43.224 } 00:23:43.224 ] 00:23:43.224 }' 00:23:43.224 10:32:27 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:43.484 10:32:28 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:23:43.484 "subsystems": [ 00:23:43.484 { 00:23:43.484 "subsystem": "keyring", 00:23:43.484 "config": [] 00:23:43.484 }, 00:23:43.484 { 00:23:43.484 "subsystem": "iobuf", 00:23:43.484 "config": [ 00:23:43.484 { 00:23:43.484 "method": "iobuf_set_options", 00:23:43.484 "params": { 00:23:43.484 "small_pool_count": 8192, 00:23:43.484 "large_pool_count": 1024, 00:23:43.484 "small_bufsize": 8192, 00:23:43.484 "large_bufsize": 135168 00:23:43.484 } 00:23:43.484 } 00:23:43.484 ] 00:23:43.484 }, 00:23:43.484 { 00:23:43.484 "subsystem": "sock", 00:23:43.484 "config": [ 00:23:43.484 { 00:23:43.484 "method": "sock_set_default_impl", 00:23:43.484 "params": { 00:23:43.484 "impl_name": "posix" 00:23:43.484 } 00:23:43.484 }, 00:23:43.484 { 00:23:43.484 "method": "sock_impl_set_options", 00:23:43.484 "params": { 00:23:43.484 "impl_name": "ssl", 00:23:43.484 "recv_buf_size": 4096, 00:23:43.484 "send_buf_size": 4096, 00:23:43.484 "enable_recv_pipe": true, 00:23:43.484 "enable_quickack": false, 00:23:43.484 "enable_placement_id": 0, 00:23:43.484 "enable_zerocopy_send_server": true, 00:23:43.484 "enable_zerocopy_send_client": false, 00:23:43.484 "zerocopy_threshold": 0, 00:23:43.484 "tls_version": 0, 00:23:43.484 "enable_ktls": false 00:23:43.484 } 00:23:43.484 }, 00:23:43.484 { 00:23:43.484 "method": "sock_impl_set_options", 00:23:43.484 "params": { 00:23:43.484 "impl_name": "posix", 00:23:43.484 "recv_buf_size": 2097152, 00:23:43.484 "send_buf_size": 2097152, 00:23:43.484 "enable_recv_pipe": true, 00:23:43.484 "enable_quickack": false, 00:23:43.484 "enable_placement_id": 0, 00:23:43.484 "enable_zerocopy_send_server": true, 00:23:43.484 "enable_zerocopy_send_client": false, 00:23:43.484 "zerocopy_threshold": 0, 00:23:43.484 "tls_version": 0, 00:23:43.484 "enable_ktls": false 00:23:43.484 } 00:23:43.484 } 00:23:43.484 ] 00:23:43.484 }, 00:23:43.484 { 00:23:43.484 "subsystem": "vmd", 00:23:43.484 "config": [] 00:23:43.484 }, 00:23:43.484 { 00:23:43.484 "subsystem": "accel", 00:23:43.484 "config": [ 00:23:43.484 { 00:23:43.484 "method": "accel_set_options", 00:23:43.484 "params": { 00:23:43.484 "small_cache_size": 128, 00:23:43.484 "large_cache_size": 16, 00:23:43.484 "task_count": 2048, 00:23:43.484 "sequence_count": 2048, 00:23:43.484 "buf_count": 2048 00:23:43.484 } 00:23:43.484 } 00:23:43.484 ] 00:23:43.484 }, 00:23:43.484 { 00:23:43.484 "subsystem": "bdev", 00:23:43.484 "config": [ 00:23:43.484 { 00:23:43.484 "method": "bdev_set_options", 00:23:43.484 "params": { 00:23:43.484 "bdev_io_pool_size": 65535, 00:23:43.484 "bdev_io_cache_size": 256, 00:23:43.484 "bdev_auto_examine": true, 00:23:43.484 "iobuf_small_cache_size": 128, 00:23:43.484 "iobuf_large_cache_size": 16 00:23:43.484 } 00:23:43.484 }, 00:23:43.484 { 00:23:43.484 "method": "bdev_raid_set_options", 00:23:43.484 "params": { 00:23:43.484 "process_window_size_kb": 1024 00:23:43.484 } 00:23:43.484 }, 00:23:43.484 { 00:23:43.484 "method": "bdev_iscsi_set_options", 00:23:43.484 "params": { 00:23:43.484 "timeout_sec": 30 00:23:43.484 } 00:23:43.484 }, 00:23:43.484 { 00:23:43.484 "method": "bdev_nvme_set_options", 00:23:43.484 "params": { 00:23:43.484 "action_on_timeout": "none", 00:23:43.484 "timeout_us": 0, 00:23:43.484 "timeout_admin_us": 0, 00:23:43.484 "keep_alive_timeout_ms": 10000, 00:23:43.484 "arbitration_burst": 0, 
00:23:43.484 "low_priority_weight": 0, 00:23:43.484 "medium_priority_weight": 0, 00:23:43.484 "high_priority_weight": 0, 00:23:43.484 "nvme_adminq_poll_period_us": 10000, 00:23:43.484 "nvme_ioq_poll_period_us": 0, 00:23:43.484 "io_queue_requests": 512, 00:23:43.484 "delay_cmd_submit": true, 00:23:43.484 "transport_retry_count": 4, 00:23:43.484 "bdev_retry_count": 3, 00:23:43.484 "transport_ack_timeout": 0, 00:23:43.484 "ctrlr_loss_timeout_sec": 0, 00:23:43.484 "reconnect_delay_sec": 0, 00:23:43.484 "fast_io_fail_timeout_sec": 0, 00:23:43.484 "disable_auto_failback": false, 00:23:43.484 "generate_uuids": false, 00:23:43.484 "transport_tos": 0, 00:23:43.484 "nvme_error_stat": false, 00:23:43.484 "rdma_srq_size": 0, 00:23:43.484 "io_path_stat": false, 00:23:43.484 "allow_accel_sequence": false, 00:23:43.484 "rdma_max_cq_size": 0, 00:23:43.484 "rdma_cm_event_timeout_ms": 0, 00:23:43.484 "dhchap_digests": [ 00:23:43.484 "sha256", 00:23:43.484 "sha384", 00:23:43.484 "sha512" 00:23:43.484 ], 00:23:43.484 "dhchap_dhgroups": [ 00:23:43.484 "null", 00:23:43.484 "ffdhe2048", 00:23:43.484 "ffdhe3072", 00:23:43.484 "ffdhe4096", 00:23:43.484 "ffdhe6144", 00:23:43.484 "ffdhe8192" 00:23:43.484 ] 00:23:43.484 } 00:23:43.484 }, 00:23:43.484 { 00:23:43.484 "method": "bdev_nvme_attach_controller", 00:23:43.484 "params": { 00:23:43.484 "name": "TLSTEST", 00:23:43.484 "trtype": "TCP", 00:23:43.484 "adrfam": "IPv4", 00:23:43.484 "traddr": "10.0.0.2", 00:23:43.484 "trsvcid": "4420", 00:23:43.484 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:43.484 "prchk_reftag": false, 00:23:43.484 "prchk_guard": false, 00:23:43.484 "ctrlr_loss_timeout_sec": 0, 00:23:43.484 "reconnect_delay_sec": 0, 00:23:43.484 "fast_io_fail_timeout_sec": 0, 00:23:43.484 "psk": "/tmp/tmp.v8w5xsNwG6", 00:23:43.484 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:43.484 "hdgst": false, 00:23:43.484 "ddgst": false 00:23:43.484 } 00:23:43.484 }, 00:23:43.484 { 00:23:43.484 "method": "bdev_nvme_set_hotplug", 00:23:43.484 "params": { 00:23:43.484 "period_us": 100000, 00:23:43.484 "enable": false 00:23:43.484 } 00:23:43.484 }, 00:23:43.484 { 00:23:43.484 "method": "bdev_wait_for_examine" 00:23:43.484 } 00:23:43.484 ] 00:23:43.484 }, 00:23:43.484 { 00:23:43.484 "subsystem": "nbd", 00:23:43.484 "config": [] 00:23:43.484 } 00:23:43.484 ] 00:23:43.484 }' 00:23:43.485 10:32:28 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 2454738 00:23:43.485 10:32:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2454738 ']' 00:23:43.485 10:32:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2454738 00:23:43.485 10:32:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:43.485 10:32:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:43.485 10:32:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2454738 00:23:43.485 10:32:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:43.485 10:32:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:43.485 10:32:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2454738' 00:23:43.485 killing process with pid 2454738 00:23:43.485 10:32:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2454738 00:23:43.485 Received shutdown signal, test time was about 10.000000 seconds 00:23:43.485 00:23:43.485 Latency(us) 00:23:43.485 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:23:43.485 =================================================================================================================== 00:23:43.485 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:43.485 [2024-07-14 10:32:28.264919] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:43.485 10:32:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2454738 00:23:43.485 10:32:28 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 2454508 00:23:43.485 10:32:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2454508 ']' 00:23:43.485 10:32:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2454508 00:23:43.485 10:32:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:43.485 10:32:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:43.485 10:32:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2454508 00:23:43.744 10:32:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:43.744 10:32:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:43.744 10:32:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2454508' 00:23:43.744 killing process with pid 2454508 00:23:43.744 10:32:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2454508 00:23:43.744 [2024-07-14 10:32:28.478915] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:43.744 10:32:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2454508 00:23:43.744 10:32:28 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:43.744 10:32:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:43.744 10:32:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:43.744 10:32:28 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:23:43.744 "subsystems": [ 00:23:43.744 { 00:23:43.744 "subsystem": "keyring", 00:23:43.744 "config": [] 00:23:43.744 }, 00:23:43.744 { 00:23:43.744 "subsystem": "iobuf", 00:23:43.744 "config": [ 00:23:43.744 { 00:23:43.744 "method": "iobuf_set_options", 00:23:43.744 "params": { 00:23:43.744 "small_pool_count": 8192, 00:23:43.744 "large_pool_count": 1024, 00:23:43.744 "small_bufsize": 8192, 00:23:43.744 "large_bufsize": 135168 00:23:43.744 } 00:23:43.744 } 00:23:43.744 ] 00:23:43.744 }, 00:23:43.744 { 00:23:43.744 "subsystem": "sock", 00:23:43.744 "config": [ 00:23:43.744 { 00:23:43.744 "method": "sock_set_default_impl", 00:23:43.744 "params": { 00:23:43.744 "impl_name": "posix" 00:23:43.744 } 00:23:43.744 }, 00:23:43.744 { 00:23:43.744 "method": "sock_impl_set_options", 00:23:43.744 "params": { 00:23:43.744 "impl_name": "ssl", 00:23:43.744 "recv_buf_size": 4096, 00:23:43.744 "send_buf_size": 4096, 00:23:43.744 "enable_recv_pipe": true, 00:23:43.744 "enable_quickack": false, 00:23:43.744 "enable_placement_id": 0, 00:23:43.744 "enable_zerocopy_send_server": true, 00:23:43.744 "enable_zerocopy_send_client": false, 00:23:43.744 "zerocopy_threshold": 0, 00:23:43.744 "tls_version": 0, 00:23:43.744 "enable_ktls": false 00:23:43.744 } 00:23:43.744 }, 00:23:43.744 { 00:23:43.744 "method": "sock_impl_set_options", 00:23:43.744 "params": { 00:23:43.744 "impl_name": "posix", 00:23:43.744 
"recv_buf_size": 2097152, 00:23:43.744 "send_buf_size": 2097152, 00:23:43.744 "enable_recv_pipe": true, 00:23:43.744 "enable_quickack": false, 00:23:43.744 "enable_placement_id": 0, 00:23:43.744 "enable_zerocopy_send_server": true, 00:23:43.744 "enable_zerocopy_send_client": false, 00:23:43.744 "zerocopy_threshold": 0, 00:23:43.744 "tls_version": 0, 00:23:43.744 "enable_ktls": false 00:23:43.744 } 00:23:43.744 } 00:23:43.744 ] 00:23:43.744 }, 00:23:43.744 { 00:23:43.744 "subsystem": "vmd", 00:23:43.744 "config": [] 00:23:43.744 }, 00:23:43.744 { 00:23:43.744 "subsystem": "accel", 00:23:43.744 "config": [ 00:23:43.744 { 00:23:43.744 "method": "accel_set_options", 00:23:43.744 "params": { 00:23:43.744 "small_cache_size": 128, 00:23:43.744 "large_cache_size": 16, 00:23:43.745 "task_count": 2048, 00:23:43.745 "sequence_count": 2048, 00:23:43.745 "buf_count": 2048 00:23:43.745 } 00:23:43.745 } 00:23:43.745 ] 00:23:43.745 }, 00:23:43.745 { 00:23:43.745 "subsystem": "bdev", 00:23:43.745 "config": [ 00:23:43.745 { 00:23:43.745 "method": "bdev_set_options", 00:23:43.745 "params": { 00:23:43.745 "bdev_io_pool_size": 65535, 00:23:43.745 "bdev_io_cache_size": 256, 00:23:43.745 "bdev_auto_examine": true, 00:23:43.745 "iobuf_small_cache_size": 128, 00:23:43.745 "iobuf_large_cache_size": 16 00:23:43.745 } 00:23:43.745 }, 00:23:43.745 { 00:23:43.745 "method": "bdev_raid_set_options", 00:23:43.745 "params": { 00:23:43.745 "process_window_size_kb": 1024 00:23:43.745 } 00:23:43.745 }, 00:23:43.745 { 00:23:43.745 "method": "bdev_iscsi_set_options", 00:23:43.745 "params": { 00:23:43.745 "timeout_sec": 30 00:23:43.745 } 00:23:43.745 }, 00:23:43.745 { 00:23:43.745 "method": "bdev_nvme_set_options", 00:23:43.745 "params": { 00:23:43.745 "action_on_timeout": "none", 00:23:43.745 "timeout_us": 0, 00:23:43.745 "timeout_admin_us": 0, 00:23:43.745 "keep_alive_timeout_ms": 10000, 00:23:43.745 "arbitration_burst": 0, 00:23:43.745 "low_priority_weight": 0, 00:23:43.745 "medium_priority_weight": 0, 00:23:43.745 "high_priority_weight": 0, 00:23:43.745 "nvme_adminq_poll_period_us": 10000, 00:23:43.745 "nvme_ioq_poll_period_us": 0, 00:23:43.745 "io_queue_requests": 0, 00:23:43.745 "delay_cmd_submit": true, 00:23:43.745 "transport_retry_count": 4, 00:23:43.745 "bdev_retry_count": 3, 00:23:43.745 "transport_ack_timeout": 0, 00:23:43.745 "ctrlr_loss_timeout_sec": 0, 00:23:43.745 "reconnect_delay_sec": 0, 00:23:43.745 "fast_io_fail_timeout_sec": 0, 00:23:43.745 "disable_auto_failback": false, 00:23:43.745 "generate_uuids": false, 00:23:43.745 "transport_tos": 0, 00:23:43.745 "nvme_error_stat": false, 00:23:43.745 "rdma_srq_size": 0, 00:23:43.745 "io_path_stat": false, 00:23:43.745 "allow_accel_sequence": false, 00:23:43.745 "rdma_max_cq_size": 0, 00:23:43.745 "rdma_cm_event_timeout_ms": 0, 00:23:43.745 "dhchap_digests": [ 00:23:43.745 "sha256", 00:23:43.745 "sha384", 00:23:43.745 "sha512" 00:23:43.745 ], 00:23:43.745 "dhchap_dhgroups": [ 00:23:43.745 "null", 00:23:43.745 "ffdhe2048", 00:23:43.745 "ffdhe3072", 00:23:43.745 "ffdhe4096", 00:23:43.745 "ffdhe6144", 00:23:43.745 "ffdhe8192" 00:23:43.745 ] 00:23:43.745 } 00:23:43.745 }, 00:23:43.745 { 00:23:43.745 "method": "bdev_nvme_set_hotplug", 00:23:43.745 "params": { 00:23:43.745 "period_us": 100000, 00:23:43.745 "enable": false 00:23:43.745 } 00:23:43.745 }, 00:23:43.745 { 00:23:43.745 "method": "bdev_malloc_create", 00:23:43.745 "params": { 00:23:43.745 "name": "malloc0", 00:23:43.745 "num_blocks": 8192, 00:23:43.745 "block_size": 4096, 00:23:43.745 "physical_block_size": 4096, 
00:23:43.745 "uuid": "3c3cc3cb-99ab-40b6-8dbf-9b12808ca772", 00:23:43.745 "optimal_io_boundary": 0 00:23:43.745 } 00:23:43.745 }, 00:23:43.745 { 00:23:43.745 "method": "bdev_wait_for_examine" 00:23:43.745 } 00:23:43.745 ] 00:23:43.745 }, 00:23:43.745 { 00:23:43.745 "subsystem": "nbd", 00:23:43.745 "config": [] 00:23:43.745 }, 00:23:43.745 { 00:23:43.745 "subsystem": "scheduler", 00:23:43.745 "config": [ 00:23:43.745 { 00:23:43.745 "method": "framework_set_scheduler", 00:23:43.745 "params": { 00:23:43.745 "name": "static" 00:23:43.745 } 00:23:43.745 } 00:23:43.745 ] 00:23:43.745 }, 00:23:43.745 { 00:23:43.745 "subsystem": "nvmf", 00:23:43.745 "config": [ 00:23:43.745 { 00:23:43.745 "method": "nvmf_set_config", 00:23:43.745 "params": { 00:23:43.745 "discovery_filter": "match_any", 00:23:43.745 "admin_cmd_passthru": { 00:23:43.745 "identify_ctrlr": false 00:23:43.745 } 00:23:43.745 } 00:23:43.745 }, 00:23:43.745 { 00:23:43.745 "method": "nvmf_set_max_subsystems", 00:23:43.745 "params": { 00:23:43.745 "max_subsystems": 1024 00:23:43.745 } 00:23:43.745 }, 00:23:43.745 { 00:23:43.745 "method": "nvmf_set_crdt", 00:23:43.745 "params": { 00:23:43.745 "crdt1": 0, 00:23:43.745 "crdt2": 0, 00:23:43.745 "crdt3": 0 00:23:43.745 } 00:23:43.745 }, 00:23:43.745 { 00:23:43.745 "method": "nvmf_create_transport", 00:23:43.745 "params": { 00:23:43.745 "trtype": "TCP", 00:23:43.745 "max_queue_depth": 128, 00:23:43.745 "max_io_qpairs_per_ctrlr": 127, 00:23:43.745 "in_capsule_data_size": 4096, 00:23:43.745 "max_io_size": 131072, 00:23:43.745 "io_unit_size": 131072, 00:23:43.745 "max_aq_depth": 128, 00:23:43.745 "num_shared_buffers": 511, 00:23:43.745 "buf_cache_size": 4294967295, 00:23:43.745 "dif_insert_or_strip": false, 00:23:43.745 "zcopy": false, 00:23:43.745 "c2h_success": false, 00:23:43.745 "sock_priority": 0, 00:23:43.745 "abort_timeout_sec": 1, 00:23:43.745 "ack_timeout": 0, 00:23:43.745 "data_wr_pool_size": 0 00:23:43.745 } 00:23:43.745 }, 00:23:43.745 { 00:23:43.745 "method": "nvmf_create_subsystem", 00:23:43.745 "params": { 00:23:43.745 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:43.745 "allow_any_host": false, 00:23:43.745 "serial_number": "SPDK00000000000001", 00:23:43.745 "model_number": "SPDK bdev Controller", 00:23:43.745 "max_namespaces": 10, 00:23:43.745 "min_cntlid": 1, 00:23:43.745 "max_cntlid": 65519, 00:23:43.745 "ana_reporting": false 00:23:43.745 } 00:23:43.745 }, 00:23:43.745 { 00:23:43.745 "method": "nvmf_subsystem_add_host", 00:23:43.745 "params": { 00:23:43.745 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:43.745 "host": "nqn.2016-06.io.spdk:host1", 00:23:43.745 "psk": "/tmp/tmp.v8w5xsNwG6" 00:23:43.745 } 00:23:43.745 }, 00:23:43.745 { 00:23:43.745 "method": "nvmf_subsystem_add_ns", 00:23:43.745 "params": { 00:23:43.745 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:43.745 "namespace": { 00:23:43.745 "nsid": 1, 00:23:43.745 "bdev_name": "malloc0", 00:23:43.745 "nguid": "3C3CC3CB99AB40B68DBF9B12808CA772", 00:23:43.745 "uuid": "3c3cc3cb-99ab-40b6-8dbf-9b12808ca772", 00:23:43.745 "no_auto_visible": false 00:23:43.745 } 00:23:43.745 } 00:23:43.745 }, 00:23:43.745 { 00:23:43.745 "method": "nvmf_subsystem_add_listener", 00:23:43.745 "params": { 00:23:43.745 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:43.745 "listen_address": { 00:23:43.745 "trtype": "TCP", 00:23:43.745 "adrfam": "IPv4", 00:23:43.745 "traddr": "10.0.0.2", 00:23:43.745 "trsvcid": "4420" 00:23:43.745 }, 00:23:43.745 "secure_channel": true 00:23:43.745 } 00:23:43.745 } 00:23:43.745 ] 00:23:43.745 } 00:23:43.745 ] 00:23:43.745 }' 
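The JSON block echoed above is the save_config output captured from the first target, and the nvmfappstart invocation feeds it back through file descriptor 62 (-c /dev/fd/62). The mechanism is presumably ordinary shell process substitution; a sketch of the pattern under that assumption, reusing the paths from the log:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # capture the live target configuration (transport, subsystem, TLS listener, namespace, PSK host) as JSON
  tgtconf=$($SPDK/scripts/rpc.py save_config)
  # replay it into a fresh target; the shell exposes the <(...) stream as /dev/fd/<n>, seen here as /dev/fd/62
  ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf")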
00:23:43.745 10:32:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:43.745 10:32:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2454981 00:23:43.745 10:32:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2454981 00:23:43.745 10:32:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:43.745 10:32:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2454981 ']' 00:23:43.745 10:32:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:43.745 10:32:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:43.745 10:32:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:43.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:43.745 10:32:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:43.745 10:32:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:43.745 [2024-07-14 10:32:28.709946] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:23:43.745 [2024-07-14 10:32:28.709993] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:44.005 EAL: No free 2048 kB hugepages reported on node 1 00:23:44.005 [2024-07-14 10:32:28.766475] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:44.005 [2024-07-14 10:32:28.805515] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:44.005 [2024-07-14 10:32:28.805556] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:44.005 [2024-07-14 10:32:28.805563] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:44.005 [2024-07-14 10:32:28.805569] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:44.005 [2024-07-14 10:32:28.805574] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:44.005 [2024-07-14 10:32:28.805629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:44.264 [2024-07-14 10:32:29.003161] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:44.264 [2024-07-14 10:32:29.019133] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:44.264 [2024-07-14 10:32:29.035187] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:44.264 [2024-07-14 10:32:29.048347] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:44.833 10:32:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:44.833 10:32:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:44.833 10:32:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:44.833 10:32:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:44.833 10:32:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:44.833 10:32:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:44.833 10:32:29 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=2455223 00:23:44.833 10:32:29 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 2455223 /var/tmp/bdevperf.sock 00:23:44.833 10:32:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2455223 ']' 00:23:44.833 10:32:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:44.833 10:32:29 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:23:44.833 10:32:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:44.833 10:32:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:44.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:44.833 10:32:29 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:23:44.833 "subsystems": [ 00:23:44.833 { 00:23:44.833 "subsystem": "keyring", 00:23:44.833 "config": [] 00:23:44.833 }, 00:23:44.833 { 00:23:44.833 "subsystem": "iobuf", 00:23:44.833 "config": [ 00:23:44.833 { 00:23:44.833 "method": "iobuf_set_options", 00:23:44.833 "params": { 00:23:44.833 "small_pool_count": 8192, 00:23:44.833 "large_pool_count": 1024, 00:23:44.833 "small_bufsize": 8192, 00:23:44.833 "large_bufsize": 135168 00:23:44.833 } 00:23:44.833 } 00:23:44.833 ] 00:23:44.833 }, 00:23:44.833 { 00:23:44.833 "subsystem": "sock", 00:23:44.833 "config": [ 00:23:44.833 { 00:23:44.833 "method": "sock_set_default_impl", 00:23:44.833 "params": { 00:23:44.833 "impl_name": "posix" 00:23:44.833 } 00:23:44.833 }, 00:23:44.833 { 00:23:44.833 "method": "sock_impl_set_options", 00:23:44.833 "params": { 00:23:44.833 "impl_name": "ssl", 00:23:44.833 "recv_buf_size": 4096, 00:23:44.833 "send_buf_size": 4096, 00:23:44.833 "enable_recv_pipe": true, 00:23:44.833 "enable_quickack": false, 00:23:44.833 "enable_placement_id": 0, 00:23:44.833 "enable_zerocopy_send_server": true, 00:23:44.833 "enable_zerocopy_send_client": false, 00:23:44.833 "zerocopy_threshold": 0, 00:23:44.833 "tls_version": 0, 00:23:44.833 "enable_ktls": false 00:23:44.833 } 00:23:44.833 }, 00:23:44.833 { 00:23:44.833 "method": "sock_impl_set_options", 00:23:44.833 "params": { 00:23:44.833 "impl_name": "posix", 00:23:44.833 "recv_buf_size": 2097152, 00:23:44.833 "send_buf_size": 2097152, 00:23:44.833 "enable_recv_pipe": true, 00:23:44.833 "enable_quickack": false, 00:23:44.833 "enable_placement_id": 0, 00:23:44.833 "enable_zerocopy_send_server": true, 00:23:44.833 "enable_zerocopy_send_client": false, 00:23:44.833 "zerocopy_threshold": 0, 00:23:44.833 "tls_version": 0, 00:23:44.833 "enable_ktls": false 00:23:44.833 } 00:23:44.833 } 00:23:44.833 ] 00:23:44.833 }, 00:23:44.833 { 00:23:44.833 "subsystem": "vmd", 00:23:44.833 "config": [] 00:23:44.833 }, 00:23:44.833 { 00:23:44.833 "subsystem": "accel", 00:23:44.833 "config": [ 00:23:44.833 { 00:23:44.833 "method": "accel_set_options", 00:23:44.833 "params": { 00:23:44.833 "small_cache_size": 128, 00:23:44.833 "large_cache_size": 16, 00:23:44.833 "task_count": 2048, 00:23:44.833 "sequence_count": 2048, 00:23:44.833 "buf_count": 2048 00:23:44.833 } 00:23:44.833 } 00:23:44.833 ] 00:23:44.833 }, 00:23:44.833 { 00:23:44.833 "subsystem": "bdev", 00:23:44.833 "config": [ 00:23:44.833 { 00:23:44.833 "method": "bdev_set_options", 00:23:44.833 "params": { 00:23:44.833 "bdev_io_pool_size": 65535, 00:23:44.833 "bdev_io_cache_size": 256, 00:23:44.833 "bdev_auto_examine": true, 00:23:44.833 "iobuf_small_cache_size": 128, 00:23:44.833 "iobuf_large_cache_size": 16 00:23:44.833 } 00:23:44.833 }, 00:23:44.833 { 00:23:44.833 "method": "bdev_raid_set_options", 00:23:44.833 "params": { 00:23:44.833 "process_window_size_kb": 1024 00:23:44.833 } 00:23:44.833 }, 00:23:44.833 { 00:23:44.833 "method": "bdev_iscsi_set_options", 00:23:44.833 "params": { 00:23:44.833 "timeout_sec": 30 00:23:44.833 } 00:23:44.833 }, 00:23:44.833 { 00:23:44.833 "method": "bdev_nvme_set_options", 00:23:44.833 "params": { 00:23:44.833 "action_on_timeout": "none", 00:23:44.833 "timeout_us": 0, 00:23:44.833 "timeout_admin_us": 0, 00:23:44.833 "keep_alive_timeout_ms": 10000, 00:23:44.833 "arbitration_burst": 0, 00:23:44.833 "low_priority_weight": 0, 00:23:44.833 "medium_priority_weight": 0, 00:23:44.833 "high_priority_weight": 0, 00:23:44.833 
"nvme_adminq_poll_period_us": 10000, 00:23:44.833 "nvme_ioq_poll_period_us": 0, 00:23:44.833 "io_queue_requests": 512, 00:23:44.833 "delay_cmd_submit": true, 00:23:44.833 "transport_retry_count": 4, 00:23:44.833 "bdev_retry_count": 3, 00:23:44.833 "transport_ack_timeout": 0, 00:23:44.833 "ctrlr_loss_timeout_sec": 0, 00:23:44.833 "reconnect_delay_sec": 0, 00:23:44.833 "fast_io_fail_timeout_sec": 0, 00:23:44.833 "disable_auto_failback": false, 00:23:44.833 "generate_uuids": false, 00:23:44.833 "transport_tos": 0, 00:23:44.833 "nvme_error_stat": false, 00:23:44.833 "rdma_srq_size": 0, 00:23:44.833 "io_path_stat": false, 00:23:44.833 "allow_accel_sequence": false, 00:23:44.833 "rdma_max_cq_size": 0, 00:23:44.833 "rdma_cm_event_timeout_ms": 0, 00:23:44.833 "dhchap_digests": [ 00:23:44.833 "sha256", 00:23:44.833 "sha384", 00:23:44.834 "sha512" 00:23:44.834 ], 00:23:44.834 "dhchap_dhgroups": [ 00:23:44.834 "null", 00:23:44.834 "ffdhe2048", 00:23:44.834 "ffdhe3072", 00:23:44.834 "ffdhe4096", 00:23:44.834 "ffdhe6144", 00:23:44.834 "ffdhe8192" 00:23:44.834 ] 00:23:44.834 } 00:23:44.834 }, 00:23:44.834 { 00:23:44.834 "method": "bdev_nvme_attach_controller", 00:23:44.834 "params": { 00:23:44.834 "name": "TLSTEST", 00:23:44.834 "trtype": "TCP", 00:23:44.834 "adrfam": "IPv4", 00:23:44.834 "traddr": "10.0.0.2", 00:23:44.834 "trsvcid": "4420", 00:23:44.834 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:44.834 "prchk_reftag": false, 00:23:44.834 "prchk_guard": false, 00:23:44.834 "ctrlr_loss_timeout_sec": 0, 00:23:44.834 "reconnect_delay_sec": 0, 00:23:44.834 "fast_io_fail_timeout_sec": 0, 00:23:44.834 "psk": "/tmp/tmp.v8w5xsNwG6", 00:23:44.834 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:44.834 "hdgst": false, 00:23:44.834 "ddgst": false 00:23:44.834 } 00:23:44.834 }, 00:23:44.834 { 00:23:44.834 "method": "bdev_nvme_set_hotplug", 00:23:44.834 "params": { 00:23:44.834 "period_us": 100000, 00:23:44.834 "enable": false 00:23:44.834 } 00:23:44.834 }, 00:23:44.834 { 00:23:44.834 "method": "bdev_wait_for_examine" 00:23:44.834 } 00:23:44.834 ] 00:23:44.834 }, 00:23:44.834 { 00:23:44.834 "subsystem": "nbd", 00:23:44.834 "config": [] 00:23:44.834 } 00:23:44.834 ] 00:23:44.834 }' 00:23:44.834 10:32:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:44.834 10:32:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:44.834 [2024-07-14 10:32:29.594952] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:23:44.834 [2024-07-14 10:32:29.594997] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2455223 ] 00:23:44.834 EAL: No free 2048 kB hugepages reported on node 1 00:23:44.834 [2024-07-14 10:32:29.663235] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:44.834 [2024-07-14 10:32:29.702337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:45.093 [2024-07-14 10:32:29.840129] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:45.093 [2024-07-14 10:32:29.840220] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:45.660 10:32:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:45.660 10:32:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:45.660 10:32:30 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:45.660 Running I/O for 10 seconds... 00:23:55.634 00:23:55.634 Latency(us) 00:23:55.634 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:55.634 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:55.634 Verification LBA range: start 0x0 length 0x2000 00:23:55.634 TLSTESTn1 : 10.02 5456.52 21.31 0.00 0.00 23421.39 6354.14 24846.69 00:23:55.634 =================================================================================================================== 00:23:55.634 Total : 5456.52 21.31 0.00 0.00 23421.39 6354.14 24846.69 00:23:55.634 0 00:23:55.634 10:32:40 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:55.634 10:32:40 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 2455223 00:23:55.634 10:32:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2455223 ']' 00:23:55.634 10:32:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2455223 00:23:55.634 10:32:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:55.634 10:32:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:55.634 10:32:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2455223 00:23:55.634 10:32:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:55.634 10:32:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:55.634 10:32:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2455223' 00:23:55.634 killing process with pid 2455223 00:23:55.634 10:32:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2455223 00:23:55.634 Received shutdown signal, test time was about 10.000000 seconds 00:23:55.634 00:23:55.634 Latency(us) 00:23:55.634 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:55.634 =================================================================================================================== 00:23:55.634 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:55.634 [2024-07-14 10:32:40.607740] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 
'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:55.634 10:32:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2455223 00:23:55.893 10:32:40 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 2454981 00:23:55.893 10:32:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2454981 ']' 00:23:55.893 10:32:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2454981 00:23:55.893 10:32:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:55.893 10:32:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:55.893 10:32:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2454981 00:23:55.893 10:32:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:55.893 10:32:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:55.893 10:32:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2454981' 00:23:55.893 killing process with pid 2454981 00:23:55.893 10:32:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2454981 00:23:55.893 [2024-07-14 10:32:40.822595] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:55.893 10:32:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2454981 00:23:56.153 10:32:41 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:23:56.153 10:32:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:56.153 10:32:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:56.153 10:32:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:56.153 10:32:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2457065 00:23:56.153 10:32:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2457065 00:23:56.153 10:32:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:56.153 10:32:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2457065 ']' 00:23:56.153 10:32:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:56.153 10:32:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:56.153 10:32:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:56.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:56.153 10:32:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:56.154 10:32:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:56.154 [2024-07-14 10:32:41.061284] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
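Both shutdowns above go through the killprocess helper, whose xtrace shows the same small pattern each time: check that the pid is still alive, look up its command name (an SPDK app shows up as reactor_N), print which pid is being killed, send the signal, then wait for it to exit. A condensed sketch of that pattern (simplified; the real helper also special-cases sudo-wrapped processes and non-Linux hosts):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 0        # already gone, nothing to do
        local name
        name=$(ps --no-headers -o comm= "$pid")       # e.g. reactor_1 for an SPDK app
        echo "killing process with pid $pid ($name)"
        kill "$pid"
        wait "$pid"                                    # reap it and propagate the exit status
    }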
00:23:56.154 [2024-07-14 10:32:41.061334] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:56.154 EAL: No free 2048 kB hugepages reported on node 1 00:23:56.154 [2024-07-14 10:32:41.131891] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:56.413 [2024-07-14 10:32:41.168036] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:56.413 [2024-07-14 10:32:41.168078] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:56.413 [2024-07-14 10:32:41.168085] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:56.413 [2024-07-14 10:32:41.168092] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:56.413 [2024-07-14 10:32:41.168100] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:56.413 [2024-07-14 10:32:41.168118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:56.981 10:32:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:56.981 10:32:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:56.981 10:32:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:56.981 10:32:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:56.981 10:32:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:56.981 10:32:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:56.981 10:32:41 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.v8w5xsNwG6 00:23:56.981 10:32:41 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.v8w5xsNwG6 00:23:56.981 10:32:41 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:57.240 [2024-07-14 10:32:42.066680] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:57.240 10:32:42 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:57.499 10:32:42 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:57.499 [2024-07-14 10:32:42.423586] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:57.499 [2024-07-14 10:32:42.423773] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:57.499 10:32:42 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:57.759 malloc0 00:23:57.759 10:32:42 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:58.018 10:32:42 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.v8w5xsNwG6 00:23:58.018 [2024-07-14 10:32:42.961120] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:58.018 10:32:42 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:58.018 10:32:42 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=2457333 00:23:58.018 10:32:42 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:58.018 10:32:42 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 2457333 /var/tmp/bdevperf.sock 00:23:58.018 10:32:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2457333 ']' 00:23:58.018 10:32:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:58.018 10:32:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:58.018 10:32:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:58.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:58.018 10:32:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:58.018 10:32:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:58.278 [2024-07-14 10:32:43.022667] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:23:58.278 [2024-07-14 10:32:43.022715] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2457333 ] 00:23:58.278 EAL: No free 2048 kB hugepages reported on node 1 00:23:58.278 [2024-07-14 10:32:43.088627] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:58.278 [2024-07-14 10:32:43.129051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:58.278 10:32:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:58.278 10:32:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:58.278 10:32:43 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.v8w5xsNwG6 00:23:58.535 10:32:43 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:58.794 [2024-07-14 10:32:43.566386] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:58.794 nvme0n1 00:23:58.794 10:32:43 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:58.794 Running I/O for 1 seconds... 
00:24:00.171 00:24:00.171 Latency(us) 00:24:00.171 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:00.171 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:00.171 Verification LBA range: start 0x0 length 0x2000 00:24:00.171 nvme0n1 : 1.02 5264.64 20.56 0.00 0.00 24119.61 4986.43 32597.04 00:24:00.171 =================================================================================================================== 00:24:00.171 Total : 5264.64 20.56 0.00 0.00 24119.61 4986.43 32597.04 00:24:00.172 0 00:24:00.172 10:32:44 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 2457333 00:24:00.172 10:32:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2457333 ']' 00:24:00.172 10:32:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2457333 00:24:00.172 10:32:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:00.172 10:32:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:00.172 10:32:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2457333 00:24:00.172 10:32:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:00.172 10:32:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:00.172 10:32:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2457333' 00:24:00.172 killing process with pid 2457333 00:24:00.172 10:32:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2457333 00:24:00.172 Received shutdown signal, test time was about 1.000000 seconds 00:24:00.172 00:24:00.172 Latency(us) 00:24:00.172 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:00.172 =================================================================================================================== 00:24:00.172 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:00.172 10:32:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2457333 00:24:00.172 10:32:45 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 2457065 00:24:00.172 10:32:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2457065 ']' 00:24:00.172 10:32:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2457065 00:24:00.172 10:32:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:00.172 10:32:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:00.172 10:32:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2457065 00:24:00.172 10:32:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:00.172 10:32:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:00.172 10:32:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2457065' 00:24:00.172 killing process with pid 2457065 00:24:00.172 10:32:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2457065 00:24:00.172 [2024-07-14 10:32:45.048315] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:00.172 10:32:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2457065 00:24:00.431 10:32:45 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:24:00.431 10:32:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:00.431 
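The setup_nvmf_tgt sequence logged above (target/tls.sh@51 through @58) is the complete target-side TLS configuration that these runs exercise: create the TCP transport, create the subsystem, open a TLS-capable listener with -k, back the subsystem with a malloc bdev, and bind the host NQN to its pre-shared key. Collected in one place, using the same addresses, NQNs and key path as the test:

    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    # -k marks the listener as TLS-capable ("TLS support is considered experimental")
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # file-path PSK form; the log flags it as deprecated in favour of the keyring
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.v8w5xsNwG6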
10:32:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:00.431 10:32:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:00.431 10:32:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2457791 00:24:00.431 10:32:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2457791 00:24:00.431 10:32:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:00.431 10:32:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2457791 ']' 00:24:00.431 10:32:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:00.431 10:32:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:00.431 10:32:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:00.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:00.431 10:32:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:00.431 10:32:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:00.431 [2024-07-14 10:32:45.289092] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:24:00.431 [2024-07-14 10:32:45.289140] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:00.431 EAL: No free 2048 kB hugepages reported on node 1 00:24:00.431 [2024-07-14 10:32:45.361860] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:00.431 [2024-07-14 10:32:45.398245] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:00.431 [2024-07-14 10:32:45.398284] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:00.431 [2024-07-14 10:32:45.398290] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:00.431 [2024-07-14 10:32:45.398296] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:00.431 [2024-07-14 10:32:45.398301] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
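Because the target is launched with -e 0xFFFF, every tracepoint group is enabled and the startup notices above spell out how to inspect them: take a live snapshot with spdk_trace, or keep the shared-memory buffer for offline decoding. In command form (names exactly as printed by the notices):

    # live snapshot of the running nvmf target's trace buffer (instance id 0)
    spdk_trace -s nvmf -i 0

    # or preserve the raw buffer for later analysis
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0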
00:24:00.431 [2024-07-14 10:32:45.398323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:01.369 10:32:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:01.369 10:32:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:01.369 10:32:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:01.369 10:32:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:01.369 10:32:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:01.369 10:32:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:01.369 10:32:46 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:24:01.369 10:32:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.369 10:32:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:01.369 [2024-07-14 10:32:46.140385] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:01.369 malloc0 00:24:01.369 [2024-07-14 10:32:46.168651] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:01.369 [2024-07-14 10:32:46.168836] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:01.369 10:32:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.369 10:32:46 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=2458015 00:24:01.369 10:32:46 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 2458015 /var/tmp/bdevperf.sock 00:24:01.369 10:32:46 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:01.369 10:32:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2458015 ']' 00:24:01.369 10:32:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:01.369 10:32:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:01.369 10:32:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:01.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:01.369 10:32:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:01.369 10:32:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:01.369 [2024-07-14 10:32:46.243268] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
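The bdevperf client started here (pid 2458015) uses the keyring flow rather than the deprecated file-path PSK: the key file is first registered under a name on the bdevperf RPC socket, and the controller attach then refers to the key by that name. The two RPCs issued next in the log, side by side:

    rpc=scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    # register the PSK file under the name "key0"
    $rpc -s $sock keyring_file_add_key key0 /tmp/tmp.v8w5xsNwG6
    # attach the NVMe/TCP controller over TLS, referencing the key by name
    $rpc -s $sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1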
00:24:01.369 [2024-07-14 10:32:46.243313] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2458015 ] 00:24:01.369 EAL: No free 2048 kB hugepages reported on node 1 00:24:01.369 [2024-07-14 10:32:46.311792] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:01.628 [2024-07-14 10:32:46.352631] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:01.628 10:32:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:01.628 10:32:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:01.628 10:32:46 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.v8w5xsNwG6 00:24:01.628 10:32:46 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:01.887 [2024-07-14 10:32:46.768962] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:01.887 nvme0n1 00:24:01.887 10:32:46 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:02.147 Running I/O for 1 seconds... 00:24:03.085 00:24:03.085 Latency(us) 00:24:03.085 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:03.085 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:03.085 Verification LBA range: start 0x0 length 0x2000 00:24:03.085 nvme0n1 : 1.02 5273.70 20.60 0.00 0.00 24019.12 7294.44 23251.03 00:24:03.085 =================================================================================================================== 00:24:03.085 Total : 5273.70 20.60 0.00 0.00 24019.12 7294.44 23251.03 00:24:03.085 0 00:24:03.085 10:32:47 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:24:03.085 10:32:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.085 10:32:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:03.344 10:32:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.344 10:32:48 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:24:03.344 "subsystems": [ 00:24:03.344 { 00:24:03.344 "subsystem": "keyring", 00:24:03.344 "config": [ 00:24:03.344 { 00:24:03.344 "method": "keyring_file_add_key", 00:24:03.344 "params": { 00:24:03.344 "name": "key0", 00:24:03.344 "path": "/tmp/tmp.v8w5xsNwG6" 00:24:03.344 } 00:24:03.344 } 00:24:03.344 ] 00:24:03.344 }, 00:24:03.344 { 00:24:03.344 "subsystem": "iobuf", 00:24:03.344 "config": [ 00:24:03.344 { 00:24:03.344 "method": "iobuf_set_options", 00:24:03.344 "params": { 00:24:03.344 "small_pool_count": 8192, 00:24:03.344 "large_pool_count": 1024, 00:24:03.344 "small_bufsize": 8192, 00:24:03.344 "large_bufsize": 135168 00:24:03.344 } 00:24:03.344 } 00:24:03.344 ] 00:24:03.344 }, 00:24:03.344 { 00:24:03.344 "subsystem": "sock", 00:24:03.344 "config": [ 00:24:03.344 { 00:24:03.344 "method": "sock_set_default_impl", 00:24:03.344 "params": { 00:24:03.344 "impl_name": "posix" 00:24:03.344 } 
00:24:03.344 }, 00:24:03.344 { 00:24:03.344 "method": "sock_impl_set_options", 00:24:03.344 "params": { 00:24:03.344 "impl_name": "ssl", 00:24:03.344 "recv_buf_size": 4096, 00:24:03.344 "send_buf_size": 4096, 00:24:03.344 "enable_recv_pipe": true, 00:24:03.344 "enable_quickack": false, 00:24:03.344 "enable_placement_id": 0, 00:24:03.344 "enable_zerocopy_send_server": true, 00:24:03.344 "enable_zerocopy_send_client": false, 00:24:03.344 "zerocopy_threshold": 0, 00:24:03.344 "tls_version": 0, 00:24:03.344 "enable_ktls": false 00:24:03.344 } 00:24:03.344 }, 00:24:03.344 { 00:24:03.344 "method": "sock_impl_set_options", 00:24:03.344 "params": { 00:24:03.344 "impl_name": "posix", 00:24:03.344 "recv_buf_size": 2097152, 00:24:03.344 "send_buf_size": 2097152, 00:24:03.344 "enable_recv_pipe": true, 00:24:03.344 "enable_quickack": false, 00:24:03.344 "enable_placement_id": 0, 00:24:03.344 "enable_zerocopy_send_server": true, 00:24:03.344 "enable_zerocopy_send_client": false, 00:24:03.344 "zerocopy_threshold": 0, 00:24:03.344 "tls_version": 0, 00:24:03.344 "enable_ktls": false 00:24:03.344 } 00:24:03.344 } 00:24:03.344 ] 00:24:03.344 }, 00:24:03.344 { 00:24:03.344 "subsystem": "vmd", 00:24:03.344 "config": [] 00:24:03.344 }, 00:24:03.344 { 00:24:03.344 "subsystem": "accel", 00:24:03.344 "config": [ 00:24:03.344 { 00:24:03.344 "method": "accel_set_options", 00:24:03.344 "params": { 00:24:03.344 "small_cache_size": 128, 00:24:03.344 "large_cache_size": 16, 00:24:03.344 "task_count": 2048, 00:24:03.344 "sequence_count": 2048, 00:24:03.344 "buf_count": 2048 00:24:03.344 } 00:24:03.344 } 00:24:03.344 ] 00:24:03.344 }, 00:24:03.344 { 00:24:03.344 "subsystem": "bdev", 00:24:03.344 "config": [ 00:24:03.344 { 00:24:03.344 "method": "bdev_set_options", 00:24:03.344 "params": { 00:24:03.344 "bdev_io_pool_size": 65535, 00:24:03.344 "bdev_io_cache_size": 256, 00:24:03.344 "bdev_auto_examine": true, 00:24:03.344 "iobuf_small_cache_size": 128, 00:24:03.344 "iobuf_large_cache_size": 16 00:24:03.344 } 00:24:03.345 }, 00:24:03.345 { 00:24:03.345 "method": "bdev_raid_set_options", 00:24:03.345 "params": { 00:24:03.345 "process_window_size_kb": 1024 00:24:03.345 } 00:24:03.345 }, 00:24:03.345 { 00:24:03.345 "method": "bdev_iscsi_set_options", 00:24:03.345 "params": { 00:24:03.345 "timeout_sec": 30 00:24:03.345 } 00:24:03.345 }, 00:24:03.345 { 00:24:03.345 "method": "bdev_nvme_set_options", 00:24:03.345 "params": { 00:24:03.345 "action_on_timeout": "none", 00:24:03.345 "timeout_us": 0, 00:24:03.345 "timeout_admin_us": 0, 00:24:03.345 "keep_alive_timeout_ms": 10000, 00:24:03.345 "arbitration_burst": 0, 00:24:03.345 "low_priority_weight": 0, 00:24:03.345 "medium_priority_weight": 0, 00:24:03.345 "high_priority_weight": 0, 00:24:03.345 "nvme_adminq_poll_period_us": 10000, 00:24:03.345 "nvme_ioq_poll_period_us": 0, 00:24:03.345 "io_queue_requests": 0, 00:24:03.345 "delay_cmd_submit": true, 00:24:03.345 "transport_retry_count": 4, 00:24:03.345 "bdev_retry_count": 3, 00:24:03.345 "transport_ack_timeout": 0, 00:24:03.345 "ctrlr_loss_timeout_sec": 0, 00:24:03.345 "reconnect_delay_sec": 0, 00:24:03.345 "fast_io_fail_timeout_sec": 0, 00:24:03.345 "disable_auto_failback": false, 00:24:03.345 "generate_uuids": false, 00:24:03.345 "transport_tos": 0, 00:24:03.345 "nvme_error_stat": false, 00:24:03.345 "rdma_srq_size": 0, 00:24:03.345 "io_path_stat": false, 00:24:03.345 "allow_accel_sequence": false, 00:24:03.345 "rdma_max_cq_size": 0, 00:24:03.345 "rdma_cm_event_timeout_ms": 0, 00:24:03.345 "dhchap_digests": [ 00:24:03.345 "sha256", 
00:24:03.345 "sha384", 00:24:03.345 "sha512" 00:24:03.345 ], 00:24:03.345 "dhchap_dhgroups": [ 00:24:03.345 "null", 00:24:03.345 "ffdhe2048", 00:24:03.345 "ffdhe3072", 00:24:03.345 "ffdhe4096", 00:24:03.345 "ffdhe6144", 00:24:03.345 "ffdhe8192" 00:24:03.345 ] 00:24:03.345 } 00:24:03.345 }, 00:24:03.345 { 00:24:03.345 "method": "bdev_nvme_set_hotplug", 00:24:03.345 "params": { 00:24:03.345 "period_us": 100000, 00:24:03.345 "enable": false 00:24:03.345 } 00:24:03.345 }, 00:24:03.345 { 00:24:03.345 "method": "bdev_malloc_create", 00:24:03.345 "params": { 00:24:03.345 "name": "malloc0", 00:24:03.345 "num_blocks": 8192, 00:24:03.345 "block_size": 4096, 00:24:03.345 "physical_block_size": 4096, 00:24:03.345 "uuid": "88ec4bba-6de6-48cd-9e8f-258b67d445a5", 00:24:03.345 "optimal_io_boundary": 0 00:24:03.345 } 00:24:03.345 }, 00:24:03.345 { 00:24:03.345 "method": "bdev_wait_for_examine" 00:24:03.345 } 00:24:03.345 ] 00:24:03.345 }, 00:24:03.345 { 00:24:03.345 "subsystem": "nbd", 00:24:03.345 "config": [] 00:24:03.345 }, 00:24:03.345 { 00:24:03.345 "subsystem": "scheduler", 00:24:03.345 "config": [ 00:24:03.345 { 00:24:03.345 "method": "framework_set_scheduler", 00:24:03.345 "params": { 00:24:03.345 "name": "static" 00:24:03.345 } 00:24:03.345 } 00:24:03.345 ] 00:24:03.345 }, 00:24:03.345 { 00:24:03.345 "subsystem": "nvmf", 00:24:03.345 "config": [ 00:24:03.345 { 00:24:03.345 "method": "nvmf_set_config", 00:24:03.345 "params": { 00:24:03.345 "discovery_filter": "match_any", 00:24:03.345 "admin_cmd_passthru": { 00:24:03.345 "identify_ctrlr": false 00:24:03.345 } 00:24:03.345 } 00:24:03.345 }, 00:24:03.345 { 00:24:03.345 "method": "nvmf_set_max_subsystems", 00:24:03.345 "params": { 00:24:03.345 "max_subsystems": 1024 00:24:03.345 } 00:24:03.345 }, 00:24:03.345 { 00:24:03.345 "method": "nvmf_set_crdt", 00:24:03.345 "params": { 00:24:03.345 "crdt1": 0, 00:24:03.345 "crdt2": 0, 00:24:03.345 "crdt3": 0 00:24:03.345 } 00:24:03.345 }, 00:24:03.345 { 00:24:03.345 "method": "nvmf_create_transport", 00:24:03.345 "params": { 00:24:03.345 "trtype": "TCP", 00:24:03.345 "max_queue_depth": 128, 00:24:03.345 "max_io_qpairs_per_ctrlr": 127, 00:24:03.345 "in_capsule_data_size": 4096, 00:24:03.345 "max_io_size": 131072, 00:24:03.345 "io_unit_size": 131072, 00:24:03.345 "max_aq_depth": 128, 00:24:03.345 "num_shared_buffers": 511, 00:24:03.345 "buf_cache_size": 4294967295, 00:24:03.345 "dif_insert_or_strip": false, 00:24:03.345 "zcopy": false, 00:24:03.345 "c2h_success": false, 00:24:03.345 "sock_priority": 0, 00:24:03.345 "abort_timeout_sec": 1, 00:24:03.345 "ack_timeout": 0, 00:24:03.345 "data_wr_pool_size": 0 00:24:03.345 } 00:24:03.345 }, 00:24:03.345 { 00:24:03.345 "method": "nvmf_create_subsystem", 00:24:03.345 "params": { 00:24:03.345 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:03.345 "allow_any_host": false, 00:24:03.345 "serial_number": "00000000000000000000", 00:24:03.345 "model_number": "SPDK bdev Controller", 00:24:03.345 "max_namespaces": 32, 00:24:03.345 "min_cntlid": 1, 00:24:03.345 "max_cntlid": 65519, 00:24:03.345 "ana_reporting": false 00:24:03.345 } 00:24:03.345 }, 00:24:03.345 { 00:24:03.345 "method": "nvmf_subsystem_add_host", 00:24:03.345 "params": { 00:24:03.345 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:03.345 "host": "nqn.2016-06.io.spdk:host1", 00:24:03.345 "psk": "key0" 00:24:03.345 } 00:24:03.345 }, 00:24:03.345 { 00:24:03.345 "method": "nvmf_subsystem_add_ns", 00:24:03.345 "params": { 00:24:03.345 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:03.345 "namespace": { 00:24:03.345 "nsid": 1, 
00:24:03.345 "bdev_name": "malloc0", 00:24:03.345 "nguid": "88EC4BBA6DE648CD9E8F258B67D445A5", 00:24:03.345 "uuid": "88ec4bba-6de6-48cd-9e8f-258b67d445a5", 00:24:03.345 "no_auto_visible": false 00:24:03.345 } 00:24:03.345 } 00:24:03.345 }, 00:24:03.345 { 00:24:03.345 "method": "nvmf_subsystem_add_listener", 00:24:03.345 "params": { 00:24:03.345 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:03.345 "listen_address": { 00:24:03.345 "trtype": "TCP", 00:24:03.345 "adrfam": "IPv4", 00:24:03.345 "traddr": "10.0.0.2", 00:24:03.345 "trsvcid": "4420" 00:24:03.345 }, 00:24:03.345 "secure_channel": true 00:24:03.345 } 00:24:03.345 } 00:24:03.345 ] 00:24:03.345 } 00:24:03.345 ] 00:24:03.345 }' 00:24:03.345 10:32:48 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:03.605 10:32:48 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:24:03.605 "subsystems": [ 00:24:03.605 { 00:24:03.605 "subsystem": "keyring", 00:24:03.605 "config": [ 00:24:03.605 { 00:24:03.605 "method": "keyring_file_add_key", 00:24:03.605 "params": { 00:24:03.605 "name": "key0", 00:24:03.605 "path": "/tmp/tmp.v8w5xsNwG6" 00:24:03.605 } 00:24:03.605 } 00:24:03.605 ] 00:24:03.605 }, 00:24:03.605 { 00:24:03.605 "subsystem": "iobuf", 00:24:03.605 "config": [ 00:24:03.605 { 00:24:03.605 "method": "iobuf_set_options", 00:24:03.605 "params": { 00:24:03.605 "small_pool_count": 8192, 00:24:03.605 "large_pool_count": 1024, 00:24:03.605 "small_bufsize": 8192, 00:24:03.605 "large_bufsize": 135168 00:24:03.605 } 00:24:03.605 } 00:24:03.605 ] 00:24:03.605 }, 00:24:03.605 { 00:24:03.605 "subsystem": "sock", 00:24:03.605 "config": [ 00:24:03.605 { 00:24:03.605 "method": "sock_set_default_impl", 00:24:03.605 "params": { 00:24:03.605 "impl_name": "posix" 00:24:03.605 } 00:24:03.605 }, 00:24:03.605 { 00:24:03.605 "method": "sock_impl_set_options", 00:24:03.605 "params": { 00:24:03.605 "impl_name": "ssl", 00:24:03.605 "recv_buf_size": 4096, 00:24:03.605 "send_buf_size": 4096, 00:24:03.605 "enable_recv_pipe": true, 00:24:03.605 "enable_quickack": false, 00:24:03.605 "enable_placement_id": 0, 00:24:03.605 "enable_zerocopy_send_server": true, 00:24:03.605 "enable_zerocopy_send_client": false, 00:24:03.605 "zerocopy_threshold": 0, 00:24:03.605 "tls_version": 0, 00:24:03.605 "enable_ktls": false 00:24:03.605 } 00:24:03.605 }, 00:24:03.605 { 00:24:03.605 "method": "sock_impl_set_options", 00:24:03.605 "params": { 00:24:03.605 "impl_name": "posix", 00:24:03.605 "recv_buf_size": 2097152, 00:24:03.605 "send_buf_size": 2097152, 00:24:03.605 "enable_recv_pipe": true, 00:24:03.605 "enable_quickack": false, 00:24:03.605 "enable_placement_id": 0, 00:24:03.605 "enable_zerocopy_send_server": true, 00:24:03.605 "enable_zerocopy_send_client": false, 00:24:03.605 "zerocopy_threshold": 0, 00:24:03.605 "tls_version": 0, 00:24:03.605 "enable_ktls": false 00:24:03.605 } 00:24:03.605 } 00:24:03.605 ] 00:24:03.605 }, 00:24:03.605 { 00:24:03.605 "subsystem": "vmd", 00:24:03.605 "config": [] 00:24:03.605 }, 00:24:03.605 { 00:24:03.605 "subsystem": "accel", 00:24:03.605 "config": [ 00:24:03.605 { 00:24:03.605 "method": "accel_set_options", 00:24:03.605 "params": { 00:24:03.605 "small_cache_size": 128, 00:24:03.605 "large_cache_size": 16, 00:24:03.605 "task_count": 2048, 00:24:03.605 "sequence_count": 2048, 00:24:03.605 "buf_count": 2048 00:24:03.605 } 00:24:03.605 } 00:24:03.606 ] 00:24:03.606 }, 00:24:03.606 { 00:24:03.606 "subsystem": "bdev", 00:24:03.606 "config": [ 
00:24:03.606 { 00:24:03.606 "method": "bdev_set_options", 00:24:03.606 "params": { 00:24:03.606 "bdev_io_pool_size": 65535, 00:24:03.606 "bdev_io_cache_size": 256, 00:24:03.606 "bdev_auto_examine": true, 00:24:03.606 "iobuf_small_cache_size": 128, 00:24:03.606 "iobuf_large_cache_size": 16 00:24:03.606 } 00:24:03.606 }, 00:24:03.606 { 00:24:03.606 "method": "bdev_raid_set_options", 00:24:03.606 "params": { 00:24:03.606 "process_window_size_kb": 1024 00:24:03.606 } 00:24:03.606 }, 00:24:03.606 { 00:24:03.606 "method": "bdev_iscsi_set_options", 00:24:03.606 "params": { 00:24:03.606 "timeout_sec": 30 00:24:03.606 } 00:24:03.606 }, 00:24:03.606 { 00:24:03.606 "method": "bdev_nvme_set_options", 00:24:03.606 "params": { 00:24:03.606 "action_on_timeout": "none", 00:24:03.606 "timeout_us": 0, 00:24:03.606 "timeout_admin_us": 0, 00:24:03.606 "keep_alive_timeout_ms": 10000, 00:24:03.606 "arbitration_burst": 0, 00:24:03.606 "low_priority_weight": 0, 00:24:03.606 "medium_priority_weight": 0, 00:24:03.606 "high_priority_weight": 0, 00:24:03.606 "nvme_adminq_poll_period_us": 10000, 00:24:03.606 "nvme_ioq_poll_period_us": 0, 00:24:03.606 "io_queue_requests": 512, 00:24:03.606 "delay_cmd_submit": true, 00:24:03.606 "transport_retry_count": 4, 00:24:03.606 "bdev_retry_count": 3, 00:24:03.606 "transport_ack_timeout": 0, 00:24:03.606 "ctrlr_loss_timeout_sec": 0, 00:24:03.606 "reconnect_delay_sec": 0, 00:24:03.606 "fast_io_fail_timeout_sec": 0, 00:24:03.606 "disable_auto_failback": false, 00:24:03.606 "generate_uuids": false, 00:24:03.606 "transport_tos": 0, 00:24:03.606 "nvme_error_stat": false, 00:24:03.606 "rdma_srq_size": 0, 00:24:03.606 "io_path_stat": false, 00:24:03.606 "allow_accel_sequence": false, 00:24:03.606 "rdma_max_cq_size": 0, 00:24:03.606 "rdma_cm_event_timeout_ms": 0, 00:24:03.606 "dhchap_digests": [ 00:24:03.606 "sha256", 00:24:03.606 "sha384", 00:24:03.606 "sha512" 00:24:03.606 ], 00:24:03.606 "dhchap_dhgroups": [ 00:24:03.606 "null", 00:24:03.606 "ffdhe2048", 00:24:03.606 "ffdhe3072", 00:24:03.606 "ffdhe4096", 00:24:03.606 "ffdhe6144", 00:24:03.606 "ffdhe8192" 00:24:03.606 ] 00:24:03.606 } 00:24:03.606 }, 00:24:03.606 { 00:24:03.606 "method": "bdev_nvme_attach_controller", 00:24:03.606 "params": { 00:24:03.606 "name": "nvme0", 00:24:03.606 "trtype": "TCP", 00:24:03.606 "adrfam": "IPv4", 00:24:03.606 "traddr": "10.0.0.2", 00:24:03.606 "trsvcid": "4420", 00:24:03.606 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:03.606 "prchk_reftag": false, 00:24:03.606 "prchk_guard": false, 00:24:03.606 "ctrlr_loss_timeout_sec": 0, 00:24:03.606 "reconnect_delay_sec": 0, 00:24:03.606 "fast_io_fail_timeout_sec": 0, 00:24:03.606 "psk": "key0", 00:24:03.606 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:03.606 "hdgst": false, 00:24:03.606 "ddgst": false 00:24:03.606 } 00:24:03.606 }, 00:24:03.606 { 00:24:03.606 "method": "bdev_nvme_set_hotplug", 00:24:03.606 "params": { 00:24:03.606 "period_us": 100000, 00:24:03.606 "enable": false 00:24:03.606 } 00:24:03.606 }, 00:24:03.606 { 00:24:03.606 "method": "bdev_enable_histogram", 00:24:03.606 "params": { 00:24:03.606 "name": "nvme0n1", 00:24:03.606 "enable": true 00:24:03.606 } 00:24:03.606 }, 00:24:03.606 { 00:24:03.606 "method": "bdev_wait_for_examine" 00:24:03.606 } 00:24:03.606 ] 00:24:03.606 }, 00:24:03.606 { 00:24:03.606 "subsystem": "nbd", 00:24:03.606 "config": [] 00:24:03.606 } 00:24:03.606 ] 00:24:03.606 }' 00:24:03.606 10:32:48 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 2458015 00:24:03.606 10:32:48 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@948 -- # '[' -z 2458015 ']' 00:24:03.606 10:32:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2458015 00:24:03.606 10:32:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:03.606 10:32:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:03.606 10:32:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2458015 00:24:03.606 10:32:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:03.606 10:32:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:03.606 10:32:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2458015' 00:24:03.606 killing process with pid 2458015 00:24:03.606 10:32:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2458015 00:24:03.606 Received shutdown signal, test time was about 1.000000 seconds 00:24:03.606 00:24:03.606 Latency(us) 00:24:03.606 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:03.606 =================================================================================================================== 00:24:03.606 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:03.606 10:32:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2458015 00:24:03.606 10:32:48 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 2457791 00:24:03.606 10:32:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2457791 ']' 00:24:03.606 10:32:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2457791 00:24:03.606 10:32:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:03.606 10:32:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:03.606 10:32:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2457791 00:24:03.867 10:32:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:03.867 10:32:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:03.867 10:32:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2457791' 00:24:03.867 killing process with pid 2457791 00:24:03.867 10:32:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2457791 00:24:03.867 10:32:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2457791 00:24:03.867 10:32:48 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:24:03.867 10:32:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:03.867 10:32:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:03.867 10:32:48 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:24:03.867 "subsystems": [ 00:24:03.867 { 00:24:03.867 "subsystem": "keyring", 00:24:03.867 "config": [ 00:24:03.867 { 00:24:03.867 "method": "keyring_file_add_key", 00:24:03.867 "params": { 00:24:03.867 "name": "key0", 00:24:03.867 "path": "/tmp/tmp.v8w5xsNwG6" 00:24:03.867 } 00:24:03.867 } 00:24:03.867 ] 00:24:03.867 }, 00:24:03.867 { 00:24:03.867 "subsystem": "iobuf", 00:24:03.867 "config": [ 00:24:03.867 { 00:24:03.867 "method": "iobuf_set_options", 00:24:03.867 "params": { 00:24:03.867 "small_pool_count": 8192, 00:24:03.867 "large_pool_count": 1024, 00:24:03.867 "small_bufsize": 8192, 00:24:03.867 "large_bufsize": 135168 00:24:03.867 } 00:24:03.867 } 00:24:03.867 ] 00:24:03.867 }, 
00:24:03.867 { 00:24:03.867 "subsystem": "sock", 00:24:03.867 "config": [ 00:24:03.867 { 00:24:03.867 "method": "sock_set_default_impl", 00:24:03.867 "params": { 00:24:03.867 "impl_name": "posix" 00:24:03.867 } 00:24:03.867 }, 00:24:03.867 { 00:24:03.867 "method": "sock_impl_set_options", 00:24:03.867 "params": { 00:24:03.867 "impl_name": "ssl", 00:24:03.867 "recv_buf_size": 4096, 00:24:03.867 "send_buf_size": 4096, 00:24:03.867 "enable_recv_pipe": true, 00:24:03.867 "enable_quickack": false, 00:24:03.867 "enable_placement_id": 0, 00:24:03.867 "enable_zerocopy_send_server": true, 00:24:03.867 "enable_zerocopy_send_client": false, 00:24:03.867 "zerocopy_threshold": 0, 00:24:03.867 "tls_version": 0, 00:24:03.867 "enable_ktls": false 00:24:03.867 } 00:24:03.867 }, 00:24:03.867 { 00:24:03.867 "method": "sock_impl_set_options", 00:24:03.867 "params": { 00:24:03.867 "impl_name": "posix", 00:24:03.868 "recv_buf_size": 2097152, 00:24:03.868 "send_buf_size": 2097152, 00:24:03.868 "enable_recv_pipe": true, 00:24:03.868 "enable_quickack": false, 00:24:03.868 "enable_placement_id": 0, 00:24:03.868 "enable_zerocopy_send_server": true, 00:24:03.868 "enable_zerocopy_send_client": false, 00:24:03.868 "zerocopy_threshold": 0, 00:24:03.868 "tls_version": 0, 00:24:03.868 "enable_ktls": false 00:24:03.868 } 00:24:03.868 } 00:24:03.868 ] 00:24:03.868 }, 00:24:03.868 { 00:24:03.868 "subsystem": "vmd", 00:24:03.868 "config": [] 00:24:03.868 }, 00:24:03.868 { 00:24:03.868 "subsystem": "accel", 00:24:03.868 "config": [ 00:24:03.868 { 00:24:03.868 "method": "accel_set_options", 00:24:03.868 "params": { 00:24:03.868 "small_cache_size": 128, 00:24:03.868 "large_cache_size": 16, 00:24:03.868 "task_count": 2048, 00:24:03.868 "sequence_count": 2048, 00:24:03.868 "buf_count": 2048 00:24:03.868 } 00:24:03.868 } 00:24:03.868 ] 00:24:03.868 }, 00:24:03.868 { 00:24:03.868 "subsystem": "bdev", 00:24:03.868 "config": [ 00:24:03.868 { 00:24:03.868 "method": "bdev_set_options", 00:24:03.868 "params": { 00:24:03.868 "bdev_io_pool_size": 65535, 00:24:03.868 "bdev_io_cache_size": 256, 00:24:03.868 "bdev_auto_examine": true, 00:24:03.868 "iobuf_small_cache_size": 128, 00:24:03.868 "iobuf_large_cache_size": 16 00:24:03.868 } 00:24:03.868 }, 00:24:03.868 { 00:24:03.868 "method": "bdev_raid_set_options", 00:24:03.868 "params": { 00:24:03.868 "process_window_size_kb": 1024 00:24:03.868 } 00:24:03.868 }, 00:24:03.868 { 00:24:03.868 "method": "bdev_iscsi_set_options", 00:24:03.868 "params": { 00:24:03.868 "timeout_sec": 30 00:24:03.868 } 00:24:03.868 }, 00:24:03.868 { 00:24:03.868 "method": "bdev_nvme_set_options", 00:24:03.868 "params": { 00:24:03.868 "action_on_timeout": "none", 00:24:03.868 "timeout_us": 0, 00:24:03.868 "timeout_admin_us": 0, 00:24:03.868 "keep_alive_timeout_ms": 10000, 00:24:03.868 "arbitration_burst": 0, 00:24:03.868 "low_priority_weight": 0, 00:24:03.868 "medium_priority_weight": 0, 00:24:03.868 "high_priority_weight": 0, 00:24:03.868 "nvme_adminq_poll_period_us": 10000, 00:24:03.868 "nvme_ioq_poll_period_us": 0, 00:24:03.868 "io_queue_requests": 0, 00:24:03.868 "delay_cmd_submit": true, 00:24:03.868 "transport_retry_count": 4, 00:24:03.868 "bdev_retry_count": 3, 00:24:03.868 "transport_ack_timeout": 0, 00:24:03.868 "ctrlr_loss_timeout_sec": 0, 00:24:03.868 "reconnect_delay_sec": 0, 00:24:03.868 "fast_io_fail_timeout_sec": 0, 00:24:03.868 "disable_auto_failback": false, 00:24:03.868 "generate_uuids": false, 00:24:03.868 "transport_tos": 0, 00:24:03.868 "nvme_error_stat": false, 00:24:03.868 "rdma_srq_size": 0, 
00:24:03.868 "io_path_stat": false, 00:24:03.868 "allow_accel_sequence": false, 00:24:03.868 "rdma_max_cq_size": 0, 00:24:03.868 "rdma_cm_event_timeout_ms": 0, 00:24:03.868 "dhchap_digests": [ 00:24:03.868 "sha256", 00:24:03.868 "sha384", 00:24:03.868 "sha512" 00:24:03.868 ], 00:24:03.868 "dhchap_dhgroups": [ 00:24:03.868 "null", 00:24:03.868 "ffdhe2048", 00:24:03.868 "ffdhe3072", 00:24:03.868 "ffdhe4096", 00:24:03.868 "ffdhe6144", 00:24:03.868 "ffdhe8192" 00:24:03.868 ] 00:24:03.868 } 00:24:03.868 }, 00:24:03.868 { 00:24:03.868 "method": "bdev_nvme_set_hotplug", 00:24:03.868 "params": { 00:24:03.868 "period_us": 100000, 00:24:03.868 "enable": false 00:24:03.868 } 00:24:03.868 }, 00:24:03.868 { 00:24:03.868 "method": "bdev_malloc_create", 00:24:03.868 "params": { 00:24:03.868 "name": "malloc0", 00:24:03.868 "num_blocks": 8192, 00:24:03.868 "block_size": 4096, 00:24:03.868 "physical_block_size": 4096, 00:24:03.868 "uuid": "88ec4bba-6de6-48cd-9e8f-258b67d445a5", 00:24:03.868 "optimal_io_boundary": 0 00:24:03.868 } 00:24:03.868 }, 00:24:03.868 { 00:24:03.868 "method": "bdev_wait_for_examine" 00:24:03.868 } 00:24:03.868 ] 00:24:03.868 }, 00:24:03.868 { 00:24:03.868 "subsystem": "nbd", 00:24:03.868 "config": [] 00:24:03.868 }, 00:24:03.868 { 00:24:03.868 "subsystem": "scheduler", 00:24:03.868 "config": [ 00:24:03.868 { 00:24:03.868 "method": "framework_set_scheduler", 00:24:03.868 "params": { 00:24:03.868 "name": "static" 00:24:03.868 } 00:24:03.868 } 00:24:03.868 ] 00:24:03.868 }, 00:24:03.868 { 00:24:03.868 "subsystem": "nvmf", 00:24:03.868 "config": [ 00:24:03.868 { 00:24:03.868 "method": "nvmf_set_config", 00:24:03.868 "params": { 00:24:03.868 "discovery_filter": "match_any", 00:24:03.868 "admin_cmd_passthru": { 00:24:03.868 "identify_ctrlr": false 00:24:03.868 } 00:24:03.868 } 00:24:03.868 }, 00:24:03.868 { 00:24:03.868 "method": "nvmf_set_max_subsystems", 00:24:03.868 "params": { 00:24:03.868 "max_subsystems": 1024 00:24:03.868 } 00:24:03.868 }, 00:24:03.868 { 00:24:03.868 "method": "nvmf_set_crdt", 00:24:03.868 "params": { 00:24:03.868 "crdt1": 0, 00:24:03.868 "crdt2": 0, 00:24:03.868 "crdt3": 0 00:24:03.868 } 00:24:03.868 }, 00:24:03.868 { 00:24:03.868 "method": "nvmf_create_transport", 00:24:03.868 "params": { 00:24:03.868 "trtype": "TCP", 00:24:03.868 "max_queue_depth": 128, 00:24:03.868 "max_io_qpairs_per_ctrlr": 127, 00:24:03.868 "in_capsule_data_size": 4096, 00:24:03.868 "max_io_size": 131072, 00:24:03.868 "io_unit_size": 131072, 00:24:03.868 "max_aq_depth": 128, 00:24:03.868 "num_shared_buffers": 511, 00:24:03.868 "buf_cache_size": 4294967295, 00:24:03.868 "dif_insert_or_strip": false, 00:24:03.868 "zcopy": false, 00:24:03.868 "c2h_success": false, 00:24:03.868 "sock_priority": 0, 00:24:03.868 "abort_timeout_sec": 1, 00:24:03.868 "ack_timeout": 0, 00:24:03.868 "data_wr_pool_size": 0 00:24:03.868 } 00:24:03.868 }, 00:24:03.868 { 00:24:03.868 "method": "nvmf_create_subsystem", 00:24:03.868 "params": { 00:24:03.868 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:03.868 "allow_any_host": false, 00:24:03.868 "serial_number": "00000000000000000000", 00:24:03.869 "model_number": "SPDK bdev Controller", 00:24:03.869 "max_namespaces": 32, 00:24:03.869 "min_cntlid": 1, 00:24:03.869 "max_cntlid": 65519, 00:24:03.869 "ana_reporting": false 00:24:03.869 } 00:24:03.869 }, 00:24:03.869 { 00:24:03.869 "method": "nvmf_subsystem_add_host", 00:24:03.869 "params": { 00:24:03.869 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:03.869 "host": "nqn.2016-06.io.spdk:host1", 00:24:03.869 "psk": "key0" 00:24:03.869 } 
00:24:03.869 }, 00:24:03.869 { 00:24:03.869 "method": "nvmf_subsystem_add_ns", 00:24:03.869 "params": { 00:24:03.869 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:03.869 "namespace": { 00:24:03.869 "nsid": 1, 00:24:03.869 "bdev_name": "malloc0", 00:24:03.869 "nguid": "88EC4BBA6DE648CD9E8F258B67D445A5", 00:24:03.869 "uuid": "88ec4bba-6de6-48cd-9e8f-258b67d445a5", 00:24:03.869 "no_auto_visible": false 00:24:03.869 } 00:24:03.869 } 00:24:03.869 }, 00:24:03.869 { 00:24:03.869 "method": "nvmf_subsystem_add_listener", 00:24:03.869 "params": { 00:24:03.869 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:03.869 "listen_address": { 00:24:03.869 "trtype": "TCP", 00:24:03.869 "adrfam": "IPv4", 00:24:03.869 "traddr": "10.0.0.2", 00:24:03.869 "trsvcid": "4420" 00:24:03.869 }, 00:24:03.869 "secure_channel": true 00:24:03.869 } 00:24:03.869 } 00:24:03.869 ] 00:24:03.869 } 00:24:03.869 ] 00:24:03.869 }' 00:24:03.869 10:32:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:03.869 10:32:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2458332 00:24:03.869 10:32:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2458332 00:24:03.869 10:32:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:24:03.869 10:32:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2458332 ']' 00:24:03.869 10:32:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:03.869 10:32:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:03.869 10:32:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:03.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:03.869 10:32:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:03.869 10:32:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:03.869 [2024-07-14 10:32:48.829901] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:24:03.869 [2024-07-14 10:32:48.829946] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:04.128 EAL: No free 2048 kB hugepages reported on node 1 00:24:04.128 [2024-07-14 10:32:48.901413] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:04.128 [2024-07-14 10:32:48.941416] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:04.128 [2024-07-14 10:32:48.941456] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:04.128 [2024-07-14 10:32:48.941463] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:04.128 [2024-07-14 10:32:48.941472] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:04.128 [2024-07-14 10:32:48.941477] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
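Unlike the earlier instances, this target (pid 2458332) is not configured over RPC at all: the configuration captured with save_config is echoed straight back into nvmf_tgt via -c /dev/fd/62, so the subsystem, the TLS listener and the host-to-PSK binding from the keyring are all restored before any RPC is issued. The same round trip by hand (the file name is chosen here for illustration):

    # capture the running target's configuration, keyring entry included
    scripts/rpc.py save_config > tgt.json

    # later: bring up a fresh target directly from the saved file
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -c tgt.json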
00:24:04.128 [2024-07-14 10:32:48.941529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:04.387 [2024-07-14 10:32:49.148185] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:04.387 [2024-07-14 10:32:49.180212] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:04.387 [2024-07-14 10:32:49.193557] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:04.955 10:32:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:04.955 10:32:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:04.955 10:32:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:04.955 10:32:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:04.955 10:32:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:04.955 10:32:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:04.955 10:32:49 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=2458540 00:24:04.955 10:32:49 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 2458540 /var/tmp/bdevperf.sock 00:24:04.955 10:32:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2458540 ']' 00:24:04.955 10:32:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:04.955 10:32:49 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:24:04.955 10:32:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:04.955 10:32:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:04.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
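waitforlisten blocks until the bdevperf RPC socket is actually answering before the test sends it any configuration (the max_retries=100 above is its retry budget). Its implementation is not shown in this log; a minimal stand-in with the same effect could poll the socket like this (hypothetical helper, not the autotest code):

    wait_for_rpc_sock() {
        local sock=$1 retries=${2:-100}
        for ((i = 0; i < retries; i++)); do
            # rpc_get_methods only succeeds once the app's RPC server is listening
            if scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; then
                return 0
            fi
            sleep 0.1
        done
        return 1
    }

    wait_for_rpc_sock /var/tmp/bdevperf.sock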
00:24:04.955 10:32:49 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:24:04.955 "subsystems": [ 00:24:04.955 { 00:24:04.955 "subsystem": "keyring", 00:24:04.955 "config": [ 00:24:04.955 { 00:24:04.955 "method": "keyring_file_add_key", 00:24:04.955 "params": { 00:24:04.955 "name": "key0", 00:24:04.955 "path": "/tmp/tmp.v8w5xsNwG6" 00:24:04.955 } 00:24:04.955 } 00:24:04.955 ] 00:24:04.955 }, 00:24:04.955 { 00:24:04.955 "subsystem": "iobuf", 00:24:04.955 "config": [ 00:24:04.955 { 00:24:04.955 "method": "iobuf_set_options", 00:24:04.955 "params": { 00:24:04.955 "small_pool_count": 8192, 00:24:04.955 "large_pool_count": 1024, 00:24:04.955 "small_bufsize": 8192, 00:24:04.955 "large_bufsize": 135168 00:24:04.955 } 00:24:04.955 } 00:24:04.955 ] 00:24:04.955 }, 00:24:04.955 { 00:24:04.955 "subsystem": "sock", 00:24:04.955 "config": [ 00:24:04.955 { 00:24:04.955 "method": "sock_set_default_impl", 00:24:04.955 "params": { 00:24:04.955 "impl_name": "posix" 00:24:04.955 } 00:24:04.955 }, 00:24:04.955 { 00:24:04.955 "method": "sock_impl_set_options", 00:24:04.955 "params": { 00:24:04.955 "impl_name": "ssl", 00:24:04.955 "recv_buf_size": 4096, 00:24:04.955 "send_buf_size": 4096, 00:24:04.955 "enable_recv_pipe": true, 00:24:04.955 "enable_quickack": false, 00:24:04.955 "enable_placement_id": 0, 00:24:04.955 "enable_zerocopy_send_server": true, 00:24:04.956 "enable_zerocopy_send_client": false, 00:24:04.956 "zerocopy_threshold": 0, 00:24:04.956 "tls_version": 0, 00:24:04.956 "enable_ktls": false 00:24:04.956 } 00:24:04.956 }, 00:24:04.956 { 00:24:04.956 "method": "sock_impl_set_options", 00:24:04.956 "params": { 00:24:04.956 "impl_name": "posix", 00:24:04.956 "recv_buf_size": 2097152, 00:24:04.956 "send_buf_size": 2097152, 00:24:04.956 "enable_recv_pipe": true, 00:24:04.956 "enable_quickack": false, 00:24:04.956 "enable_placement_id": 0, 00:24:04.956 "enable_zerocopy_send_server": true, 00:24:04.956 "enable_zerocopy_send_client": false, 00:24:04.956 "zerocopy_threshold": 0, 00:24:04.956 "tls_version": 0, 00:24:04.956 "enable_ktls": false 00:24:04.956 } 00:24:04.956 } 00:24:04.956 ] 00:24:04.956 }, 00:24:04.956 { 00:24:04.956 "subsystem": "vmd", 00:24:04.956 "config": [] 00:24:04.956 }, 00:24:04.956 { 00:24:04.956 "subsystem": "accel", 00:24:04.956 "config": [ 00:24:04.956 { 00:24:04.956 "method": "accel_set_options", 00:24:04.956 "params": { 00:24:04.956 "small_cache_size": 128, 00:24:04.956 "large_cache_size": 16, 00:24:04.956 "task_count": 2048, 00:24:04.956 "sequence_count": 2048, 00:24:04.956 "buf_count": 2048 00:24:04.956 } 00:24:04.956 } 00:24:04.956 ] 00:24:04.956 }, 00:24:04.956 { 00:24:04.956 "subsystem": "bdev", 00:24:04.956 "config": [ 00:24:04.956 { 00:24:04.956 "method": "bdev_set_options", 00:24:04.956 "params": { 00:24:04.956 "bdev_io_pool_size": 65535, 00:24:04.956 "bdev_io_cache_size": 256, 00:24:04.956 "bdev_auto_examine": true, 00:24:04.956 "iobuf_small_cache_size": 128, 00:24:04.956 "iobuf_large_cache_size": 16 00:24:04.956 } 00:24:04.956 }, 00:24:04.956 { 00:24:04.956 "method": "bdev_raid_set_options", 00:24:04.956 "params": { 00:24:04.956 "process_window_size_kb": 1024 00:24:04.956 } 00:24:04.956 }, 00:24:04.956 { 00:24:04.956 "method": "bdev_iscsi_set_options", 00:24:04.956 "params": { 00:24:04.956 "timeout_sec": 30 00:24:04.956 } 00:24:04.956 }, 00:24:04.956 { 00:24:04.956 "method": "bdev_nvme_set_options", 00:24:04.956 "params": { 00:24:04.956 "action_on_timeout": "none", 00:24:04.956 "timeout_us": 0, 00:24:04.956 "timeout_admin_us": 0, 00:24:04.956 "keep_alive_timeout_ms": 
10000, 00:24:04.956 "arbitration_burst": 0, 00:24:04.956 "low_priority_weight": 0, 00:24:04.956 "medium_priority_weight": 0, 00:24:04.956 "high_priority_weight": 0, 00:24:04.956 "nvme_adminq_poll_period_us": 10000, 00:24:04.956 "nvme_ioq_poll_period_us": 0, 00:24:04.956 "io_queue_requests": 512, 00:24:04.956 "delay_cmd_submit": true, 00:24:04.956 "transport_retry_count": 4, 00:24:04.956 "bdev_retry_count": 3, 00:24:04.956 "transport_ack_timeout": 0, 00:24:04.956 "ctrlr_loss_timeout_sec": 0, 00:24:04.956 "reconnect_delay_sec": 0, 00:24:04.956 "fast_io_fail_timeout_sec": 0, 00:24:04.956 "disable_auto_failback": false, 00:24:04.956 "generate_uuids": false, 00:24:04.956 "transport_tos": 0, 00:24:04.956 "nvme_error_stat": false, 00:24:04.956 "rdma_srq_size": 0, 00:24:04.956 "io_path_stat": false, 00:24:04.956 "allow_accel_sequence": false, 00:24:04.956 "rdma_max_cq_size": 0, 00:24:04.956 "rdma_cm_event_timeout_ms": 0, 00:24:04.956 "dhchap_digests": [ 00:24:04.956 "sha256", 00:24:04.956 "sha384", 00:24:04.956 "sha512" 00:24:04.956 ], 00:24:04.956 "dhchap_dhgroups": [ 00:24:04.956 "null", 00:24:04.956 "ffdhe2048", 00:24:04.956 "ffdhe3072", 00:24:04.956 "ffdhe4096", 00:24:04.956 "ffdhe6144", 00:24:04.956 "ffdhe8192" 00:24:04.956 ] 00:24:04.956 } 00:24:04.956 }, 00:24:04.956 { 00:24:04.956 "method": "bdev_nvme_attach_controller", 00:24:04.956 "params": { 00:24:04.956 "name": "nvme0", 00:24:04.956 "trtype": "TCP", 00:24:04.956 "adrfam": "IPv4", 00:24:04.956 "traddr": "10.0.0.2", 00:24:04.956 "trsvcid": "4420", 00:24:04.956 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:04.956 "prchk_reftag": false, 00:24:04.956 "prchk_guard": false, 00:24:04.956 "ctrlr_loss_timeout_sec": 0, 00:24:04.956 "reconnect_delay_sec": 0, 00:24:04.956 "fast_io_fail_timeout_sec": 0, 00:24:04.956 "psk": "key0", 00:24:04.956 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:04.956 "hdgst": false, 00:24:04.956 "ddgst": false 00:24:04.956 } 00:24:04.956 }, 00:24:04.956 { 00:24:04.956 "method": "bdev_nvme_set_hotplug", 00:24:04.956 "params": { 00:24:04.956 "period_us": 100000, 00:24:04.956 "enable": false 00:24:04.956 } 00:24:04.956 }, 00:24:04.956 { 00:24:04.956 "method": "bdev_enable_histogram", 00:24:04.956 "params": { 00:24:04.956 "name": "nvme0n1", 00:24:04.956 "enable": true 00:24:04.956 } 00:24:04.956 }, 00:24:04.956 { 00:24:04.956 "method": "bdev_wait_for_examine" 00:24:04.956 } 00:24:04.956 ] 00:24:04.956 }, 00:24:04.956 { 00:24:04.956 "subsystem": "nbd", 00:24:04.956 "config": [] 00:24:04.956 } 00:24:04.956 ] 00:24:04.956 }' 00:24:04.956 10:32:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:04.956 10:32:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:04.956 [2024-07-14 10:32:49.713124] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
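The JSON blob echoed above is the full bdevperf configuration for this TLS run. The PSK wiring sits in two places: the keyring entry that loads the key file (/tmp/tmp.v8w5xsNwG6 in this run) under the name "key0", and the "psk": "key0" reference inside bdev_nvme_attach_controller. Below is a reduced sketch of just that wiring, with a placeholder key path; it is not a complete, tested bdevperf config, and the method and parameter names are taken from the dump above rather than verified against any other SPDK release.

{
  "subsystems": [
    {
      "subsystem": "keyring",
      "config": [
        {
          "method": "keyring_file_add_key",
          "params": { "name": "key0", "path": "/path/to/psk.txt" }
        }
      ]
    },
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "nvme0",
            "trtype": "TCP",
            "adrfam": "IPv4",
            "traddr": "10.0.0.2",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "psk": "key0"
          }
        }
      ]
    }
  ]
}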
00:24:04.956 [2024-07-14 10:32:49.713169] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2458540 ] 00:24:04.956 EAL: No free 2048 kB hugepages reported on node 1 00:24:04.956 [2024-07-14 10:32:49.781078] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:04.956 [2024-07-14 10:32:49.820520] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:05.215 [2024-07-14 10:32:49.967096] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:05.781 10:32:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:05.781 10:32:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:05.781 10:32:50 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:05.781 10:32:50 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:24:05.781 10:32:50 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:05.781 10:32:50 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:06.040 Running I/O for 1 seconds... 00:24:06.976 00:24:06.976 Latency(us) 00:24:06.976 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:06.976 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:06.976 Verification LBA range: start 0x0 length 0x2000 00:24:06.976 nvme0n1 : 1.01 5434.71 21.23 0.00 0.00 23380.78 5926.73 23592.96 00:24:06.976 =================================================================================================================== 00:24:06.976 Total : 5434.71 21.23 0.00 0.00 23380.78 5926.73 23592.96 00:24:06.976 0 00:24:06.976 10:32:51 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:24:06.976 10:32:51 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:24:06.976 10:32:51 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:24:06.976 10:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:24:06.976 10:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:24:06.976 10:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:24:06.976 10:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:06.976 10:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:24:06.976 10:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:24:06.976 10:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:24:06.976 10:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:06.976 nvmf_trace.0 00:24:06.976 10:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:24:06.976 10:32:51 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 2458540 00:24:06.976 10:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2458540 ']' 00:24:06.976 10:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- 
# kill -0 2458540 00:24:06.976 10:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:06.976 10:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:06.976 10:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2458540 00:24:07.235 10:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:07.235 10:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:07.235 10:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2458540' 00:24:07.235 killing process with pid 2458540 00:24:07.235 10:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2458540 00:24:07.235 Received shutdown signal, test time was about 1.000000 seconds 00:24:07.235 00:24:07.235 Latency(us) 00:24:07.235 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:07.235 =================================================================================================================== 00:24:07.235 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:07.235 10:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2458540 00:24:07.235 10:32:52 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:24:07.235 10:32:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:07.235 10:32:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:24:07.235 10:32:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:07.235 10:32:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:24:07.235 10:32:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:07.235 10:32:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:07.235 rmmod nvme_tcp 00:24:07.235 rmmod nvme_fabrics 00:24:07.235 rmmod nvme_keyring 00:24:07.235 10:32:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:07.235 10:32:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:24:07.235 10:32:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:24:07.235 10:32:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 2458332 ']' 00:24:07.235 10:32:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 2458332 00:24:07.235 10:32:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2458332 ']' 00:24:07.235 10:32:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2458332 00:24:07.235 10:32:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:07.235 10:32:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:07.235 10:32:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2458332 00:24:07.494 10:32:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:07.494 10:32:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:07.494 10:32:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2458332' 00:24:07.494 killing process with pid 2458332 00:24:07.494 10:32:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2458332 00:24:07.494 10:32:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2458332 00:24:07.494 10:32:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:07.494 10:32:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:07.494 10:32:52 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:07.494 10:32:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:07.494 10:32:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:07.494 10:32:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:07.494 10:32:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:07.494 10:32:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:10.065 10:32:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:10.065 10:32:54 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.TP7ggB6sPb /tmp/tmp.Msqv4IhFSI /tmp/tmp.v8w5xsNwG6 00:24:10.065 00:24:10.065 real 1m16.401s 00:24:10.065 user 1m54.776s 00:24:10.065 sys 0m28.647s 00:24:10.065 10:32:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:10.065 10:32:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:10.065 ************************************ 00:24:10.065 END TEST nvmf_tls 00:24:10.065 ************************************ 00:24:10.065 10:32:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:10.065 10:32:54 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:10.065 10:32:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:10.065 10:32:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:10.065 10:32:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:10.065 ************************************ 00:24:10.065 START TEST nvmf_fips 00:24:10.065 ************************************ 00:24:10.065 10:32:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:10.065 * Looking for test storage... 
00:24:10.065 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:24:10.065 10:32:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:10.065 10:32:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:24:10.065 10:32:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:10.065 10:32:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:10.065 10:32:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:10.065 10:32:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:10.065 10:32:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:10.065 10:32:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:10.065 10:32:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:10.065 10:32:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:10.065 10:32:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:10.065 10:32:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:10.065 10:32:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:10.065 10:32:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:10.065 10:32:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:10.065 10:32:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.066 10:32:54 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:24:10.066 Error setting digest 00:24:10.066 00A23D1DE27F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:24:10.066 00A23D1DE27F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:10.066 10:32:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:10.067 10:32:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:10.067 10:32:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:10.067 10:32:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:10.067 10:32:54 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:10.067 10:32:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:10.067 10:32:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:10.067 10:32:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:10.067 10:32:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:24:10.067 10:32:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:16.632 
10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:16.632 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:16.632 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:16.632 Found net devices under 0000:86:00.0: cvl_0_0 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:16.632 Found net devices under 0000:86:00.1: cvl_0_1 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:16.632 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:16.632 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:24:16.632 00:24:16.632 --- 10.0.0.2 ping statistics --- 00:24:16.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:16.632 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:16.632 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:16.632 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:24:16.632 00:24:16.632 --- 10.0.0.1 ping statistics --- 00:24:16.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:16.632 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=2462597 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 2462597 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 2462597 ']' 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:16.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:16.632 10:33:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:16.633 [2024-07-14 10:33:00.690163] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:24:16.633 [2024-07-14 10:33:00.690211] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:16.633 EAL: No free 2048 kB hugepages reported on node 1 00:24:16.633 [2024-07-14 10:33:00.760020] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:16.633 [2024-07-14 10:33:00.800837] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:16.633 [2024-07-14 10:33:00.800874] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:16.633 [2024-07-14 10:33:00.800881] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:16.633 [2024-07-14 10:33:00.800887] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:16.633 [2024-07-14 10:33:00.800892] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:16.633 [2024-07-14 10:33:00.800914] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:16.633 10:33:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:16.633 10:33:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:24:16.633 10:33:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:16.633 10:33:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:16.633 10:33:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:16.633 10:33:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:16.633 10:33:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:24:16.633 10:33:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:16.633 10:33:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:16.633 10:33:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:16.633 10:33:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:16.633 10:33:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:16.633 10:33:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:16.633 10:33:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:16.892 [2024-07-14 10:33:01.682851] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:16.892 [2024-07-14 10:33:01.698846] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:16.892 [2024-07-14 10:33:01.699003] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:16.892 [2024-07-14 10:33:01.727138] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:16.892 malloc0 00:24:16.892 10:33:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:16.892 10:33:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=2462760 00:24:16.892 10:33:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 2462760 /var/tmp/bdevperf.sock 00:24:16.892 10:33:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:16.892 10:33:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 2462760 ']' 00:24:16.892 10:33:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:16.892 10:33:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- 
# local max_retries=100 00:24:16.892 10:33:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:16.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:16.892 10:33:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:16.892 10:33:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:16.892 [2024-07-14 10:33:01.817991] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:24:16.892 [2024-07-14 10:33:01.818040] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2462760 ] 00:24:16.892 EAL: No free 2048 kB hugepages reported on node 1 00:24:17.150 [2024-07-14 10:33:01.887149] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:17.150 [2024-07-14 10:33:01.927382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:17.727 10:33:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:17.727 10:33:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:24:17.727 10:33:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:17.985 [2024-07-14 10:33:02.780508] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:17.985 [2024-07-14 10:33:02.780588] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:17.985 TLSTESTn1 00:24:17.985 10:33:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:17.985 Running I/O for 10 seconds... 
00:24:30.193 00:24:30.193 Latency(us) 00:24:30.193 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:30.193 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:30.193 Verification LBA range: start 0x0 length 0x2000 00:24:30.193 TLSTESTn1 : 10.03 4248.64 16.60 0.00 0.00 30067.93 5185.89 35104.50 00:24:30.193 =================================================================================================================== 00:24:30.193 Total : 4248.64 16.60 0.00 0.00 30067.93 5185.89 35104.50 00:24:30.193 0 00:24:30.193 10:33:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:24:30.193 10:33:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:24:30.193 10:33:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:24:30.193 10:33:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:24:30.193 10:33:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:24:30.193 10:33:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:30.193 10:33:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:24:30.193 10:33:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:24:30.193 10:33:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:24:30.193 10:33:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:30.193 nvmf_trace.0 00:24:30.193 10:33:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:24:30.193 10:33:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2462760 00:24:30.193 10:33:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 2462760 ']' 00:24:30.193 10:33:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 2462760 00:24:30.193 10:33:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:24:30.193 10:33:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:30.193 10:33:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2462760 00:24:30.193 10:33:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:30.193 10:33:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:30.193 10:33:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2462760' 00:24:30.193 killing process with pid 2462760 00:24:30.193 10:33:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 2462760 00:24:30.193 Received shutdown signal, test time was about 10.000000 seconds 00:24:30.193 00:24:30.193 Latency(us) 00:24:30.193 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:30.193 =================================================================================================================== 00:24:30.193 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:30.193 [2024-07-14 10:33:13.155773] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:30.193 10:33:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 2462760 00:24:30.193 10:33:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:24:30.193 10:33:13 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:24:30.193 10:33:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:24:30.193 10:33:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:30.193 10:33:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:24:30.193 10:33:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:30.193 10:33:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:30.193 rmmod nvme_tcp 00:24:30.193 rmmod nvme_fabrics 00:24:30.193 rmmod nvme_keyring 00:24:30.193 10:33:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:30.193 10:33:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:24:30.193 10:33:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:24:30.193 10:33:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 2462597 ']' 00:24:30.193 10:33:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 2462597 00:24:30.193 10:33:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 2462597 ']' 00:24:30.193 10:33:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 2462597 00:24:30.193 10:33:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:24:30.193 10:33:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:30.194 10:33:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2462597 00:24:30.194 10:33:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:30.194 10:33:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:30.194 10:33:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2462597' 00:24:30.194 killing process with pid 2462597 00:24:30.194 10:33:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 2462597 00:24:30.194 [2024-07-14 10:33:13.438555] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:30.194 10:33:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 2462597 00:24:30.194 10:33:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:30.194 10:33:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:30.194 10:33:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:30.194 10:33:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:30.194 10:33:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:30.194 10:33:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:30.194 10:33:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:30.194 10:33:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:30.762 10:33:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:30.762 10:33:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:30.762 00:24:30.762 real 0m21.120s 00:24:30.762 user 0m21.997s 00:24:30.762 sys 0m9.996s 00:24:30.762 10:33:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:30.762 10:33:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:30.762 ************************************ 00:24:30.762 END TEST nvmf_fips 
00:24:30.762 ************************************ 00:24:30.762 10:33:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:30.762 10:33:15 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:24:30.762 10:33:15 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:30.762 10:33:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:30.762 10:33:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:30.762 10:33:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:31.023 ************************************ 00:24:31.023 START TEST nvmf_fuzz 00:24:31.023 ************************************ 00:24:31.023 10:33:15 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:31.023 * Looking for test storage... 00:24:31.023 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:31.023 10:33:15 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:31.023 10:33:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:24:31.023 10:33:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:31.023 10:33:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:31.023 10:33:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:31.023 10:33:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:31.023 10:33:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:31.023 10:33:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:31.023 10:33:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:31.023 10:33:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:31.023 10:33:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:31.023 10:33:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:31.023 10:33:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:31.023 10:33:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:31.023 10:33:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:31.023 10:33:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:31.023 10:33:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:31.023 10:33:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:31.023 10:33:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:31.023 10:33:15 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:31.023 10:33:15 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:31.023 10:33:15 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:31.023 10:33:15 nvmf_tcp.nvmf_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.023 10:33:15 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.023 10:33:15 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.023 10:33:15 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:24:31.023 10:33:15 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.023 10:33:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:24:31.023 10:33:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:31.023 10:33:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:31.023 10:33:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:31.023 10:33:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:31.023 10:33:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:31.023 10:33:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:31.023 10:33:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:31.023 10:33:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:31.023 10:33:15 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:24:31.023 10:33:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:31.023 10:33:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:31.023 10:33:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:31.023 10:33:15 
nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:31.023 10:33:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:31.023 10:33:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:31.023 10:33:15 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:31.023 10:33:15 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:31.023 10:33:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:31.023 10:33:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:31.023 10:33:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:24:31.023 10:33:15 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:37.593 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:37.593 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:24:37.593 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:37.593 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:37.593 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:37.593 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:37.593 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:37.593 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:24:37.593 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:37.593 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:24:37.593 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@327 
-- # [[ e810 == mlx5 ]] 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:37.594 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:37.594 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:37.594 Found net devices under 0000:86:00.0: cvl_0_0 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 
-- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:37.594 Found net devices under 0000:86:00.1: cvl_0_1 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:37.594 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:37.594 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.263 ms 00:24:37.594 00:24:37.594 --- 10.0.0.2 ping statistics --- 00:24:37.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:37.594 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:37.594 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:37.594 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:24:37.594 00:24:37.594 --- 10.0.0.1 ping statistics --- 00:24:37.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:37.594 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=2468453 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 2468453 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@829 -- # '[' -z 2468453 ']' 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:37.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:37.594 10:33:21 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:37.594 10:33:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:37.594 10:33:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@862 -- # return 0 00:24:37.594 10:33:22 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:37.594 10:33:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.594 10:33:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:37.594 10:33:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.594 10:33:22 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:24:37.594 10:33:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.594 10:33:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:37.594 Malloc0 00:24:37.594 10:33:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.594 10:33:22 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:37.594 10:33:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.594 10:33:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:37.594 10:33:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.594 10:33:22 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:37.594 10:33:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.594 10:33:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:37.594 10:33:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.594 10:33:22 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:37.595 10:33:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.595 10:33:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:37.595 10:33:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.595 10:33:22 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:24:37.595 10:33:22 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:25:09.749 Fuzzing completed. 
Shutting down the fuzz application 00:25:09.749 00:25:09.749 Dumping successful admin opcodes: 00:25:09.749 8, 9, 10, 24, 00:25:09.749 Dumping successful io opcodes: 00:25:09.749 0, 9, 00:25:09.749 NS: 0x200003aeff00 I/O qp, Total commands completed: 896898, total successful commands: 5225, random_seed: 4103573184 00:25:09.749 NS: 0x200003aeff00 admin qp, Total commands completed: 91354, total successful commands: 736, random_seed: 367378112 00:25:09.749 10:33:52 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:25:09.749 Fuzzing completed. Shutting down the fuzz application 00:25:09.749 00:25:09.749 Dumping successful admin opcodes: 00:25:09.749 24, 00:25:09.749 Dumping successful io opcodes: 00:25:09.749 00:25:09.749 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 1860253592 00:25:09.749 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 1860331674 00:25:09.749 10:33:54 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:09.749 10:33:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.749 10:33:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:09.749 10:33:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.749 10:33:54 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:25:09.749 10:33:54 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:25:09.749 10:33:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:09.749 10:33:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:25:09.749 10:33:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:09.749 10:33:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:25:09.749 10:33:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:09.749 10:33:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:09.749 rmmod nvme_tcp 00:25:09.749 rmmod nvme_fabrics 00:25:09.749 rmmod nvme_keyring 00:25:09.749 10:33:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:09.749 10:33:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:25:09.749 10:33:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:25:09.749 10:33:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 2468453 ']' 00:25:09.749 10:33:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 2468453 00:25:09.749 10:33:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@948 -- # '[' -z 2468453 ']' 00:25:09.749 10:33:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # kill -0 2468453 00:25:09.749 10:33:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # uname 00:25:09.749 10:33:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:09.749 10:33:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2468453 00:25:09.749 10:33:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:09.749 10:33:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 
00:25:09.749 10:33:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2468453' 00:25:09.749 killing process with pid 2468453 00:25:09.749 10:33:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@967 -- # kill 2468453 00:25:09.749 10:33:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@972 -- # wait 2468453 00:25:09.749 10:33:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:09.749 10:33:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:09.749 10:33:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:09.749 10:33:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:09.749 10:33:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:09.750 10:33:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:09.750 10:33:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:09.750 10:33:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:11.657 10:33:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:11.657 10:33:56 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:25:11.657 00:25:11.657 real 0m40.752s 00:25:11.657 user 0m52.990s 00:25:11.657 sys 0m17.376s 00:25:11.657 10:33:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:11.657 10:33:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:11.657 ************************************ 00:25:11.657 END TEST nvmf_fuzz 00:25:11.657 ************************************ 00:25:11.657 10:33:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:11.657 10:33:56 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:11.657 10:33:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:11.657 10:33:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:11.657 10:33:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:11.657 ************************************ 00:25:11.657 START TEST nvmf_multiconnection 00:25:11.657 ************************************ 00:25:11.657 10:33:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:11.917 * Looking for test storage... 
00:25:11.917 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:11.917 10:33:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:11.917 10:33:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:25:11.917 10:33:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:11.917 10:33:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:11.917 10:33:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:11.917 10:33:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:11.917 10:33:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:11.917 10:33:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:11.917 10:33:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:11.917 10:33:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:11.917 10:33:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:11.917 10:33:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:11.917 10:33:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:11.917 10:33:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:11.917 10:33:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:11.917 10:33:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:11.917 10:33:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:11.917 10:33:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:11.917 10:33:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:11.917 10:33:56 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:11.917 10:33:56 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:11.917 10:33:56 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:11.917 10:33:56 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.917 10:33:56 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.917 10:33:56 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.918 10:33:56 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:25:11.918 10:33:56 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.918 10:33:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:25:11.918 10:33:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:11.918 10:33:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:11.918 10:33:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:11.918 10:33:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:11.918 10:33:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:11.918 10:33:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:11.918 10:33:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:11.918 10:33:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:11.918 10:33:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:11.918 10:33:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:11.918 10:33:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:25:11.918 10:33:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:25:11.918 10:33:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:11.918 10:33:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:11.918 10:33:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:11.918 10:33:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:25:11.918 10:33:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:11.918 10:33:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:11.918 10:33:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:11.918 10:33:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:11.918 10:33:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:11.918 10:33:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:11.918 10:33:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:25:11.918 10:33:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:18.490 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:18.490 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=() 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:18.491 10:34:02 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:18.491 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:18.491 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:18.491 Found net devices under 0000:86:00.0: cvl_0_0 00:25:18.491 10:34:02 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:18.491 Found net devices under 0000:86:00.1: cvl_0_1 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:18.491 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:18.491 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms 00:25:18.491 00:25:18.491 --- 10.0.0.2 ping statistics --- 00:25:18.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:18.491 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:18.491 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:18.491 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.245 ms 00:25:18.491 00:25:18.491 --- 10.0.0.1 ping statistics --- 00:25:18.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:18.491 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=2477213 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 2477213 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@829 -- # '[' -z 2477213 ']' 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:18.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:18.491 10:34:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:18.491 [2024-07-14 10:34:02.541401] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:25:18.491 [2024-07-14 10:34:02.541444] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:18.491 EAL: No free 2048 kB hugepages reported on node 1 00:25:18.491 [2024-07-14 10:34:02.596967] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:18.491 [2024-07-14 10:34:02.640139] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:18.492 [2024-07-14 10:34:02.640173] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:18.492 [2024-07-14 10:34:02.640181] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:18.492 [2024-07-14 10:34:02.640189] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:18.492 [2024-07-14 10:34:02.640195] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:18.492 [2024-07-14 10:34:02.640257] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:18.492 [2024-07-14 10:34:02.644242] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:18.492 [2024-07-14 10:34:02.644274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:18.492 [2024-07-14 10:34:02.644274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@862 -- # return 0 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:18.492 [2024-07-14 10:34:02.797410] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.492 
10:34:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:18.492 Malloc1 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:18.492 [2024-07-14 10:34:02.853356] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:18.492 Malloc2 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.492 10:34:02 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:18.492 Malloc3 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:18.492 Malloc4 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.492 10:34:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:18.492 Malloc5 00:25:18.492 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.492 10:34:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:25:18.492 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.492 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:18.492 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.492 10:34:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:25:18.492 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.492 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:18.492 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.492 10:34:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:25:18.492 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.492 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:18.492 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.492 10:34:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:18.492 10:34:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:25:18.492 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.492 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:18.492 Malloc6 00:25:18.492 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.492 10:34:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:25:18.492 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.492 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:18.492 10:34:03 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:18.493 Malloc7 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:18.493 Malloc8 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:18.493 Malloc9 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 
00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:18.493 Malloc10 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:18.493 Malloc11 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 
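The eleven-iteration loop traced above repeats the same four-step RPC sequence for every subsystem: create a malloc bdev, create the subsystem, attach the bdev as a namespace, and add a TCP listener on 10.0.0.2:4420. A minimal standalone sketch of that sequence, assuming a running SPDK nvmf target whose TCP transport has already been created and the stock scripts/rpc.py client (the test's rpc_cmd wrapper ultimately issues the same RPCs; the path below follows the workspace layout seen in this log), would be:

#!/usr/bin/env bash
set -e
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
NVMF_SUBSYS=11
for i in $(seq 1 "$NVMF_SUBSYS"); do
    # 64 MB malloc bdev with a 512-byte block size, named MallocN
    "$RPC" bdev_malloc_create 64 512 -b "Malloc${i}"
    # Subsystem open to any host (-a) with serial number SPDKN
    "$RPC" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode${i}" -a -s "SPDK${i}"
    # Expose the bdev as a namespace of the subsystem
    "$RPC" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode${i}" "Malloc${i}"
    # Listen for TCP hosts on the target address used throughout this run
    "$RPC" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode${i}" -t tcp -a 10.0.0.2 -s 4420
done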
00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:18.493 10:34:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:19.428 10:34:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:25:19.429 10:34:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:19.429 10:34:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:19.429 10:34:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:19.429 10:34:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:21.964 10:34:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:21.964 10:34:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:21.964 10:34:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:25:21.964 10:34:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:21.964 10:34:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:21.964 10:34:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:21.964 10:34:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:21.964 10:34:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:25:22.901 10:34:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:25:22.901 10:34:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:22.901 10:34:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:22.901 10:34:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:22.901 10:34:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:24.803 10:34:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:24.803 10:34:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:24.803 10:34:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:25:24.803 10:34:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:24.803 10:34:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:24.803 
10:34:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:24.803 10:34:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:24.803 10:34:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:25:26.180 10:34:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:25:26.180 10:34:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:26.180 10:34:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:26.180 10:34:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:26.180 10:34:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:28.111 10:34:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:28.111 10:34:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:28.111 10:34:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:25:28.111 10:34:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:28.111 10:34:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:28.111 10:34:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:28.111 10:34:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:28.111 10:34:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:25:29.487 10:34:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:25:29.487 10:34:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:29.487 10:34:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:29.487 10:34:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:29.487 10:34:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:31.391 10:34:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:31.392 10:34:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:31.392 10:34:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:25:31.392 10:34:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:31.392 10:34:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:31.392 10:34:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:31.392 10:34:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:31.392 10:34:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:25:32.769 10:34:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:25:32.769 10:34:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:32.769 10:34:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:32.769 10:34:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:32.769 10:34:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:34.732 10:34:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:34.732 10:34:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:34.732 10:34:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:25:34.732 10:34:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:34.732 10:34:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:34.732 10:34:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:34.732 10:34:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:34.732 10:34:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:25:35.668 10:34:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:25:35.668 10:34:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:35.668 10:34:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:35.668 10:34:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:35.668 10:34:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:38.203 10:34:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:38.203 10:34:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:38.203 10:34:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:25:38.203 10:34:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:38.203 10:34:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:38.203 10:34:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:38.203 10:34:22 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:38.203 10:34:22 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:25:39.137 10:34:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:25:39.137 10:34:23 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:39.137 10:34:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:39.137 10:34:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:39.137 10:34:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:41.038 10:34:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:41.038 10:34:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:41.038 10:34:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:25:41.038 10:34:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:41.038 10:34:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:41.038 10:34:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:41.038 10:34:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:41.038 10:34:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:25:42.452 10:34:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:25:42.452 10:34:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:42.452 10:34:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:42.452 10:34:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:42.452 10:34:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:44.351 10:34:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:44.351 10:34:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:44.351 10:34:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:25:44.609 10:34:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:44.609 10:34:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:44.609 10:34:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:44.609 10:34:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:44.609 10:34:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:25:45.984 10:34:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:25:45.984 10:34:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:45.984 10:34:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:45.984 10:34:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 
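Each pass of this connect loop does the same two things on the host side: issue nvme connect against the next cnode, then waitforserial polls lsblk until a block device carrying the subsystem's SPDKn serial appears. A condensed sketch of that host-side step, using the host NQN/UUID and target address from this run (the variable names and the check-then-sleep ordering are illustrative, not the autotest helper verbatim):

#!/usr/bin/env bash
HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
i=9   # subsystem index; the test walks i over 1..11
nvme connect --hostnqn="nqn.2014-08.org.nvmexpress:uuid:${HOSTID}" --hostid="${HOSTID}" \
    -t tcp -n "nqn.2016-06.io.spdk:cnode${i}" -a 10.0.0.2 -s 4420
# Wait until lsblk lists a namespace whose serial matches the one set on the subsystem.
tries=0
until [ "$(lsblk -l -o NAME,SERIAL | grep -c "SPDK${i}")" -ge 1 ]; do
    tries=$((tries + 1))
    if [ "$tries" -gt 15 ]; then
        echo "namespace for SPDK${i} never appeared" >&2
        exit 1
    fi
    sleep 2
done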
00:25:45.984 10:34:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:47.905 10:34:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:47.905 10:34:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:47.905 10:34:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:25:47.905 10:34:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:47.905 10:34:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:47.905 10:34:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:47.905 10:34:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:47.905 10:34:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:25:49.279 10:34:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:25:49.279 10:34:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:49.279 10:34:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:49.279 10:34:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:49.279 10:34:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:51.183 10:34:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:51.183 10:34:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:51.183 10:34:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:25:51.183 10:34:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:51.183 10:34:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:51.183 10:34:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:51.183 10:34:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:51.183 10:34:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:25:52.561 10:34:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:25:52.561 10:34:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:52.561 10:34:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:52.561 10:34:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:52.561 10:34:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:54.469 10:34:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:54.469 10:34:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o 
NAME,SERIAL 00:25:54.469 10:34:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:25:54.469 10:34:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:54.469 10:34:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:54.469 10:34:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:54.469 10:34:39 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:25:54.728 [global] 00:25:54.728 thread=1 00:25:54.728 invalidate=1 00:25:54.728 rw=read 00:25:54.728 time_based=1 00:25:54.728 runtime=10 00:25:54.728 ioengine=libaio 00:25:54.728 direct=1 00:25:54.728 bs=262144 00:25:54.728 iodepth=64 00:25:54.728 norandommap=1 00:25:54.728 numjobs=1 00:25:54.728 00:25:54.728 [job0] 00:25:54.728 filename=/dev/nvme0n1 00:25:54.728 [job1] 00:25:54.728 filename=/dev/nvme10n1 00:25:54.728 [job2] 00:25:54.728 filename=/dev/nvme1n1 00:25:54.728 [job3] 00:25:54.728 filename=/dev/nvme2n1 00:25:54.728 [job4] 00:25:54.728 filename=/dev/nvme3n1 00:25:54.728 [job5] 00:25:54.728 filename=/dev/nvme4n1 00:25:54.728 [job6] 00:25:54.728 filename=/dev/nvme5n1 00:25:54.728 [job7] 00:25:54.728 filename=/dev/nvme6n1 00:25:54.728 [job8] 00:25:54.728 filename=/dev/nvme7n1 00:25:54.728 [job9] 00:25:54.728 filename=/dev/nvme8n1 00:25:54.728 [job10] 00:25:54.728 filename=/dev/nvme9n1 00:25:54.728 Could not set queue depth (nvme0n1) 00:25:54.728 Could not set queue depth (nvme10n1) 00:25:54.728 Could not set queue depth (nvme1n1) 00:25:54.728 Could not set queue depth (nvme2n1) 00:25:54.728 Could not set queue depth (nvme3n1) 00:25:54.728 Could not set queue depth (nvme4n1) 00:25:54.728 Could not set queue depth (nvme5n1) 00:25:54.728 Could not set queue depth (nvme6n1) 00:25:54.728 Could not set queue depth (nvme7n1) 00:25:54.728 Could not set queue depth (nvme8n1) 00:25:54.728 Could not set queue depth (nvme9n1) 00:25:54.988 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:54.988 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:54.988 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:54.988 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:54.988 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:54.988 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:54.988 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:54.988 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:54.988 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:54.988 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:54.988 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:54.988 fio-3.35 00:25:54.988 Starting 11 threads 00:26:07.206 00:26:07.206 job0: 
(groupid=0, jobs=1): err= 0: pid=2483663: Sun Jul 14 10:34:50 2024 00:26:07.206 read: IOPS=679, BW=170MiB/s (178MB/s)(1713MiB/10078msec) 00:26:07.206 slat (usec): min=10, max=96927, avg=1137.27, stdev=3955.38 00:26:07.206 clat (usec): min=1324, max=215029, avg=92912.86, stdev=34910.13 00:26:07.206 lat (usec): min=1351, max=215060, avg=94050.13, stdev=35423.01 00:26:07.206 clat percentiles (msec): 00:26:07.206 | 1.00th=[ 7], 5.00th=[ 24], 10.00th=[ 41], 20.00th=[ 66], 00:26:07.206 | 30.00th=[ 82], 40.00th=[ 91], 50.00th=[ 99], 60.00th=[ 105], 00:26:07.206 | 70.00th=[ 113], 80.00th=[ 124], 90.00th=[ 134], 95.00th=[ 142], 00:26:07.206 | 99.00th=[ 159], 99.50th=[ 165], 99.90th=[ 174], 99.95th=[ 178], 00:26:07.206 | 99.99th=[ 215] 00:26:07.206 bw ( KiB/s): min=118784, max=273920, per=7.74%, avg=173752.95, stdev=43457.34, samples=20 00:26:07.206 iops : min= 464, max= 1070, avg=678.70, stdev=169.74, samples=20 00:26:07.206 lat (msec) : 2=0.10%, 4=0.20%, 10=1.33%, 20=2.36%, 50=10.41% 00:26:07.206 lat (msec) : 100=39.11%, 250=46.48% 00:26:07.206 cpu : usr=0.32%, sys=2.38%, ctx=1475, majf=0, minf=4097 00:26:07.206 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:26:07.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:07.206 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:07.206 issued rwts: total=6852,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:07.206 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:07.206 job1: (groupid=0, jobs=1): err= 0: pid=2483664: Sun Jul 14 10:34:50 2024 00:26:07.206 read: IOPS=578, BW=145MiB/s (152MB/s)(1458MiB/10073msec) 00:26:07.206 slat (usec): min=14, max=77721, avg=1258.73, stdev=4527.68 00:26:07.206 clat (usec): min=1149, max=229844, avg=109209.33, stdev=36503.50 00:26:07.206 lat (usec): min=1179, max=229875, avg=110468.05, stdev=37099.93 00:26:07.206 clat percentiles (msec): 00:26:07.206 | 1.00th=[ 9], 5.00th=[ 34], 10.00th=[ 53], 20.00th=[ 87], 00:26:07.206 | 30.00th=[ 100], 40.00th=[ 108], 50.00th=[ 115], 60.00th=[ 124], 00:26:07.206 | 70.00th=[ 131], 80.00th=[ 138], 90.00th=[ 148], 95.00th=[ 157], 00:26:07.206 | 99.00th=[ 184], 99.50th=[ 197], 99.90th=[ 207], 99.95th=[ 226], 00:26:07.206 | 99.99th=[ 230] 00:26:07.206 bw ( KiB/s): min=116224, max=225280, per=6.57%, avg=147594.45, stdev=29039.07, samples=20 00:26:07.206 iops : min= 454, max= 880, avg=576.50, stdev=113.43, samples=20 00:26:07.206 lat (msec) : 2=0.07%, 4=0.03%, 10=1.23%, 20=2.01%, 50=5.54% 00:26:07.206 lat (msec) : 100=22.18%, 250=68.94% 00:26:07.206 cpu : usr=0.21%, sys=2.36%, ctx=1381, majf=0, minf=3347 00:26:07.206 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:26:07.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:07.206 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:07.206 issued rwts: total=5830,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:07.206 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:07.206 job2: (groupid=0, jobs=1): err= 0: pid=2483667: Sun Jul 14 10:34:50 2024 00:26:07.206 read: IOPS=873, BW=218MiB/s (229MB/s)(2200MiB/10077msec) 00:26:07.206 slat (usec): min=8, max=49314, avg=611.29, stdev=2615.97 00:26:07.206 clat (usec): min=690, max=201648, avg=72586.33, stdev=39602.46 00:26:07.206 lat (usec): min=718, max=201678, avg=73197.62, stdev=39856.72 00:26:07.206 clat percentiles (msec): 00:26:07.206 | 1.00th=[ 3], 5.00th=[ 5], 10.00th=[ 9], 20.00th=[ 34], 00:26:07.206 | 
30.00th=[ 58], 40.00th=[ 68], 50.00th=[ 73], 60.00th=[ 84], 00:26:07.206 | 70.00th=[ 96], 80.00th=[ 108], 90.00th=[ 122], 95.00th=[ 133], 00:26:07.206 | 99.00th=[ 165], 99.50th=[ 176], 99.90th=[ 199], 99.95th=[ 203], 00:26:07.206 | 99.99th=[ 203] 00:26:07.206 bw ( KiB/s): min=134656, max=538112, per=9.96%, avg=223667.25, stdev=84652.11, samples=20 00:26:07.206 iops : min= 526, max= 2102, avg=873.70, stdev=330.67, samples=20 00:26:07.206 lat (usec) : 750=0.06%, 1000=0.42% 00:26:07.206 lat (msec) : 2=0.51%, 4=2.99%, 10=6.74%, 20=3.75%, 50=11.48% 00:26:07.206 lat (msec) : 100=48.16%, 250=25.89% 00:26:07.206 cpu : usr=0.28%, sys=3.11%, ctx=2153, majf=0, minf=4097 00:26:07.206 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:26:07.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:07.206 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:07.206 issued rwts: total=8801,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:07.206 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:07.206 job3: (groupid=0, jobs=1): err= 0: pid=2483670: Sun Jul 14 10:34:50 2024 00:26:07.206 read: IOPS=880, BW=220MiB/s (231MB/s)(2216MiB/10071msec) 00:26:07.206 slat (usec): min=11, max=54424, avg=946.21, stdev=3087.42 00:26:07.206 clat (msec): min=2, max=200, avg=71.70, stdev=32.00 00:26:07.206 lat (msec): min=2, max=206, avg=72.65, stdev=32.42 00:26:07.206 clat percentiles (msec): 00:26:07.206 | 1.00th=[ 5], 5.00th=[ 17], 10.00th=[ 30], 20.00th=[ 45], 00:26:07.206 | 30.00th=[ 53], 40.00th=[ 63], 50.00th=[ 72], 60.00th=[ 81], 00:26:07.206 | 70.00th=[ 90], 80.00th=[ 100], 90.00th=[ 112], 95.00th=[ 123], 00:26:07.206 | 99.00th=[ 148], 99.50th=[ 155], 99.90th=[ 184], 99.95th=[ 184], 00:26:07.206 | 99.99th=[ 201] 00:26:07.206 bw ( KiB/s): min=135680, max=397312, per=10.03%, avg=225261.40, stdev=76035.32, samples=20 00:26:07.206 iops : min= 530, max= 1552, avg=879.85, stdev=297.05, samples=20 00:26:07.206 lat (msec) : 4=0.73%, 10=2.35%, 20=2.84%, 50=20.22%, 100=54.80% 00:26:07.206 lat (msec) : 250=19.06% 00:26:07.206 cpu : usr=0.34%, sys=3.33%, ctx=1747, majf=0, minf=4097 00:26:07.206 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:26:07.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:07.206 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:07.206 issued rwts: total=8863,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:07.206 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:07.206 job4: (groupid=0, jobs=1): err= 0: pid=2483671: Sun Jul 14 10:34:50 2024 00:26:07.206 read: IOPS=876, BW=219MiB/s (230MB/s)(2208MiB/10073msec) 00:26:07.206 slat (usec): min=10, max=91253, avg=626.95, stdev=3187.02 00:26:07.206 clat (usec): min=981, max=190333, avg=72305.17, stdev=43605.42 00:26:07.206 lat (usec): min=1009, max=213503, avg=72932.13, stdev=43979.57 00:26:07.207 clat percentiles (msec): 00:26:07.207 | 1.00th=[ 4], 5.00th=[ 9], 10.00th=[ 17], 20.00th=[ 28], 00:26:07.207 | 30.00th=[ 41], 40.00th=[ 59], 50.00th=[ 70], 60.00th=[ 81], 00:26:07.207 | 70.00th=[ 96], 80.00th=[ 118], 90.00th=[ 136], 95.00th=[ 146], 00:26:07.207 | 99.00th=[ 161], 99.50th=[ 165], 99.90th=[ 184], 99.95th=[ 184], 00:26:07.207 | 99.99th=[ 190] 00:26:07.207 bw ( KiB/s): min=108544, max=515584, per=10.00%, avg=224441.45, stdev=83773.28, samples=20 00:26:07.207 iops : min= 424, max= 2014, avg=876.70, stdev=327.25, samples=20 00:26:07.207 lat (usec) : 1000=0.01% 00:26:07.207 lat 
(msec) : 2=0.44%, 4=1.02%, 10=4.82%, 20=5.59%, 50=22.76% 00:26:07.207 lat (msec) : 100=37.88%, 250=27.47% 00:26:07.207 cpu : usr=0.36%, sys=3.04%, ctx=2022, majf=0, minf=4097 00:26:07.207 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:26:07.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:07.207 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:07.207 issued rwts: total=8831,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:07.207 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:07.207 job5: (groupid=0, jobs=1): err= 0: pid=2483672: Sun Jul 14 10:34:50 2024 00:26:07.207 read: IOPS=590, BW=148MiB/s (155MB/s)(1486MiB/10064msec) 00:26:07.207 slat (usec): min=10, max=86104, avg=1431.31, stdev=4649.93 00:26:07.207 clat (usec): min=660, max=189227, avg=106812.65, stdev=36776.60 00:26:07.207 lat (usec): min=694, max=193303, avg=108243.96, stdev=37465.32 00:26:07.207 clat percentiles (msec): 00:26:07.207 | 1.00th=[ 8], 5.00th=[ 27], 10.00th=[ 47], 20.00th=[ 82], 00:26:07.207 | 30.00th=[ 97], 40.00th=[ 107], 50.00th=[ 115], 60.00th=[ 123], 00:26:07.207 | 70.00th=[ 130], 80.00th=[ 136], 90.00th=[ 146], 95.00th=[ 155], 00:26:07.207 | 99.00th=[ 169], 99.50th=[ 174], 99.90th=[ 184], 99.95th=[ 186], 00:26:07.207 | 99.99th=[ 190] 00:26:07.207 bw ( KiB/s): min=104960, max=301568, per=6.71%, avg=150538.35, stdev=45140.53, samples=20 00:26:07.207 iops : min= 410, max= 1178, avg=588.00, stdev=176.33, samples=20 00:26:07.207 lat (usec) : 750=0.05%, 1000=0.03% 00:26:07.207 lat (msec) : 2=0.08%, 4=0.07%, 10=1.53%, 20=2.47%, 50=6.51% 00:26:07.207 lat (msec) : 100=21.70%, 250=67.55% 00:26:07.207 cpu : usr=0.23%, sys=2.52%, ctx=1263, majf=0, minf=4097 00:26:07.207 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:26:07.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:07.207 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:07.207 issued rwts: total=5944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:07.207 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:07.207 job6: (groupid=0, jobs=1): err= 0: pid=2483673: Sun Jul 14 10:34:50 2024 00:26:07.207 read: IOPS=895, BW=224MiB/s (235MB/s)(2256MiB/10077msec) 00:26:07.207 slat (usec): min=10, max=62710, avg=756.68, stdev=2948.07 00:26:07.207 clat (usec): min=987, max=183035, avg=70647.02, stdev=38977.25 00:26:07.207 lat (usec): min=1028, max=206028, avg=71403.69, stdev=39359.68 00:26:07.207 clat percentiles (msec): 00:26:07.207 | 1.00th=[ 3], 5.00th=[ 11], 10.00th=[ 20], 20.00th=[ 36], 00:26:07.207 | 30.00th=[ 46], 40.00th=[ 55], 50.00th=[ 67], 60.00th=[ 79], 00:26:07.207 | 70.00th=[ 94], 80.00th=[ 107], 90.00th=[ 126], 95.00th=[ 138], 00:26:07.207 | 99.00th=[ 163], 99.50th=[ 167], 99.90th=[ 178], 99.95th=[ 182], 00:26:07.207 | 99.99th=[ 184] 00:26:07.207 bw ( KiB/s): min=122368, max=353280, per=10.22%, avg=229353.45, stdev=61424.40, samples=20 00:26:07.207 iops : min= 478, max= 1380, avg=895.90, stdev=239.94, samples=20 00:26:07.207 lat (usec) : 1000=0.01% 00:26:07.207 lat (msec) : 2=0.40%, 4=1.49%, 10=2.91%, 20=5.41%, 50=23.93% 00:26:07.207 lat (msec) : 100=41.39%, 250=24.46% 00:26:07.207 cpu : usr=0.32%, sys=3.31%, ctx=1958, majf=0, minf=4097 00:26:07.207 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:26:07.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:07.207 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.1%, >=64=0.0% 00:26:07.207 issued rwts: total=9023,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:07.207 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:07.207 job7: (groupid=0, jobs=1): err= 0: pid=2483674: Sun Jul 14 10:34:50 2024 00:26:07.207 read: IOPS=667, BW=167MiB/s (175MB/s)(1679MiB/10070msec) 00:26:07.207 slat (usec): min=9, max=52367, avg=1002.93, stdev=3651.49 00:26:07.207 clat (usec): min=695, max=188722, avg=94856.80, stdev=40365.61 00:26:07.207 lat (usec): min=722, max=188753, avg=95859.74, stdev=40982.26 00:26:07.207 clat percentiles (msec): 00:26:07.207 | 1.00th=[ 3], 5.00th=[ 14], 10.00th=[ 36], 20.00th=[ 57], 00:26:07.207 | 30.00th=[ 73], 40.00th=[ 92], 50.00th=[ 104], 60.00th=[ 115], 00:26:07.207 | 70.00th=[ 124], 80.00th=[ 132], 90.00th=[ 140], 95.00th=[ 146], 00:26:07.207 | 99.00th=[ 161], 99.50th=[ 171], 99.90th=[ 184], 99.95th=[ 184], 00:26:07.207 | 99.99th=[ 188] 00:26:07.207 bw ( KiB/s): min=112640, max=276480, per=7.59%, avg=170345.55, stdev=48435.79, samples=20 00:26:07.207 iops : min= 440, max= 1080, avg=665.35, stdev=189.19, samples=20 00:26:07.207 lat (usec) : 750=0.01%, 1000=0.07% 00:26:07.207 lat (msec) : 2=0.22%, 4=1.61%, 10=2.25%, 20=2.74%, 50=9.01% 00:26:07.207 lat (msec) : 100=30.95%, 250=53.13% 00:26:07.207 cpu : usr=0.21%, sys=2.23%, ctx=1573, majf=0, minf=4097 00:26:07.207 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:26:07.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:07.207 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:07.207 issued rwts: total=6717,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:07.207 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:07.207 job8: (groupid=0, jobs=1): err= 0: pid=2483675: Sun Jul 14 10:34:50 2024 00:26:07.207 read: IOPS=701, BW=175MiB/s (184MB/s)(1769MiB/10083msec) 00:26:07.207 slat (usec): min=10, max=121176, avg=1063.69, stdev=3849.72 00:26:07.207 clat (usec): min=1805, max=217458, avg=90058.10, stdev=44121.58 00:26:07.207 lat (usec): min=1940, max=217488, avg=91121.79, stdev=44693.24 00:26:07.207 clat percentiles (msec): 00:26:07.207 | 1.00th=[ 7], 5.00th=[ 17], 10.00th=[ 25], 20.00th=[ 37], 00:26:07.207 | 30.00th=[ 69], 40.00th=[ 86], 50.00th=[ 97], 60.00th=[ 109], 00:26:07.207 | 70.00th=[ 123], 80.00th=[ 132], 90.00th=[ 140], 95.00th=[ 150], 00:26:07.207 | 99.00th=[ 174], 99.50th=[ 186], 99.90th=[ 201], 99.95th=[ 209], 00:26:07.207 | 99.99th=[ 218] 00:26:07.207 bw ( KiB/s): min=108544, max=447488, per=8.00%, avg=179487.25, stdev=83199.94, samples=20 00:26:07.207 iops : min= 424, max= 1748, avg=701.10, stdev=324.99, samples=20 00:26:07.207 lat (msec) : 2=0.04%, 4=0.28%, 10=1.71%, 20=4.66%, 50=17.14% 00:26:07.207 lat (msec) : 100=29.02%, 250=47.14% 00:26:07.207 cpu : usr=0.25%, sys=2.29%, ctx=1549, majf=0, minf=4097 00:26:07.207 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:26:07.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:07.207 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:07.207 issued rwts: total=7075,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:07.207 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:07.207 job9: (groupid=0, jobs=1): err= 0: pid=2483676: Sun Jul 14 10:34:50 2024 00:26:07.207 read: IOPS=1232, BW=308MiB/s (323MB/s)(3101MiB/10066msec) 00:26:07.207 slat (usec): min=9, max=72329, avg=774.68, stdev=2603.31 00:26:07.207 clat (msec): min=2, max=191, 
avg=51.11, stdev=32.14 00:26:07.207 lat (msec): min=2, max=192, avg=51.88, stdev=32.61 00:26:07.207 clat percentiles (msec): 00:26:07.207 | 1.00th=[ 9], 5.00th=[ 25], 10.00th=[ 26], 20.00th=[ 28], 00:26:07.207 | 30.00th=[ 29], 40.00th=[ 31], 50.00th=[ 36], 60.00th=[ 49], 00:26:07.207 | 70.00th=[ 59], 80.00th=[ 73], 90.00th=[ 103], 95.00th=[ 125], 00:26:07.207 | 99.00th=[ 148], 99.50th=[ 157], 99.90th=[ 167], 99.95th=[ 190], 00:26:07.207 | 99.99th=[ 192] 00:26:07.207 bw ( KiB/s): min=115430, max=592896, per=14.07%, avg=315899.85, stdev=139583.47, samples=20 00:26:07.207 iops : min= 450, max= 2316, avg=1233.90, stdev=545.36, samples=20 00:26:07.207 lat (msec) : 4=0.31%, 10=0.90%, 20=1.14%, 50=59.73%, 100=27.21% 00:26:07.207 lat (msec) : 250=10.71% 00:26:07.207 cpu : usr=0.45%, sys=3.98%, ctx=2248, majf=0, minf=4097 00:26:07.207 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:26:07.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:07.207 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:07.207 issued rwts: total=12403,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:07.207 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:07.207 job10: (groupid=0, jobs=1): err= 0: pid=2483677: Sun Jul 14 10:34:50 2024 00:26:07.207 read: IOPS=802, BW=201MiB/s (210MB/s)(2020MiB/10065msec) 00:26:07.207 slat (usec): min=10, max=46961, avg=1050.15, stdev=3162.71 00:26:07.207 clat (msec): min=2, max=173, avg=78.61, stdev=29.63 00:26:07.207 lat (msec): min=2, max=182, avg=79.66, stdev=29.93 00:26:07.207 clat percentiles (msec): 00:26:07.207 | 1.00th=[ 23], 5.00th=[ 28], 10.00th=[ 41], 20.00th=[ 52], 00:26:07.207 | 30.00th=[ 61], 40.00th=[ 71], 50.00th=[ 80], 60.00th=[ 88], 00:26:07.207 | 70.00th=[ 95], 80.00th=[ 105], 90.00th=[ 117], 95.00th=[ 127], 00:26:07.207 | 99.00th=[ 146], 99.50th=[ 153], 99.90th=[ 169], 99.95th=[ 169], 00:26:07.207 | 99.99th=[ 174] 00:26:07.207 bw ( KiB/s): min=137728, max=438784, per=9.14%, avg=205213.15, stdev=70412.69, samples=20 00:26:07.207 iops : min= 538, max= 1714, avg=801.55, stdev=275.09, samples=20 00:26:07.207 lat (msec) : 4=0.01%, 10=0.27%, 20=0.53%, 50=18.05%, 100=57.30% 00:26:07.207 lat (msec) : 250=23.84% 00:26:07.207 cpu : usr=0.29%, sys=2.94%, ctx=1683, majf=0, minf=4097 00:26:07.207 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:26:07.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:07.207 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:07.207 issued rwts: total=8079,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:07.207 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:07.207 00:26:07.207 Run status group 0 (all jobs): 00:26:07.207 READ: bw=2192MiB/s (2299MB/s), 145MiB/s-308MiB/s (152MB/s-323MB/s), io=21.6GiB (23.2GB), run=10064-10083msec 00:26:07.207 00:26:07.207 Disk stats (read/write): 00:26:07.207 nvme0n1: ios=13501/0, merge=0/0, ticks=1236645/0, in_queue=1236645, util=97.20% 00:26:07.207 nvme10n1: ios=11453/0, merge=0/0, ticks=1232419/0, in_queue=1232419, util=97.40% 00:26:07.207 nvme1n1: ios=17371/0, merge=0/0, ticks=1242011/0, in_queue=1242011, util=97.65% 00:26:07.207 nvme2n1: ios=17518/0, merge=0/0, ticks=1231048/0, in_queue=1231048, util=97.80% 00:26:07.207 nvme3n1: ios=17455/0, merge=0/0, ticks=1241070/0, in_queue=1241070, util=97.88% 00:26:07.207 nvme4n1: ios=11675/0, merge=0/0, ticks=1233482/0, in_queue=1233482, util=98.22% 00:26:07.207 nvme5n1: ios=17800/0, 
merge=0/0, ticks=1239516/0, in_queue=1239516, util=98.37% 00:26:07.207 nvme6n1: ios=13224/0, merge=0/0, ticks=1237896/0, in_queue=1237896, util=98.49% 00:26:07.207 nvme7n1: ios=13922/0, merge=0/0, ticks=1237128/0, in_queue=1237128, util=98.94% 00:26:07.207 nvme8n1: ios=24589/0, merge=0/0, ticks=1234869/0, in_queue=1234869, util=99.12% 00:26:07.208 nvme9n1: ios=15942/0, merge=0/0, ticks=1237431/0, in_queue=1237431, util=99.24% 00:26:07.208 10:34:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:26:07.208 [global] 00:26:07.208 thread=1 00:26:07.208 invalidate=1 00:26:07.208 rw=randwrite 00:26:07.208 time_based=1 00:26:07.208 runtime=10 00:26:07.208 ioengine=libaio 00:26:07.208 direct=1 00:26:07.208 bs=262144 00:26:07.208 iodepth=64 00:26:07.208 norandommap=1 00:26:07.208 numjobs=1 00:26:07.208 00:26:07.208 [job0] 00:26:07.208 filename=/dev/nvme0n1 00:26:07.208 [job1] 00:26:07.208 filename=/dev/nvme10n1 00:26:07.208 [job2] 00:26:07.208 filename=/dev/nvme1n1 00:26:07.208 [job3] 00:26:07.208 filename=/dev/nvme2n1 00:26:07.208 [job4] 00:26:07.208 filename=/dev/nvme3n1 00:26:07.208 [job5] 00:26:07.208 filename=/dev/nvme4n1 00:26:07.208 [job6] 00:26:07.208 filename=/dev/nvme5n1 00:26:07.208 [job7] 00:26:07.208 filename=/dev/nvme6n1 00:26:07.208 [job8] 00:26:07.208 filename=/dev/nvme7n1 00:26:07.208 [job9] 00:26:07.208 filename=/dev/nvme8n1 00:26:07.208 [job10] 00:26:07.208 filename=/dev/nvme9n1 00:26:07.208 Could not set queue depth (nvme0n1) 00:26:07.208 Could not set queue depth (nvme10n1) 00:26:07.208 Could not set queue depth (nvme1n1) 00:26:07.208 Could not set queue depth (nvme2n1) 00:26:07.208 Could not set queue depth (nvme3n1) 00:26:07.208 Could not set queue depth (nvme4n1) 00:26:07.208 Could not set queue depth (nvme5n1) 00:26:07.208 Could not set queue depth (nvme6n1) 00:26:07.208 Could not set queue depth (nvme7n1) 00:26:07.208 Could not set queue depth (nvme8n1) 00:26:07.208 Could not set queue depth (nvme9n1) 00:26:07.208 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:07.208 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:07.208 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:07.208 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:07.208 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:07.208 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:07.208 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:07.208 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:07.208 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:07.208 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:07.208 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:07.208 fio-3.35 00:26:07.208 Starting 11 threads 
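This second fio-wrapper pass reuses the exact job shape of the read pass above, only with rw=randwrite. The [global] and [jobN] lines the wrapper prints correspond to an ordinary fio job file; a standalone reconstruction from those logged options (the file name mc-randwrite.fio is illustrative) that could be run directly against the connected namespaces looks like:

cat > mc-randwrite.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=randwrite
time_based=1
runtime=10
ioengine=libaio
direct=1
bs=262144
iodepth=64
norandommap=1
numjobs=1

[job0]
filename=/dev/nvme0n1
[job1]
filename=/dev/nvme10n1
; ...one [jobN] stanza per namespace, through job10 on /dev/nvme9n1
EOF
fio mc-randwrite.fio

The 262144-byte block size and queue depth of 64 map straight back to the wrapper's -i 262144 and -d 64 arguments, and the ten-second time_based runtime back to -r 10.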
00:26:17.211 00:26:17.211 job0: (groupid=0, jobs=1): err= 0: pid=2485207: Sun Jul 14 10:35:01 2024 00:26:17.211 write: IOPS=566, BW=142MiB/s (149MB/s)(1443MiB/10183msec); 0 zone resets 00:26:17.211 slat (usec): min=22, max=88323, avg=1674.85, stdev=4064.50 00:26:17.211 clat (msec): min=3, max=401, avg=111.22, stdev=63.95 00:26:17.211 lat (msec): min=4, max=401, avg=112.89, stdev=64.72 00:26:17.211 clat percentiles (msec): 00:26:17.211 | 1.00th=[ 20], 5.00th=[ 40], 10.00th=[ 46], 20.00th=[ 49], 00:26:17.211 | 30.00th=[ 68], 40.00th=[ 77], 50.00th=[ 96], 60.00th=[ 125], 00:26:17.211 | 70.00th=[ 142], 80.00th=[ 159], 90.00th=[ 211], 95.00th=[ 236], 00:26:17.211 | 99.00th=[ 266], 99.50th=[ 300], 99.90th=[ 388], 99.95th=[ 388], 00:26:17.211 | 99.99th=[ 401] 00:26:17.211 bw ( KiB/s): min=67584, max=324096, per=8.65%, avg=146069.85, stdev=79818.25, samples=20 00:26:17.211 iops : min= 264, max= 1266, avg=570.55, stdev=311.72, samples=20 00:26:17.211 lat (msec) : 4=0.02%, 10=0.24%, 20=0.85%, 50=22.32%, 100=28.01% 00:26:17.211 lat (msec) : 250=45.62%, 500=2.95% 00:26:17.211 cpu : usr=1.34%, sys=1.95%, ctx=1587, majf=0, minf=1 00:26:17.211 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:26:17.211 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.211 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:17.211 issued rwts: total=0,5770,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.211 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:17.211 job1: (groupid=0, jobs=1): err= 0: pid=2485221: Sun Jul 14 10:35:01 2024 00:26:17.211 write: IOPS=444, BW=111MiB/s (116MB/s)(1131MiB/10188msec); 0 zone resets 00:26:17.211 slat (usec): min=18, max=131795, avg=1832.20, stdev=4894.60 00:26:17.211 clat (msec): min=2, max=388, avg=142.23, stdev=61.06 00:26:17.211 lat (msec): min=2, max=389, avg=144.06, stdev=61.93 00:26:17.211 clat percentiles (msec): 00:26:17.211 | 1.00th=[ 9], 5.00th=[ 31], 10.00th=[ 61], 20.00th=[ 101], 00:26:17.211 | 30.00th=[ 114], 40.00th=[ 130], 50.00th=[ 138], 60.00th=[ 155], 00:26:17.211 | 70.00th=[ 169], 80.00th=[ 186], 90.00th=[ 222], 95.00th=[ 247], 00:26:17.211 | 99.00th=[ 288], 99.50th=[ 321], 99.90th=[ 376], 99.95th=[ 376], 00:26:17.211 | 99.99th=[ 388] 00:26:17.211 bw ( KiB/s): min=67584, max=159744, per=6.76%, avg=114167.20, stdev=27278.04, samples=20 00:26:17.211 iops : min= 264, max= 624, avg=445.95, stdev=106.57, samples=20 00:26:17.211 lat (msec) : 4=0.15%, 10=1.06%, 20=1.83%, 50=5.24%, 100=12.05% 00:26:17.211 lat (msec) : 250=75.20%, 500=4.47% 00:26:17.211 cpu : usr=1.08%, sys=1.33%, ctx=2132, majf=0, minf=1 00:26:17.211 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:26:17.211 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.212 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:17.212 issued rwts: total=0,4524,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.212 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:17.212 job2: (groupid=0, jobs=1): err= 0: pid=2485222: Sun Jul 14 10:35:01 2024 00:26:17.212 write: IOPS=643, BW=161MiB/s (169MB/s)(1640MiB/10186msec); 0 zone resets 00:26:17.212 slat (usec): min=28, max=114819, avg=1424.65, stdev=3150.61 00:26:17.212 clat (msec): min=2, max=410, avg=97.89, stdev=44.41 00:26:17.212 lat (msec): min=2, max=410, avg=99.31, stdev=44.90 00:26:17.212 clat percentiles (msec): 00:26:17.212 | 1.00th=[ 17], 5.00th=[ 39], 10.00th=[ 45], 20.00th=[ 70], 
00:26:17.212 | 30.00th=[ 73], 40.00th=[ 78], 50.00th=[ 92], 60.00th=[ 104], 00:26:17.212 | 70.00th=[ 117], 80.00th=[ 136], 90.00th=[ 150], 95.00th=[ 169], 00:26:17.212 | 99.00th=[ 220], 99.50th=[ 305], 99.90th=[ 384], 99.95th=[ 397], 00:26:17.212 | 99.99th=[ 409] 00:26:17.212 bw ( KiB/s): min=77824, max=302592, per=9.84%, avg=166259.65, stdev=57561.43, samples=20 00:26:17.212 iops : min= 304, max= 1182, avg=649.45, stdev=224.85, samples=20 00:26:17.212 lat (msec) : 4=0.05%, 10=0.37%, 20=1.11%, 50=10.76%, 100=44.76% 00:26:17.212 lat (msec) : 250=42.25%, 500=0.70% 00:26:17.212 cpu : usr=1.77%, sys=2.02%, ctx=2111, majf=0, minf=1 00:26:17.212 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:26:17.212 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.212 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:17.212 issued rwts: total=0,6559,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.212 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:17.212 job3: (groupid=0, jobs=1): err= 0: pid=2485223: Sun Jul 14 10:35:01 2024 00:26:17.212 write: IOPS=539, BW=135MiB/s (141MB/s)(1358MiB/10073msec); 0 zone resets 00:26:17.212 slat (usec): min=25, max=77213, avg=1518.37, stdev=3970.72 00:26:17.212 clat (usec): min=1428, max=288025, avg=117143.32, stdev=62319.04 00:26:17.212 lat (usec): min=1475, max=298955, avg=118661.69, stdev=63126.41 00:26:17.212 clat percentiles (msec): 00:26:17.212 | 1.00th=[ 10], 5.00th=[ 28], 10.00th=[ 44], 20.00th=[ 72], 00:26:17.212 | 30.00th=[ 79], 40.00th=[ 88], 50.00th=[ 103], 60.00th=[ 116], 00:26:17.212 | 70.00th=[ 148], 80.00th=[ 180], 90.00th=[ 218], 95.00th=[ 234], 00:26:17.212 | 99.00th=[ 249], 99.50th=[ 251], 99.90th=[ 253], 99.95th=[ 271], 00:26:17.212 | 99.99th=[ 288] 00:26:17.212 bw ( KiB/s): min=65536, max=224768, per=8.13%, avg=137386.65, stdev=52761.81, samples=20 00:26:17.212 iops : min= 256, max= 878, avg=536.65, stdev=206.12, samples=20 00:26:17.212 lat (msec) : 2=0.04%, 4=0.15%, 10=1.03%, 20=2.60%, 50=8.38% 00:26:17.212 lat (msec) : 100=36.17%, 250=51.09%, 500=0.55% 00:26:17.212 cpu : usr=1.19%, sys=1.63%, ctx=2481, majf=0, minf=1 00:26:17.212 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:26:17.212 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.212 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:17.212 issued rwts: total=0,5430,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.212 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:17.212 job4: (groupid=0, jobs=1): err= 0: pid=2485224: Sun Jul 14 10:35:01 2024 00:26:17.212 write: IOPS=637, BW=159MiB/s (167MB/s)(1605MiB/10074msec); 0 zone resets 00:26:17.212 slat (usec): min=19, max=38689, avg=1378.44, stdev=2979.71 00:26:17.212 clat (msec): min=2, max=235, avg=99.03, stdev=47.34 00:26:17.212 lat (msec): min=3, max=236, avg=100.41, stdev=47.92 00:26:17.212 clat percentiles (msec): 00:26:17.212 | 1.00th=[ 28], 5.00th=[ 40], 10.00th=[ 41], 20.00th=[ 44], 00:26:17.212 | 30.00th=[ 67], 40.00th=[ 80], 50.00th=[ 100], 60.00th=[ 113], 00:26:17.212 | 70.00th=[ 130], 80.00th=[ 144], 90.00th=[ 165], 95.00th=[ 180], 00:26:17.212 | 99.00th=[ 211], 99.50th=[ 222], 99.90th=[ 234], 99.95th=[ 236], 00:26:17.212 | 99.99th=[ 236] 00:26:17.212 bw ( KiB/s): min=79872, max=376320, per=9.63%, avg=162704.65, stdev=76693.14, samples=20 00:26:17.212 iops : min= 312, max= 1470, avg=635.55, stdev=299.60, samples=20 00:26:17.212 lat (msec) : 
4=0.03%, 10=0.14%, 20=0.39%, 50=24.55%, 100=25.19% 00:26:17.212 lat (msec) : 250=49.70% 00:26:17.212 cpu : usr=1.50%, sys=1.97%, ctx=2137, majf=0, minf=1 00:26:17.212 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:26:17.212 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.212 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:17.212 issued rwts: total=0,6419,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.212 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:17.212 job5: (groupid=0, jobs=1): err= 0: pid=2485225: Sun Jul 14 10:35:01 2024 00:26:17.212 write: IOPS=595, BW=149MiB/s (156MB/s)(1515MiB/10182msec); 0 zone resets 00:26:17.212 slat (usec): min=21, max=80157, avg=1135.33, stdev=3139.06 00:26:17.212 clat (msec): min=2, max=405, avg=106.33, stdev=60.16 00:26:17.212 lat (msec): min=2, max=405, avg=107.47, stdev=60.77 00:26:17.212 clat percentiles (msec): 00:26:17.212 | 1.00th=[ 6], 5.00th=[ 14], 10.00th=[ 24], 20.00th=[ 63], 00:26:17.212 | 30.00th=[ 73], 40.00th=[ 90], 50.00th=[ 102], 60.00th=[ 110], 00:26:17.212 | 70.00th=[ 134], 80.00th=[ 159], 90.00th=[ 186], 95.00th=[ 209], 00:26:17.212 | 99.00th=[ 292], 99.50th=[ 309], 99.90th=[ 363], 99.95th=[ 380], 00:26:17.212 | 99.99th=[ 405] 00:26:17.212 bw ( KiB/s): min=83456, max=277504, per=9.09%, avg=153540.20, stdev=50245.87, samples=20 00:26:17.212 iops : min= 326, max= 1084, avg=599.75, stdev=196.30, samples=20 00:26:17.212 lat (msec) : 4=0.41%, 10=2.99%, 20=4.92%, 50=8.73%, 100=31.84% 00:26:17.212 lat (msec) : 250=49.38%, 500=1.73% 00:26:17.212 cpu : usr=1.38%, sys=2.03%, ctx=3401, majf=0, minf=1 00:26:17.212 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:26:17.212 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.212 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:17.212 issued rwts: total=0,6061,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.212 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:17.212 job6: (groupid=0, jobs=1): err= 0: pid=2485226: Sun Jul 14 10:35:01 2024 00:26:17.212 write: IOPS=607, BW=152MiB/s (159MB/s)(1529MiB/10073msec); 0 zone resets 00:26:17.212 slat (usec): min=27, max=111238, avg=1115.82, stdev=3849.43 00:26:17.212 clat (usec): min=1271, max=338896, avg=104230.38, stdev=64409.70 00:26:17.212 lat (usec): min=1801, max=338942, avg=105346.20, stdev=65135.74 00:26:17.212 clat percentiles (msec): 00:26:17.212 | 1.00th=[ 4], 5.00th=[ 10], 10.00th=[ 18], 20.00th=[ 38], 00:26:17.212 | 30.00th=[ 71], 40.00th=[ 91], 50.00th=[ 103], 60.00th=[ 125], 00:26:17.212 | 70.00th=[ 136], 80.00th=[ 153], 90.00th=[ 176], 95.00th=[ 232], 00:26:17.212 | 99.00th=[ 279], 99.50th=[ 284], 99.90th=[ 296], 99.95th=[ 300], 00:26:17.212 | 99.99th=[ 338] 00:26:17.212 bw ( KiB/s): min=71680, max=271360, per=9.18%, avg=154989.30, stdev=48774.21, samples=20 00:26:17.212 iops : min= 280, max= 1060, avg=605.40, stdev=190.52, samples=20 00:26:17.212 lat (msec) : 2=0.10%, 4=0.95%, 10=4.56%, 20=6.08%, 50=13.32% 00:26:17.212 lat (msec) : 100=23.10%, 250=48.46%, 500=3.43% 00:26:17.212 cpu : usr=1.30%, sys=1.92%, ctx=3670, majf=0, minf=1 00:26:17.212 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:26:17.212 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.212 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:17.212 issued rwts: total=0,6117,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:26:17.212 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:17.212 job7: (groupid=0, jobs=1): err= 0: pid=2485227: Sun Jul 14 10:35:01 2024 00:26:17.212 write: IOPS=523, BW=131MiB/s (137MB/s)(1333MiB/10186msec); 0 zone resets 00:26:17.212 slat (usec): min=20, max=86806, avg=1277.98, stdev=3621.71 00:26:17.212 clat (msec): min=2, max=393, avg=120.95, stdev=63.18 00:26:17.212 lat (msec): min=2, max=393, avg=122.22, stdev=63.90 00:26:17.212 clat percentiles (msec): 00:26:17.212 | 1.00th=[ 6], 5.00th=[ 16], 10.00th=[ 36], 20.00th=[ 66], 00:26:17.212 | 30.00th=[ 83], 40.00th=[ 108], 50.00th=[ 127], 60.00th=[ 136], 00:26:17.212 | 70.00th=[ 150], 80.00th=[ 174], 90.00th=[ 201], 95.00th=[ 220], 00:26:17.212 | 99.00th=[ 284], 99.50th=[ 300], 99.90th=[ 380], 99.95th=[ 380], 00:26:17.212 | 99.99th=[ 393] 00:26:17.212 bw ( KiB/s): min=74752, max=229376, per=7.98%, avg=134851.95, stdev=48424.86, samples=20 00:26:17.212 iops : min= 292, max= 896, avg=526.75, stdev=189.18, samples=20 00:26:17.212 lat (msec) : 4=0.47%, 10=2.19%, 20=4.16%, 50=8.14%, 100=22.32% 00:26:17.212 lat (msec) : 250=60.44%, 500=2.27% 00:26:17.212 cpu : usr=1.18%, sys=1.57%, ctx=3082, majf=0, minf=1 00:26:17.212 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:26:17.212 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.212 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:17.212 issued rwts: total=0,5331,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.212 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:17.212 job8: (groupid=0, jobs=1): err= 0: pid=2485228: Sun Jul 14 10:35:01 2024 00:26:17.212 write: IOPS=767, BW=192MiB/s (201MB/s)(1951MiB/10172msec); 0 zone resets 00:26:17.212 slat (usec): min=22, max=50863, avg=1098.45, stdev=2829.34 00:26:17.212 clat (usec): min=1270, max=372298, avg=82288.60, stdev=59246.41 00:26:17.212 lat (usec): min=1333, max=372356, avg=83387.05, stdev=59965.89 00:26:17.212 clat percentiles (msec): 00:26:17.212 | 1.00th=[ 11], 5.00th=[ 37], 10.00th=[ 39], 20.00th=[ 40], 00:26:17.212 | 30.00th=[ 41], 40.00th=[ 43], 50.00th=[ 67], 60.00th=[ 73], 00:26:17.212 | 70.00th=[ 88], 80.00th=[ 118], 90.00th=[ 182], 95.00th=[ 213], 00:26:17.212 | 99.00th=[ 268], 99.50th=[ 279], 99.90th=[ 347], 99.95th=[ 359], 00:26:17.212 | 99.99th=[ 372] 00:26:17.212 bw ( KiB/s): min=73728, max=405504, per=11.73%, avg=198148.50, stdev=101953.47, samples=20 00:26:17.212 iops : min= 288, max= 1584, avg=773.95, stdev=398.32, samples=20 00:26:17.212 lat (msec) : 2=0.05%, 4=0.13%, 10=0.79%, 20=1.04%, 50=41.87% 00:26:17.212 lat (msec) : 100=29.62%, 250=24.75%, 500=1.76% 00:26:17.212 cpu : usr=1.93%, sys=2.19%, ctx=2880, majf=0, minf=1 00:26:17.212 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:26:17.212 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.212 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:17.212 issued rwts: total=0,7803,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.212 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:17.212 job9: (groupid=0, jobs=1): err= 0: pid=2485229: Sun Jul 14 10:35:01 2024 00:26:17.212 write: IOPS=798, BW=200MiB/s (209MB/s)(2012MiB/10072msec); 0 zone resets 00:26:17.212 slat (usec): min=26, max=36965, avg=1077.52, stdev=2430.42 00:26:17.212 clat (usec): min=867, max=261935, avg=78999.03, stdev=42903.67 00:26:17.212 lat (usec): min=903, max=261986, avg=80076.55, 
stdev=43444.55 00:26:17.212 clat percentiles (msec): 00:26:17.212 | 1.00th=[ 4], 5.00th=[ 20], 10.00th=[ 39], 20.00th=[ 41], 00:26:17.212 | 30.00th=[ 43], 40.00th=[ 59], 50.00th=[ 74], 60.00th=[ 83], 00:26:17.212 | 70.00th=[ 103], 80.00th=[ 127], 90.00th=[ 138], 95.00th=[ 150], 00:26:17.212 | 99.00th=[ 176], 99.50th=[ 190], 99.90th=[ 245], 99.95th=[ 255], 00:26:17.212 | 99.99th=[ 262] 00:26:17.213 bw ( KiB/s): min=110592, max=410624, per=12.10%, avg=204369.95, stdev=85947.58, samples=20 00:26:17.213 iops : min= 432, max= 1604, avg=798.30, stdev=335.74, samples=20 00:26:17.213 lat (usec) : 1000=0.01% 00:26:17.213 lat (msec) : 2=0.45%, 4=0.57%, 10=1.45%, 20=2.63%, 50=30.21% 00:26:17.213 lat (msec) : 100=33.63%, 250=30.96%, 500=0.07% 00:26:17.213 cpu : usr=1.93%, sys=2.59%, ctx=2906, majf=0, minf=1 00:26:17.213 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:26:17.213 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.213 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:17.213 issued rwts: total=0,8046,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.213 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:17.213 job10: (groupid=0, jobs=1): err= 0: pid=2485230: Sun Jul 14 10:35:01 2024 00:26:17.213 write: IOPS=505, BW=126MiB/s (133MB/s)(1288MiB/10184msec); 0 zone resets 00:26:17.213 slat (usec): min=22, max=38214, avg=1487.09, stdev=3456.01 00:26:17.213 clat (usec): min=1549, max=410492, avg=125014.10, stdev=58192.18 00:26:17.213 lat (usec): min=1605, max=410535, avg=126501.19, stdev=58777.97 00:26:17.213 clat percentiles (msec): 00:26:17.213 | 1.00th=[ 6], 5.00th=[ 16], 10.00th=[ 34], 20.00th=[ 82], 00:26:17.213 | 30.00th=[ 108], 40.00th=[ 120], 50.00th=[ 132], 60.00th=[ 138], 00:26:17.213 | 70.00th=[ 146], 80.00th=[ 165], 90.00th=[ 194], 95.00th=[ 222], 00:26:17.213 | 99.00th=[ 255], 99.50th=[ 330], 99.90th=[ 397], 99.95th=[ 397], 00:26:17.213 | 99.99th=[ 409] 00:26:17.213 bw ( KiB/s): min=77824, max=234496, per=7.71%, avg=130185.35, stdev=38881.09, samples=20 00:26:17.213 iops : min= 304, max= 916, avg=508.50, stdev=151.85, samples=20 00:26:17.213 lat (msec) : 2=0.04%, 4=0.66%, 10=2.19%, 20=4.21%, 50=6.29% 00:26:17.213 lat (msec) : 100=12.17%, 250=73.20%, 500=1.22% 00:26:17.213 cpu : usr=1.24%, sys=1.64%, ctx=2480, majf=0, minf=1 00:26:17.213 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:26:17.213 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.213 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:17.213 issued rwts: total=0,5150,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.213 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:17.213 00:26:17.213 Run status group 0 (all jobs): 00:26:17.213 WRITE: bw=1649MiB/s (1729MB/s), 111MiB/s-200MiB/s (116MB/s-209MB/s), io=16.4GiB (17.6GB), run=10072-10188msec 00:26:17.213 00:26:17.213 Disk stats (read/write): 00:26:17.213 nvme0n1: ios=53/11532, merge=0/0, ticks=2816/1212795, in_queue=1215611, util=99.94% 00:26:17.213 nvme10n1: ios=47/9037, merge=0/0, ticks=341/1241066, in_queue=1241407, util=98.51% 00:26:17.213 nvme1n1: ios=45/13106, merge=0/0, ticks=1574/1228164, in_queue=1229738, util=100.00% 00:26:17.213 nvme2n1: ios=50/10623, merge=0/0, ticks=995/1217994, in_queue=1218989, util=100.00% 00:26:17.213 nvme3n1: ios=18/12601, merge=0/0, ticks=141/1214574, in_queue=1214715, util=97.86% 00:26:17.213 nvme4n1: ios=24/12114, merge=0/0, ticks=269/1248896, 
in_queue=1249165, util=99.90% 00:26:17.213 nvme5n1: ios=0/11982, merge=0/0, ticks=0/1223008, in_queue=1223008, util=98.25% 00:26:17.213 nvme6n1: ios=0/10649, merge=0/0, ticks=0/1249058, in_queue=1249058, util=98.41% 00:26:17.213 nvme7n1: ios=0/15434, merge=0/0, ticks=0/1206494, in_queue=1206494, util=98.75% 00:26:17.213 nvme8n1: ios=44/15842, merge=0/0, ticks=1680/1210253, in_queue=1211933, util=100.00% 00:26:17.213 nvme9n1: ios=0/10290, merge=0/0, ticks=0/1245113, in_queue=1245113, util=99.07% 00:26:17.213 10:35:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:26:17.213 10:35:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:26:17.213 10:35:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:17.213 10:35:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:17.213 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:17.213 10:35:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:26:17.213 10:35:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:17.213 10:35:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:17.213 10:35:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:26:17.213 10:35:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:17.213 10:35:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:26:17.213 10:35:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:17.213 10:35:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:17.213 10:35:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.213 10:35:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:17.213 10:35:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.213 10:35:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:17.213 10:35:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:26:17.213 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:26:17.213 10:35:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:26:17.213 10:35:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:17.213 10:35:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:17.213 10:35:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:26:17.213 10:35:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:17.213 10:35:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:26:17.213 10:35:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:17.213 10:35:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:17.213 10:35:02 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.213 10:35:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:17.213 10:35:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.213 10:35:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:17.213 10:35:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:26:17.473 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:26:17.473 10:35:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:26:17.473 10:35:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:17.473 10:35:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:17.473 10:35:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:26:17.473 10:35:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:17.473 10:35:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:26:17.473 10:35:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:17.473 10:35:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:26:17.473 10:35:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.473 10:35:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:17.473 10:35:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.473 10:35:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:17.473 10:35:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:26:17.732 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:26:17.732 10:35:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:26:17.732 10:35:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:17.732 10:35:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:17.732 10:35:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:26:17.732 10:35:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:26:17.732 10:35:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:17.732 10:35:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:17.732 10:35:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:26:17.732 10:35:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.732 10:35:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:17.732 10:35:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.732 10:35:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:17.732 10:35:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode5 00:26:17.992 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:26:17.992 10:35:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:26:17.992 10:35:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:17.992 10:35:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:18.251 10:35:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:26:18.252 10:35:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:26:18.252 10:35:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:18.252 10:35:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:18.252 10:35:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:26:18.252 10:35:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.252 10:35:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:18.252 10:35:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.252 10:35:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:18.252 10:35:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:26:18.509 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:26:18.509 10:35:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:26:18.509 10:35:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:18.509 10:35:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:18.509 10:35:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:26:18.509 10:35:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:18.509 10:35:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:26:18.509 10:35:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:18.509 10:35:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:26:18.509 10:35:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.509 10:35:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:18.509 10:35:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.509 10:35:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:18.509 10:35:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:26:18.509 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:26:18.509 10:35:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:26:18.509 10:35:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:18.509 10:35:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:18.509 10:35:03 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:26:18.509 10:35:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:18.509 10:35:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:26:18.509 10:35:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:18.509 10:35:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:26:18.509 10:35:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.509 10:35:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:18.509 10:35:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.509 10:35:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:18.509 10:35:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:26:18.766 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:26:18.766 10:35:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:26:18.766 10:35:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:18.766 10:35:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:18.766 10:35:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:26:18.766 10:35:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:26:18.766 10:35:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:18.766 10:35:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:18.766 10:35:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:26:18.766 10:35:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.766 10:35:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:18.766 10:35:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.766 10:35:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:18.766 10:35:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:26:18.766 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:26:18.766 10:35:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:26:18.766 10:35:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:19.024 10:35:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:19.024 10:35:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:26:19.024 10:35:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:19.024 10:35:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:26:19.024 10:35:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:19.024 10:35:03 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:26:19.024 10:35:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.024 10:35:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:19.024 10:35:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.024 10:35:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:19.024 10:35:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:26:19.024 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:26:19.024 10:35:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:26:19.024 10:35:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:19.024 10:35:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:19.024 10:35:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:26:19.024 10:35:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:19.024 10:35:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:26:19.024 10:35:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:19.024 10:35:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:26:19.024 10:35:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.024 10:35:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:19.024 10:35:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.024 10:35:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:19.024 10:35:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:26:19.024 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:26:19.024 10:35:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:26:19.024 10:35:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:19.024 10:35:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:19.024 10:35:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:26:19.283 10:35:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:19.283 10:35:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:26:19.283 10:35:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:19.283 10:35:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:26:19.283 10:35:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.283 10:35:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:19.283 10:35:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.283 10:35:04 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:26:19.283 10:35:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:26:19.283 10:35:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:26:19.283 10:35:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:19.283 10:35:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:26:19.283 10:35:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:19.283 10:35:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:26:19.283 10:35:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:19.283 10:35:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:19.283 rmmod nvme_tcp 00:26:19.283 rmmod nvme_fabrics 00:26:19.283 rmmod nvme_keyring 00:26:19.283 10:35:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:19.283 10:35:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:26:19.283 10:35:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:26:19.283 10:35:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 2477213 ']' 00:26:19.283 10:35:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 2477213 00:26:19.283 10:35:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@948 -- # '[' -z 2477213 ']' 00:26:19.283 10:35:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # kill -0 2477213 00:26:19.283 10:35:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # uname 00:26:19.283 10:35:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:19.283 10:35:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2477213 00:26:19.283 10:35:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:19.283 10:35:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:19.283 10:35:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2477213' 00:26:19.283 killing process with pid 2477213 00:26:19.283 10:35:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@967 -- # kill 2477213 00:26:19.283 10:35:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@972 -- # wait 2477213 00:26:19.850 10:35:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:19.850 10:35:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:19.850 10:35:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:19.850 10:35:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:19.850 10:35:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:19.850 10:35:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:19.851 10:35:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:19.851 10:35:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:21.756 10:35:06 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:21.756 
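(Editor's note, not part of the captured output: the xtrace above is multiconnection.sh tearing down its 11 subsystems, one "nvme disconnect" plus one nvmf_delete_subsystem RPC per cnode, followed by nvmftestfini unloading the kernel modules and killing the target. A condensed sketch, assuming the rpc.py path and that all subsystems follow the cnodeN naming shown in the trace:)

# Condensed teardown sketch (paths and loop bound assumed from the trace).
for i in $(seq 1 11); do
    nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
    scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
done
modprobe -v -r nvme-tcp        # matches the rmmod nvme_tcp / nvme_fabrics output above
modprobe -v -r nvme-fabrics
kill "$nvmfpid"                # nvmf_tgt PID; 2477213 in this run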
00:26:21.756 real 1m10.036s 00:26:21.756 user 4m7.655s 00:26:21.756 sys 0m24.704s 00:26:21.756 10:35:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:21.756 10:35:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:21.756 ************************************ 00:26:21.756 END TEST nvmf_multiconnection 00:26:21.756 ************************************ 00:26:21.756 10:35:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:21.756 10:35:06 nvmf_tcp -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:21.756 10:35:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:21.756 10:35:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:21.756 10:35:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:21.756 ************************************ 00:26:21.756 START TEST nvmf_initiator_timeout 00:26:21.756 ************************************ 00:26:21.756 10:35:06 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:22.016 * Looking for test storage... 00:26:22.016 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:22.016 10:35:06 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:22.016 10:35:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:26:22.016 10:35:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:22.016 10:35:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:22.016 10:35:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:22.016 10:35:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:22.016 10:35:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:22.016 10:35:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:22.016 10:35:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:22.016 10:35:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:22.016 10:35:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:22.016 10:35:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:22.016 10:35:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:22.016 10:35:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:22.016 10:35:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:22.016 10:35:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:22.016 10:35:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:22.016 10:35:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:22.016 10:35:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:22.016 10:35:06 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:22.016 10:35:06 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:22.016 10:35:06 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:22.016 10:35:06 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:22.016 10:35:06 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:22.016 10:35:06 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:22.016 10:35:06 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:26:22.016 10:35:06 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:22.016 10:35:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:26:22.016 10:35:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:22.016 10:35:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:22.016 10:35:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:22.016 10:35:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:22.016 10:35:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:26:22.016 10:35:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:22.016 10:35:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:22.016 10:35:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:22.016 10:35:06 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:22.016 10:35:06 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:22.016 10:35:06 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:26:22.016 10:35:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:22.016 10:35:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:22.016 10:35:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:22.016 10:35:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:22.016 10:35:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:22.016 10:35:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:22.016 10:35:06 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:22.016 10:35:06 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:22.016 10:35:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:22.016 10:35:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:22.016 10:35:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:26:22.016 10:35:06 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:28.586 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:28.586 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:26:28.586 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:28.586 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:28.586 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:28.586 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:28.586 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:28.586 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:26:28.586 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:28.586 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:26:28.586 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:26:28.586 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:26:28.586 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:26:28.586 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:26:28.586 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:26:28.586 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:28.586 10:35:12 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:28.586 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:28.586 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:28.586 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:28.586 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:28.586 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:28.586 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:28.586 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:28.586 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:28.586 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:28.586 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:28.586 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:28.586 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:28.586 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:28.587 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:28.587 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:28.587 
10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:28.587 Found net devices under 0000:86:00.0: cvl_0_0 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:28.587 Found net devices under 0000:86:00.1: cvl_0_1 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # 
NVMF_SECOND_TARGET_IP= 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:28.587 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:28.587 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:26:28.587 00:26:28.587 --- 10.0.0.2 ping statistics --- 00:26:28.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:28.587 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:28.587 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:28.587 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:26:28.587 00:26:28.587 --- 10.0.0.1 ping statistics --- 00:26:28.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:28.587 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=2490471 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 2490471 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@829 -- # '[' -z 2490471 ']' 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:28.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:28.587 [2024-07-14 10:35:12.630691] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
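(Editor's note, not part of the captured output: the nvmf_tcp_init trace just above moves one port into a network namespace, addresses both ends, opens TCP port 4420, and verifies reachability with ping. A condensed sketch using the interface names and 10.0.0.0/24 addressing taken from the trace:)

# Condensed nvmf_tcp_init sketch (interface names cvl_0_0/cvl_0_1 from the trace).
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                               # initiator side reaches the target address
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 # target namespace reaches the initiator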
00:26:28.587 [2024-07-14 10:35:12.630736] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:28.587 EAL: No free 2048 kB hugepages reported on node 1 00:26:28.587 [2024-07-14 10:35:12.699285] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:28.587 [2024-07-14 10:35:12.740722] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:28.587 [2024-07-14 10:35:12.740762] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:28.587 [2024-07-14 10:35:12.740770] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:28.587 [2024-07-14 10:35:12.740776] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:28.587 [2024-07-14 10:35:12.740781] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:28.587 [2024-07-14 10:35:12.740833] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:28.587 [2024-07-14 10:35:12.740940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:28.587 [2024-07-14 10:35:12.741051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:28.587 [2024-07-14 10:35:12.741052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@862 -- # return 0 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:28.587 Malloc0 00:26:28.587 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.588 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:26:28.588 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.588 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:28.588 Delay0 00:26:28.588 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.588 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:28.588 10:35:12 
nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.588 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:28.588 [2024-07-14 10:35:12.915797] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:28.588 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.588 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:26:28.588 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.588 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:28.588 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.588 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:28.588 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.588 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:28.588 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.588 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:28.588 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.588 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:28.588 [2024-07-14 10:35:12.940580] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:28.588 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.588 10:35:12 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:29.155 10:35:14 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:26:29.156 10:35:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:26:29.156 10:35:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:29.156 10:35:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:29.156 10:35:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:26:31.689 10:35:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:31.689 10:35:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:31.689 10:35:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:26:31.689 10:35:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:31.689 10:35:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:31.689 10:35:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:26:31.689 10:35:16 
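[annotation] Condensed, the target configuration performed by the rpc_cmd calls above amounts to the following sequence, shown here as direct scripts/rpc.py invocations (which is essentially what rpc_cmd dispatches to); all arguments are copied from the log:

  # 64 MB malloc bdev with 512-byte blocks, wrapped in a delay bdev
  # (-r/-t/-w/-n set the four latency knobs to 30, in the usual microsecond units)
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
  # TCP transport with an 8 KiB I/O unit size, one subsystem exposing Delay0 on 10.0.0.2:4420
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # initiator side, from the default namespace (hostnqn/hostid flags omitted here;
  # the full invocation appears in the log line above)
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420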
nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=2491137 00:26:31.689 10:35:16 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:26:31.689 10:35:16 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:26:31.689 [global] 00:26:31.689 thread=1 00:26:31.689 invalidate=1 00:26:31.689 rw=write 00:26:31.689 time_based=1 00:26:31.689 runtime=60 00:26:31.689 ioengine=libaio 00:26:31.689 direct=1 00:26:31.689 bs=4096 00:26:31.689 iodepth=1 00:26:31.689 norandommap=0 00:26:31.689 numjobs=1 00:26:31.689 00:26:31.689 verify_dump=1 00:26:31.689 verify_backlog=512 00:26:31.689 verify_state_save=0 00:26:31.689 do_verify=1 00:26:31.689 verify=crc32c-intel 00:26:31.689 [job0] 00:26:31.689 filename=/dev/nvme0n1 00:26:31.689 Could not set queue depth (nvme0n1) 00:26:31.689 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:31.689 fio-3.35 00:26:31.689 Starting 1 thread 00:26:34.230 10:35:19 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:26:34.230 10:35:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.230 10:35:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:34.231 true 00:26:34.231 10:35:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.231 10:35:19 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:26:34.231 10:35:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.231 10:35:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:34.231 true 00:26:34.231 10:35:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.231 10:35:19 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:26:34.231 10:35:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.231 10:35:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:34.231 true 00:26:34.231 10:35:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.231 10:35:19 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:26:34.231 10:35:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.231 10:35:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:34.231 true 00:26:34.231 10:35:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.231 10:35:19 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:26:37.584 10:35:22 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:26:37.584 10:35:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.584 10:35:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:37.584 true 00:26:37.584 10:35:22 
nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.584 10:35:22 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:26:37.584 10:35:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.584 10:35:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:37.584 true 00:26:37.584 10:35:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.584 10:35:22 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:26:37.584 10:35:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.584 10:35:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:37.584 true 00:26:37.584 10:35:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.584 10:35:22 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:26:37.584 10:35:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.584 10:35:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:37.584 true 00:26:37.584 10:35:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.584 10:35:22 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:26:37.584 10:35:22 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 2491137 00:27:33.823 00:27:33.823 job0: (groupid=0, jobs=1): err= 0: pid=2491257: Sun Jul 14 10:36:16 2024 00:27:33.823 read: IOPS=316, BW=1267KiB/s (1297kB/s)(74.2MiB/60000msec) 00:27:33.823 slat (usec): min=6, max=9984, avg= 9.07, stdev=90.97 00:27:33.823 clat (usec): min=219, max=41525k, avg=2932.81, stdev=301237.35 00:27:33.823 lat (usec): min=227, max=41525k, avg=2941.88, stdev=301237.37 00:27:33.823 clat percentiles (usec): 00:27:33.823 | 1.00th=[ 231], 5.00th=[ 239], 10.00th=[ 247], 20.00th=[ 253], 00:27:33.823 | 30.00th=[ 258], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 269], 00:27:33.823 | 70.00th=[ 273], 80.00th=[ 277], 90.00th=[ 285], 95.00th=[ 314], 00:27:33.823 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:27:33.823 | 99.99th=[42730] 00:27:33.823 write: IOPS=324, BW=1297KiB/s (1328kB/s)(76.0MiB/60000msec); 0 zone resets 00:27:33.823 slat (usec): min=10, max=585, avg=11.69, stdev= 5.42 00:27:33.823 clat (usec): min=151, max=463, avg=192.35, stdev=17.05 00:27:33.823 lat (usec): min=167, max=826, avg=204.04, stdev=18.02 00:27:33.823 clat percentiles (usec): 00:27:33.823 | 1.00th=[ 165], 5.00th=[ 174], 10.00th=[ 178], 20.00th=[ 182], 00:27:33.823 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 190], 60.00th=[ 194], 00:27:33.823 | 70.00th=[ 198], 80.00th=[ 202], 90.00th=[ 208], 95.00th=[ 217], 00:27:33.823 | 99.00th=[ 260], 99.50th=[ 293], 99.90th=[ 314], 99.95th=[ 318], 00:27:33.823 | 99.99th=[ 412] 00:27:33.823 bw ( KiB/s): min= 2464, max= 8712, per=100.00%, avg=7760.84, stdev=1613.39, samples=19 00:27:33.823 iops : min= 616, max= 2178, avg=1940.21, stdev=403.35, samples=19 00:27:33.823 lat (usec) : 250=57.30%, 500=42.10%, 750=0.01% 00:27:33.823 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, 50=0.58%, >=2000=0.01% 00:27:33.823 cpu : usr=0.55%, sys=1.01%, ctx=38464, majf=0, minf=2 
00:27:33.823 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:33.823 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.823 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.823 issued rwts: total=19005,19456,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:33.823 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:33.823 00:27:33.823 Run status group 0 (all jobs): 00:27:33.823 READ: bw=1267KiB/s (1297kB/s), 1267KiB/s-1267KiB/s (1297kB/s-1297kB/s), io=74.2MiB (77.8MB), run=60000-60000msec 00:27:33.823 WRITE: bw=1297KiB/s (1328kB/s), 1297KiB/s-1297KiB/s (1328kB/s-1328kB/s), io=76.0MiB (79.7MB), run=60000-60000msec 00:27:33.823 00:27:33.823 Disk stats (read/write): 00:27:33.823 nvme0n1: ios=19100/19046, merge=0/0, ticks=14119/3498, in_queue=17617, util=99.54% 00:27:33.823 10:36:16 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:33.823 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:33.823 10:36:16 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:27:33.823 10:36:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:27:33.823 10:36:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:33.823 10:36:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:33.823 10:36:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:33.823 10:36:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:33.823 10:36:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:27:33.823 10:36:16 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:27:33.823 10:36:16 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:27:33.823 nvmf hotplug test: fio successful as expected 00:27:33.823 10:36:16 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:33.823 10:36:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.823 10:36:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:33.823 10:36:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.823 10:36:16 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:27:33.823 10:36:16 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:27:33.823 10:36:16 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:27:33.823 10:36:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:33.823 10:36:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:27:33.823 10:36:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:33.824 10:36:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:27:33.824 10:36:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:33.824 10:36:16 nvmf_tcp.nvmf_initiator_timeout -- 
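[annotation] The timeout scenario itself was produced earlier in the run by the bdev_delay_update_latency calls logged at 10:35:19-10:35:22: while the 60-second fio job keeps I/O outstanding, every latency knob of Delay0 is raised from 30 µs to just above the 30-second initiator I/O timeout the test is named for, then restored so the job can finish and verify. As plain rpc.py calls, with the values exactly as logged (microsecond units assumed):

  # push latencies above the initiator timeout while fio is running ...
  scripts/rpc.py bdev_delay_update_latency Delay0 avg_read  31000000
  scripts/rpc.py bdev_delay_update_latency Delay0 avg_write 31000000
  scripts/rpc.py bdev_delay_update_latency Delay0 p99_read  31000000
  scripts/rpc.py bdev_delay_update_latency Delay0 p99_write 310000000
  sleep 3
  # ... then drop them back to 30 µs for the remainder of the run
  scripts/rpc.py bdev_delay_update_latency Delay0 avg_read  30
  scripts/rpc.py bdev_delay_update_latency Delay0 avg_write 30
  scripts/rpc.py bdev_delay_update_latency Delay0 p99_read  30
  scripts/rpc.py bdev_delay_update_latency Delay0 p99_write 30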
nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:33.824 rmmod nvme_tcp 00:27:33.824 rmmod nvme_fabrics 00:27:33.824 rmmod nvme_keyring 00:27:33.824 10:36:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:33.824 10:36:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:27:33.824 10:36:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:27:33.824 10:36:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 2490471 ']' 00:27:33.824 10:36:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 2490471 00:27:33.824 10:36:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@948 -- # '[' -z 2490471 ']' 00:27:33.824 10:36:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # kill -0 2490471 00:27:33.824 10:36:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # uname 00:27:33.824 10:36:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:33.824 10:36:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2490471 00:27:33.824 10:36:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:33.824 10:36:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:33.824 10:36:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2490471' 00:27:33.824 killing process with pid 2490471 00:27:33.824 10:36:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@967 -- # kill 2490471 00:27:33.824 10:36:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # wait 2490471 00:27:33.824 10:36:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:33.824 10:36:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:33.824 10:36:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:33.824 10:36:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:33.824 10:36:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:33.824 10:36:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:33.824 10:36:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:33.824 10:36:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:34.392 10:36:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:34.392 00:27:34.392 real 1m12.381s 00:27:34.392 user 4m22.265s 00:27:34.392 sys 0m6.871s 00:27:34.392 10:36:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:34.392 10:36:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:34.392 ************************************ 00:27:34.392 END TEST nvmf_initiator_timeout 00:27:34.392 ************************************ 00:27:34.392 10:36:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:34.392 10:36:19 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:27:34.392 10:36:19 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:27:34.392 10:36:19 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:27:34.392 
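[annotation] The teardown that follows the fio run condenses to roughly the steps below. The netns deletion is an assumption about what _remove_spdk_ns does here (the log only shows the wrapper name); everything else is copied from the logged commands:

  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid"                    # nvmf_tgt pid recorded at startup (2490471 in this run)
  ip netns delete cvl_0_0_ns_spdk    # assumed equivalent of _remove_spdk_ns
  ip -4 addr flush cvl_0_1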
10:36:19 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:27:34.392 10:36:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:39.668 10:36:24 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:39.668 10:36:24 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:27:39.668 10:36:24 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:39.668 10:36:24 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:39.668 10:36:24 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:39.668 10:36:24 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:39.668 10:36:24 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:39.668 10:36:24 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:27:39.668 10:36:24 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:39.668 10:36:24 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:27:39.668 10:36:24 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:27:39.668 10:36:24 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:27:39.668 10:36:24 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:27:39.668 10:36:24 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:27:39.668 10:36:24 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:27:39.668 10:36:24 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:39.668 10:36:24 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:39.668 10:36:24 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:39.668 10:36:24 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:39.668 10:36:24 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:39.668 10:36:24 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:39.668 10:36:24 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:39.668 10:36:24 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:39.668 10:36:24 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:39.668 10:36:24 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:39.668 10:36:24 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:39.668 10:36:24 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:39.668 10:36:24 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:39.668 10:36:24 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:39.668 10:36:24 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:39.668 10:36:24 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:39.668 10:36:24 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:39.668 10:36:24 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:39.668 10:36:24 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:39.668 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:39.668 10:36:24 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:39.668 10:36:24 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:39.668 10:36:24 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:39.668 10:36:24 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:39.668 10:36:24 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:39.668 10:36:24 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:27:39.668 10:36:24 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:39.668 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:39.668 10:36:24 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:39.668 10:36:24 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:39.668 10:36:24 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:39.668 10:36:24 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:39.668 10:36:24 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:39.668 10:36:24 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:39.668 10:36:24 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:39.668 10:36:24 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:39.668 10:36:24 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:39.668 10:36:24 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:39.668 10:36:24 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:39.668 10:36:24 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:39.668 10:36:24 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:39.668 10:36:24 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:39.668 10:36:24 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:39.668 10:36:24 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:39.668 Found net devices under 0000:86:00.0: cvl_0_0 00:27:39.668 10:36:24 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:39.668 10:36:24 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:39.668 10:36:24 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:39.668 10:36:24 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:39.668 10:36:24 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:39.668 10:36:24 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:39.668 10:36:24 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:39.668 10:36:24 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:39.668 10:36:24 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:39.668 Found net devices under 0000:86:00.1: cvl_0_1 00:27:39.668 10:36:24 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:39.668 10:36:24 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:39.668 10:36:24 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:39.668 10:36:24 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:27:39.668 10:36:24 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:39.668 10:36:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:39.668 10:36:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:39.668 10:36:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:39.668 ************************************ 00:27:39.668 START TEST nvmf_perf_adq 00:27:39.668 ************************************ 00:27:39.668 10:36:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:39.668 * Looking for test storage... 
00:27:39.668 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:39.668 10:36:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:39.668 10:36:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:27:39.668 10:36:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:39.668 10:36:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:39.668 10:36:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:39.668 10:36:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:39.668 10:36:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:39.668 10:36:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:39.668 10:36:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:39.668 10:36:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:39.668 10:36:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:39.668 10:36:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:39.668 10:36:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:39.668 10:36:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:39.668 10:36:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:39.668 10:36:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:39.668 10:36:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:39.668 10:36:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:39.668 10:36:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:39.668 10:36:24 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:39.668 10:36:24 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:39.668 10:36:24 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:39.668 10:36:24 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.668 10:36:24 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.668 10:36:24 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.668 10:36:24 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:27:39.669 10:36:24 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.669 10:36:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:27:39.669 10:36:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:39.669 10:36:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:39.669 10:36:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:39.669 10:36:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:39.928 10:36:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:39.928 10:36:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:39.928 10:36:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:39.928 10:36:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:39.928 10:36:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:27:39.928 10:36:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:27:39.928 10:36:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:45.204 10:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:45.204 10:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:27:45.204 10:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:45.204 10:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:45.204 10:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:45.204 10:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:45.204 10:36:30 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:27:45.204 10:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:27:45.204 10:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:45.204 10:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:27:45.204 10:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:27:45.204 10:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:27:45.204 10:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:27:45.204 10:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:27:45.204 10:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:27:45.204 10:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:45.204 10:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:45.204 10:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:45.204 10:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:45.204 10:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:45.204 10:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:45.204 10:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:45.204 10:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:45.204 10:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:45.204 10:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:45.204 10:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:45.204 10:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:45.204 10:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:45.205 10:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:45.205 10:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:45.205 10:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:45.205 10:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:45.205 10:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:45.205 10:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:45.205 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:45.205 10:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:45.205 10:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:45.205 10:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:45.205 10:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:45.205 10:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:45.205 10:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:45.205 10:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:45.205 Found 0000:86:00.1 (0x8086 - 0x159b) 
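[annotation] The discovery loop above keys off the Intel E810 device IDs (0x1592/0x159b under vendor 0x8086) cached in pci_bus_cache and resolves each port to its net device through /sys/bus/pci/devices/$pci/net/. A hypothetical stand-alone equivalent, not part of the test scripts, would be:

  # list net devices behind each 8086:159b port, as the loop above does via sysfs
  for pci in $(lspci -D -d 8086:159b | awk '{print $1}'); do
      echo "Found $pci -> $(ls /sys/bus/pci/devices/"$pci"/net/)"
  done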
00:27:45.205 10:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:45.205 10:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:45.205 10:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:45.205 10:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:45.205 10:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:45.205 10:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:45.205 10:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:45.205 10:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:45.205 10:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:45.205 10:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:45.205 10:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:45.205 10:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:45.205 10:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:45.205 10:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:45.205 10:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:45.205 10:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:45.205 Found net devices under 0000:86:00.0: cvl_0_0 00:27:45.205 10:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:45.205 10:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:45.205 10:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:45.205 10:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:45.205 10:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:45.205 10:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:45.205 10:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:45.205 10:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:45.205 10:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:45.205 Found net devices under 0000:86:00.1: cvl_0_1 00:27:45.205 10:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:45.205 10:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:45.205 10:36:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:45.205 10:36:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:27:45.205 10:36:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:27:45.205 10:36:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:27:45.205 10:36:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:27:46.583 10:36:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:27:48.486 10:36:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:27:53.825 10:36:38 
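[annotation] adq_reload_driver, whose output ends the line above, simply cycles the ice driver so the E810 ports come back with a clean channel configuration before ADQ is exercised, then waits for the interfaces to re-register:

  rmmod ice
  modprobe ice
  sleep 5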
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:27:53.825 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:53.825 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:53.825 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:53.825 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:53.825 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:53.825 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:53.825 10:36:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:53.825 10:36:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:53.825 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:53.825 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:53.825 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:27:53.825 10:36:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:53.825 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:53.825 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:27:53.825 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:53.825 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:53.825 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:53.825 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:53.825 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:53.826 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:53.826 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:53.826 Found net devices under 0000:86:00.0: cvl_0_0 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:53.826 Found net devices under 0000:86:00.1: cvl_0_1 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:53.826 10:36:38 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:53.826 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:53.826 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:27:53.826 00:27:53.826 --- 10.0.0.2 ping statistics --- 00:27:53.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:53.826 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:53.826 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:53.826 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:27:53.826 00:27:53.826 --- 10.0.0.1 ping statistics --- 00:27:53.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:53.826 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2509330 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2509330 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 2509330 ']' 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:53.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:53.826 10:36:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:53.826 [2024-07-14 10:36:38.623377] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
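[annotation] For the ADQ test the target is launched inside the namespace with --wait-for-rpc, which defers subsystem initialization so that socket options can be changed over RPC before framework_start_init (seen a few lines below) completes startup. The launch line, reformatted from the log (-i sets the instance/shm id, -e the tracepoint group mask, -m the 4-core mask):

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF -m 0xF --wait-for-rpc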
00:27:53.826 [2024-07-14 10:36:38.623423] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:53.826 EAL: No free 2048 kB hugepages reported on node 1 00:27:53.826 [2024-07-14 10:36:38.694631] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:53.826 [2024-07-14 10:36:38.736163] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:53.826 [2024-07-14 10:36:38.736202] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:53.826 [2024-07-14 10:36:38.736210] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:53.826 [2024-07-14 10:36:38.736216] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:53.827 [2024-07-14 10:36:38.736221] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:53.827 [2024-07-14 10:36:38.736286] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:53.827 [2024-07-14 10:36:38.736395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:53.827 [2024-07-14 10:36:38.736499] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:53.827 [2024-07-14 10:36:38.736501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:54.763 10:36:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:54.763 10:36:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:27:54.763 10:36:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:54.763 10:36:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:54.763 10:36:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:54.763 10:36:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:54.763 10:36:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:27:54.763 10:36:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:54.763 10:36:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:54.763 10:36:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.763 10:36:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:54.763 10:36:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.763 10:36:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:54.763 10:36:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:27:54.763 10:36:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.763 10:36:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:54.763 10:36:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.763 10:36:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:54.763 10:36:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.763 10:36:39 nvmf_tcp.nvmf_perf_adq -- 
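[annotation] adq_configure_nvmf_target, which starts above and completes over the next few log lines, boils down to the following RPC sequence (again shown as scripts/rpc.py calls; flags and values are taken from the log):

  # adjust the default (posix) socket implementation before startup finishes
  scripts/rpc.py sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix
  # complete deferred initialization, then create the TCP transport with socket priority 0
  scripts/rpc.py framework_start_init
  scripts/rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
  # one malloc namespace exported to the initiator on 10.0.0.2:4420
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420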
common/autotest_common.sh@10 -- # set +x 00:27:54.763 10:36:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.763 10:36:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:27:54.763 10:36:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.763 10:36:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:54.763 [2024-07-14 10:36:39.606445] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:54.763 10:36:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.763 10:36:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:54.763 10:36:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.763 10:36:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:54.763 Malloc1 00:27:54.763 10:36:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.763 10:36:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:54.763 10:36:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.763 10:36:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:54.763 10:36:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.763 10:36:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:54.763 10:36:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.763 10:36:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:54.763 10:36:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.763 10:36:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:54.763 10:36:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.763 10:36:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:54.763 [2024-07-14 10:36:39.654262] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:54.763 10:36:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.763 10:36:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=2509488 00:27:54.763 10:36:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:27:54.763 10:36:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:54.764 EAL: No free 2048 kB hugepages reported on node 1 00:27:57.299 10:36:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:27:57.299 10:36:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.299 10:36:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:57.300 10:36:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.300 10:36:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:27:57.300 
"tick_rate": 2300000000, 00:27:57.300 "poll_groups": [ 00:27:57.300 { 00:27:57.300 "name": "nvmf_tgt_poll_group_000", 00:27:57.300 "admin_qpairs": 1, 00:27:57.300 "io_qpairs": 1, 00:27:57.300 "current_admin_qpairs": 1, 00:27:57.300 "current_io_qpairs": 1, 00:27:57.300 "pending_bdev_io": 0, 00:27:57.300 "completed_nvme_io": 21138, 00:27:57.300 "transports": [ 00:27:57.300 { 00:27:57.300 "trtype": "TCP" 00:27:57.300 } 00:27:57.300 ] 00:27:57.300 }, 00:27:57.300 { 00:27:57.300 "name": "nvmf_tgt_poll_group_001", 00:27:57.300 "admin_qpairs": 0, 00:27:57.300 "io_qpairs": 1, 00:27:57.300 "current_admin_qpairs": 0, 00:27:57.300 "current_io_qpairs": 1, 00:27:57.300 "pending_bdev_io": 0, 00:27:57.300 "completed_nvme_io": 21279, 00:27:57.300 "transports": [ 00:27:57.300 { 00:27:57.300 "trtype": "TCP" 00:27:57.300 } 00:27:57.300 ] 00:27:57.300 }, 00:27:57.300 { 00:27:57.300 "name": "nvmf_tgt_poll_group_002", 00:27:57.300 "admin_qpairs": 0, 00:27:57.300 "io_qpairs": 1, 00:27:57.300 "current_admin_qpairs": 0, 00:27:57.300 "current_io_qpairs": 1, 00:27:57.300 "pending_bdev_io": 0, 00:27:57.300 "completed_nvme_io": 21183, 00:27:57.300 "transports": [ 00:27:57.300 { 00:27:57.300 "trtype": "TCP" 00:27:57.300 } 00:27:57.300 ] 00:27:57.300 }, 00:27:57.300 { 00:27:57.300 "name": "nvmf_tgt_poll_group_003", 00:27:57.300 "admin_qpairs": 0, 00:27:57.300 "io_qpairs": 1, 00:27:57.300 "current_admin_qpairs": 0, 00:27:57.300 "current_io_qpairs": 1, 00:27:57.300 "pending_bdev_io": 0, 00:27:57.300 "completed_nvme_io": 21185, 00:27:57.300 "transports": [ 00:27:57.300 { 00:27:57.300 "trtype": "TCP" 00:27:57.300 } 00:27:57.300 ] 00:27:57.300 } 00:27:57.300 ] 00:27:57.300 }' 00:27:57.300 10:36:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:27:57.300 10:36:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:27:57.300 10:36:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:27:57.300 10:36:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:27:57.300 10:36:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 2509488 00:28:05.416 Initializing NVMe Controllers 00:28:05.416 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:05.416 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:05.416 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:05.416 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:05.416 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:28:05.416 Initialization complete. Launching workers. 
00:28:05.416 ======================================================== 00:28:05.416 Latency(us) 00:28:05.416 Device Information : IOPS MiB/s Average min max 00:28:05.416 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10871.40 42.47 5889.11 2274.09 10211.21 00:28:05.417 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 11056.70 43.19 5788.28 2237.47 9750.71 00:28:05.417 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10989.60 42.93 5825.21 2504.09 8910.27 00:28:05.417 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10920.80 42.66 5860.46 1785.79 12887.74 00:28:05.417 ======================================================== 00:28:05.417 Total : 43838.49 171.24 5840.52 1785.79 12887.74 00:28:05.417 00:28:05.417 10:36:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:28:05.417 10:36:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:05.417 10:36:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:28:05.417 10:36:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:05.417 10:36:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:28:05.417 10:36:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:05.417 10:36:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:05.417 rmmod nvme_tcp 00:28:05.417 rmmod nvme_fabrics 00:28:05.417 rmmod nvme_keyring 00:28:05.417 10:36:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:05.417 10:36:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:28:05.417 10:36:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:28:05.417 10:36:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2509330 ']' 00:28:05.417 10:36:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 2509330 00:28:05.417 10:36:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 2509330 ']' 00:28:05.417 10:36:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 2509330 00:28:05.417 10:36:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:28:05.417 10:36:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:05.417 10:36:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2509330 00:28:05.417 10:36:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:05.417 10:36:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:05.417 10:36:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2509330' 00:28:05.417 killing process with pid 2509330 00:28:05.417 10:36:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 2509330 00:28:05.417 10:36:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 2509330 00:28:05.417 10:36:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:05.417 10:36:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:05.417 10:36:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:05.417 10:36:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:05.417 10:36:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:05.417 10:36:50 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:05.417 10:36:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:05.417 10:36:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:07.325 10:36:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:07.325 10:36:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:28:07.325 10:36:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:28:08.703 10:36:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:28:10.607 10:36:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:15.883 10:37:00 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:15.883 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:15.883 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
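[editor's note] The device-discovery loop being traced here is plain sysfs walking: each E810 PCI function is mapped to the kernel net device that sits under it. A stripped-down equivalent (PCI addresses are the ones reported for this host; the operstate check mirrors the "up == up" test in the trace) might look like:

    # map each E810 PCI function to its kernel net device via sysfs
    for pci in 0000:86:00.0 0000:86:00.1; do
        for dev in /sys/bus/pci/devices/$pci/net/*; do
            name=${dev##*/}
            state=$(cat "$dev/operstate")
            echo "Found net device under $pci: $name ($state)"
        done
    done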
00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:15.883 Found net devices under 0000:86:00.0: cvl_0_0 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:15.883 Found net devices under 0000:86:00.1: cvl_0_1 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:15.883 
10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:15.883 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:15.883 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:28:15.883 00:28:15.883 --- 10.0.0.2 ping statistics --- 00:28:15.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:15.883 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:28:15.883 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:15.883 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:15.884 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:28:15.884 00:28:15.884 --- 10.0.0.1 ping statistics --- 00:28:15.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:15.884 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:28:15.884 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:15.884 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:28:15.884 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:15.884 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:15.884 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:15.884 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:15.884 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:15.884 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:15.884 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:15.884 10:37:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:28:15.884 10:37:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:28:15.884 10:37:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:28:15.884 10:37:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:28:15.884 net.core.busy_poll = 1 00:28:15.884 10:37:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:28:15.884 net.core.busy_read = 1 00:28:15.884 10:37:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:28:15.884 10:37:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:28:15.884 10:37:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:28:15.884 10:37:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:28:15.884 10:37:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:28:15.884 10:37:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:15.884 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:15.884 10:37:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:15.884 10:37:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:15.884 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2513133 00:28:15.884 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2513133 00:28:15.884 10:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:15.884 10:37:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 2513133 ']' 00:28:15.884 10:37:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:15.884 10:37:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:15.884 10:37:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:15.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:15.884 10:37:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:15.884 10:37:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:15.884 [2024-07-14 10:37:00.856067] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:28:15.884 [2024-07-14 10:37:00.856111] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:16.142 EAL: No free 2048 kB hugepages reported on node 1 00:28:16.142 [2024-07-14 10:37:00.926670] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:16.142 [2024-07-14 10:37:00.969204] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:16.142 [2024-07-14 10:37:00.969248] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:16.142 [2024-07-14 10:37:00.969272] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:16.142 [2024-07-14 10:37:00.969279] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:16.142 [2024-07-14 10:37:00.969284] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
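[editor's note] Pulled together from the trace just above, the driver-side ADQ plumbing for this second run amounts to the following; device name, addresses and queue layout are exactly the ones this job uses, and the namespaced commands run inside cvl_0_0_ns_spdk as shown.

    # enable hardware TC offload and turn off the packet-inspect optimization on the E810 port
    ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
    ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    # let the kernel busy-poll sockets instead of waiting on interrupts
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1
    # two traffic classes: TC0 -> queues 0-1, TC1 -> queues 2-3, offloaded in channel mode
    ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 ingress
    # steer NVMe/TCP traffic (dst 10.0.0.2:4420) into TC1 entirely in hardware
    ip netns exec cvl_0_0_ns_spdk tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
    # align transmit XPS with the receive queues (SPDK helper script)
    ip netns exec cvl_0_0_ns_spdk ./scripts/perf/nvmf/set_xps_rxqs cvl_0_0

The matching target-side knobs appear in the RPC calls that follow: sock_impl_set_options --enable-placement-id 1 on the posix implementation and nvmf_create_transport -t tcp --sock-priority 1, versus placement-id 0 and sock-priority 0 in the first (non-busy-poll) run.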
00:28:16.142 [2024-07-14 10:37:00.969346] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:16.142 [2024-07-14 10:37:00.969453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:16.142 [2024-07-14 10:37:00.969556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:16.142 [2024-07-14 10:37:00.969558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:16.708 10:37:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:16.708 10:37:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:28:16.708 10:37:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:16.709 10:37:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:16.709 10:37:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:16.968 10:37:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:16.968 10:37:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:28:16.968 10:37:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:16.968 10:37:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.968 10:37:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:16.968 10:37:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:16.968 10:37:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.968 10:37:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:16.968 10:37:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:28:16.968 10:37:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.968 10:37:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:16.968 10:37:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.968 10:37:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:16.968 10:37:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.968 10:37:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:16.968 10:37:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.968 10:37:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:28:16.968 10:37:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.968 10:37:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:16.968 [2024-07-14 10:37:01.837051] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:16.968 10:37:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.968 10:37:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:16.968 10:37:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.968 10:37:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:16.968 Malloc1 00:28:16.968 10:37:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.968 10:37:01 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:16.968 10:37:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.968 10:37:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:16.968 10:37:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.968 10:37:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:16.968 10:37:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.968 10:37:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:16.968 10:37:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.968 10:37:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:16.968 10:37:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.968 10:37:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:16.968 [2024-07-14 10:37:01.888863] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:16.968 10:37:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.968 10:37:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=2513388 00:28:16.968 10:37:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:28:16.968 10:37:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:16.968 EAL: No free 2048 kB hugepages reported on node 1 00:28:19.500 10:37:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:28:19.500 10:37:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.500 10:37:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:19.500 10:37:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.500 10:37:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:28:19.500 "tick_rate": 2300000000, 00:28:19.500 "poll_groups": [ 00:28:19.500 { 00:28:19.500 "name": "nvmf_tgt_poll_group_000", 00:28:19.500 "admin_qpairs": 1, 00:28:19.500 "io_qpairs": 2, 00:28:19.500 "current_admin_qpairs": 1, 00:28:19.500 "current_io_qpairs": 2, 00:28:19.500 "pending_bdev_io": 0, 00:28:19.500 "completed_nvme_io": 29180, 00:28:19.500 "transports": [ 00:28:19.500 { 00:28:19.500 "trtype": "TCP" 00:28:19.500 } 00:28:19.500 ] 00:28:19.500 }, 00:28:19.500 { 00:28:19.500 "name": "nvmf_tgt_poll_group_001", 00:28:19.500 "admin_qpairs": 0, 00:28:19.500 "io_qpairs": 2, 00:28:19.500 "current_admin_qpairs": 0, 00:28:19.500 "current_io_qpairs": 2, 00:28:19.500 "pending_bdev_io": 0, 00:28:19.500 "completed_nvme_io": 29802, 00:28:19.500 "transports": [ 00:28:19.500 { 00:28:19.500 "trtype": "TCP" 00:28:19.500 } 00:28:19.500 ] 00:28:19.500 }, 00:28:19.500 { 00:28:19.500 "name": "nvmf_tgt_poll_group_002", 00:28:19.500 "admin_qpairs": 0, 00:28:19.500 "io_qpairs": 0, 00:28:19.500 "current_admin_qpairs": 0, 00:28:19.500 "current_io_qpairs": 0, 00:28:19.500 "pending_bdev_io": 0, 00:28:19.500 "completed_nvme_io": 0, 
00:28:19.500 "transports": [ 00:28:19.500 { 00:28:19.500 "trtype": "TCP" 00:28:19.500 } 00:28:19.500 ] 00:28:19.500 }, 00:28:19.500 { 00:28:19.500 "name": "nvmf_tgt_poll_group_003", 00:28:19.500 "admin_qpairs": 0, 00:28:19.500 "io_qpairs": 0, 00:28:19.500 "current_admin_qpairs": 0, 00:28:19.500 "current_io_qpairs": 0, 00:28:19.500 "pending_bdev_io": 0, 00:28:19.500 "completed_nvme_io": 0, 00:28:19.500 "transports": [ 00:28:19.500 { 00:28:19.500 "trtype": "TCP" 00:28:19.500 } 00:28:19.500 ] 00:28:19.500 } 00:28:19.500 ] 00:28:19.500 }' 00:28:19.500 10:37:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:28:19.500 10:37:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:28:19.500 10:37:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:28:19.500 10:37:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:28:19.500 10:37:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 2513388 00:28:27.643 Initializing NVMe Controllers 00:28:27.643 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:27.643 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:27.643 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:27.643 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:27.643 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:28:27.643 Initialization complete. Launching workers. 00:28:27.643 ======================================================== 00:28:27.643 Latency(us) 00:28:27.643 Device Information : IOPS MiB/s Average min max 00:28:27.643 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10138.29 39.60 6331.29 1498.09 52608.64 00:28:27.643 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5639.89 22.03 11380.89 1569.26 53700.39 00:28:27.643 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7986.29 31.20 8039.32 1418.77 53812.51 00:28:27.643 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7622.69 29.78 8395.72 1450.36 52672.57 00:28:27.643 ======================================================== 00:28:27.643 Total : 31387.17 122.61 8174.61 1418.77 53812.51 00:28:27.643 00:28:27.643 10:37:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:28:27.643 10:37:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:27.643 10:37:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:28:27.643 10:37:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:27.643 10:37:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:28:27.643 10:37:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:27.643 10:37:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:27.643 rmmod nvme_tcp 00:28:27.643 rmmod nvme_fabrics 00:28:27.643 rmmod nvme_keyring 00:28:27.643 10:37:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:27.643 10:37:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:28:27.643 10:37:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:28:27.643 10:37:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2513133 ']' 00:28:27.643 10:37:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 2513133 00:28:27.643 10:37:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 2513133 ']' 00:28:27.643 10:37:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 2513133 00:28:27.643 10:37:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:28:27.643 10:37:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:27.643 10:37:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2513133 00:28:27.643 10:37:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:27.643 10:37:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:27.643 10:37:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2513133' 00:28:27.643 killing process with pid 2513133 00:28:27.643 10:37:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 2513133 00:28:27.643 10:37:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 2513133 00:28:27.643 10:37:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:27.644 10:37:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:27.644 10:37:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:27.644 10:37:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:27.644 10:37:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:27.644 10:37:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:27.644 10:37:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:27.644 10:37:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:30.934 10:37:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:30.934 10:37:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:28:30.934 00:28:30.934 real 0m50.943s 00:28:30.934 user 2m49.478s 00:28:30.934 sys 0m9.561s 00:28:30.934 10:37:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:30.934 10:37:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:30.934 ************************************ 00:28:30.934 END TEST nvmf_perf_adq 00:28:30.934 ************************************ 00:28:30.934 10:37:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:30.934 10:37:15 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:30.934 10:37:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:30.934 10:37:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:30.934 10:37:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:30.934 ************************************ 00:28:30.934 START TEST nvmf_shutdown 00:28:30.934 ************************************ 00:28:30.934 10:37:15 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:30.934 * Looking for test storage... 
00:28:30.934 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:30.934 10:37:15 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:30.934 10:37:15 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:28:30.934 10:37:15 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:30.934 10:37:15 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:30.934 10:37:15 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:30.934 10:37:15 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:30.934 10:37:15 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:30.934 10:37:15 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:30.934 10:37:15 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:30.934 10:37:15 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:30.934 10:37:15 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:30.934 10:37:15 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:30.935 10:37:15 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:30.935 10:37:15 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:30.935 10:37:15 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:30.935 10:37:15 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:30.935 10:37:15 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:30.935 10:37:15 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:30.935 10:37:15 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:30.935 10:37:15 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:30.935 10:37:15 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:30.935 10:37:15 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:30.935 10:37:15 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.935 10:37:15 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.935 10:37:15 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.935 10:37:15 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:28:30.935 10:37:15 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.935 10:37:15 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:28:30.935 10:37:15 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:30.935 10:37:15 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:30.935 10:37:15 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:30.935 10:37:15 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:30.935 10:37:15 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:30.935 10:37:15 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:30.935 10:37:15 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:30.935 10:37:15 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:30.935 10:37:15 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:30.935 10:37:15 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:30.935 10:37:15 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:28:30.935 10:37:15 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:30.935 10:37:15 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:30.935 10:37:15 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:30.935 ************************************ 00:28:30.935 START TEST nvmf_shutdown_tc1 00:28:30.935 ************************************ 00:28:30.935 10:37:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:28:30.935 10:37:15 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:28:30.935 10:37:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:28:30.935 10:37:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:30.935 10:37:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:30.935 10:37:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:30.935 10:37:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:30.935 10:37:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:30.935 10:37:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:30.935 10:37:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:30.935 10:37:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:30.935 10:37:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:30.935 10:37:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:30.935 10:37:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:30.935 10:37:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:37.504 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:37.504 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:37.504 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:37.504 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:37.504 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:37.504 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:37.504 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:37.504 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:28:37.504 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:37.504 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:28:37.504 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:28:37.504 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:28:37.504 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:28:37.504 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:28:37.504 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:37.504 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:37.504 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:37.504 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:37.504 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:37.504 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:37.504 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:37.504 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:37.504 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:37.504 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:37.504 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:37.504 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:37.504 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:37.504 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:37.504 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:37.504 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:37.504 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:37.504 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:37.504 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:37.504 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:37.504 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:37.504 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:37.504 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:37.504 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:37.504 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:37.504 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:37.504 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:37.504 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:37.504 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:37.504 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:37.504 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:37.504 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:37.504 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:37.504 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:37.504 10:37:21 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:37.504 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:37.504 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:37.504 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:37.504 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:37.504 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:37.504 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:37.504 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:37.504 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:37.504 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:37.504 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:37.504 Found net devices under 0000:86:00.0: cvl_0_0 00:28:37.504 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:37.504 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:37.504 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:37.504 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:37.504 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:37.504 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:37.504 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:37.505 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:37.505 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:37.505 Found net devices under 0000:86:00.1: cvl_0_1 00:28:37.505 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:37.505 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:37.505 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:28:37.505 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:37.505 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:37.505 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:37.505 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:37.505 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:37.505 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:37.505 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:37.505 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:37.505 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:37.505 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:37.505 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:37.505 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:37.505 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:37.505 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:37.505 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:37.505 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:37.505 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:37.505 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:37.505 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:37.505 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:37.505 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:37.505 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:37.505 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:37.505 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:37.505 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:28:37.505 00:28:37.505 --- 10.0.0.2 ping statistics --- 00:28:37.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:37.505 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:28:37.505 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:37.505 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:37.505 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:28:37.505 00:28:37.505 --- 10.0.0.1 ping statistics --- 00:28:37.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:37.505 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:28:37.505 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:37.505 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:28:37.505 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:37.505 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:37.505 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:37.505 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:37.505 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:37.505 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:37.505 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:37.505 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:28:37.505 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:37.505 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:37.505 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:37.505 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=2518765 00:28:37.505 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 2518765 00:28:37.505 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:37.505 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 2518765 ']' 00:28:37.505 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:37.505 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:37.505 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:37.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:37.505 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:37.505 10:37:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:37.505 [2024-07-14 10:37:21.576102] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
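The nvmf_tcp_init steps traced above condense to the following sequence (a re-listing of commands already logged, not an extra test step; cvl_0_0/cvl_0_1 are simply the two E810 ports detected on this host and will differ elsewhere):

  # target port is isolated in its own network namespace, initiator port stays in the root namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side (NVMF_INITIATOR_IP)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side (NVMF_FIRST_TARGET_IP)
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # accept NVMe/TCP (port 4420) on the initiator port
  ping -c 1 10.0.0.2                                                  # root namespace -> namespaced target port
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # and the reverse direction

With the two ports in separate namespaces, the pings (and all NVMe/TCP traffic that follows) have to leave one physical port and arrive on the other rather than taking the kernel loopback path; nvmf_tgt is then launched inside cvl_0_0_ns_spdk (nvmf/common.sh@480 above) so that it listens on 10.0.0.2.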
00:28:37.505 [2024-07-14 10:37:21.576154] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:37.505 EAL: No free 2048 kB hugepages reported on node 1 00:28:37.505 [2024-07-14 10:37:21.649617] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:37.505 [2024-07-14 10:37:21.691653] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:37.505 [2024-07-14 10:37:21.691694] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:37.505 [2024-07-14 10:37:21.691701] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:37.505 [2024-07-14 10:37:21.691707] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:37.505 [2024-07-14 10:37:21.691712] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:37.505 [2024-07-14 10:37:21.691824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:37.505 [2024-07-14 10:37:21.691951] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:37.505 [2024-07-14 10:37:21.692057] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:37.505 [2024-07-14 10:37:21.692058] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:28:37.505 10:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:37.505 10:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:28:37.505 10:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:37.505 10:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:37.505 10:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:37.505 10:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:37.505 10:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:37.505 10:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.505 10:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:37.505 [2024-07-14 10:37:22.436278] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:37.505 10:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.505 10:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:28:37.505 10:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:28:37.505 10:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:37.505 10:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:37.505 10:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:37.505 10:37:22 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:37.505 10:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:37.505 10:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:37.505 10:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:37.505 10:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:37.505 10:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:37.505 10:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:37.505 10:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:37.505 10:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:37.505 10:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:37.505 10:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:37.505 10:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:37.505 10:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:37.505 10:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:37.505 10:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:37.505 10:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:37.763 10:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:37.763 10:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:37.763 10:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:37.763 10:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:37.763 10:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:28:37.763 10:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.763 10:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:37.763 Malloc1 00:28:37.763 [2024-07-14 10:37:22.531934] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:37.763 Malloc2 00:28:37.763 Malloc3 00:28:37.763 Malloc4 00:28:37.763 Malloc5 00:28:37.763 Malloc6 00:28:38.022 Malloc7 00:28:38.022 Malloc8 00:28:38.022 Malloc9 00:28:38.022 Malloc10 00:28:38.022 10:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.022 10:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:28:38.022 10:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:38.022 10:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:38.022 10:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=2519042 00:28:38.022 10:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 2519042 
/var/tmp/bdevperf.sock 00:28:38.022 10:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 2519042 ']' 00:28:38.022 10:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:38.022 10:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:28:38.022 10:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:38.022 10:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:38.022 10:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:38.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:38.022 10:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:28:38.022 10:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:38.022 10:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:28:38.022 10:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:38.022 10:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:38.022 10:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:38.022 { 00:28:38.022 "params": { 00:28:38.022 "name": "Nvme$subsystem", 00:28:38.022 "trtype": "$TEST_TRANSPORT", 00:28:38.022 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:38.022 "adrfam": "ipv4", 00:28:38.022 "trsvcid": "$NVMF_PORT", 00:28:38.022 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:38.022 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:38.022 "hdgst": ${hdgst:-false}, 00:28:38.022 "ddgst": ${ddgst:-false} 00:28:38.022 }, 00:28:38.022 "method": "bdev_nvme_attach_controller" 00:28:38.022 } 00:28:38.022 EOF 00:28:38.022 )") 00:28:38.022 10:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:38.022 10:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:38.022 10:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:38.022 { 00:28:38.022 "params": { 00:28:38.022 "name": "Nvme$subsystem", 00:28:38.022 "trtype": "$TEST_TRANSPORT", 00:28:38.022 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:38.022 "adrfam": "ipv4", 00:28:38.022 "trsvcid": "$NVMF_PORT", 00:28:38.022 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:38.022 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:38.022 "hdgst": ${hdgst:-false}, 00:28:38.022 "ddgst": ${ddgst:-false} 00:28:38.023 }, 00:28:38.023 "method": "bdev_nvme_attach_controller" 00:28:38.023 } 00:28:38.023 EOF 00:28:38.023 )") 00:28:38.023 10:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:38.023 10:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:38.023 10:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:38.023 { 00:28:38.023 "params": { 00:28:38.023 
"name": "Nvme$subsystem", 00:28:38.023 "trtype": "$TEST_TRANSPORT", 00:28:38.023 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:38.023 "adrfam": "ipv4", 00:28:38.023 "trsvcid": "$NVMF_PORT", 00:28:38.023 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:38.023 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:38.023 "hdgst": ${hdgst:-false}, 00:28:38.023 "ddgst": ${ddgst:-false} 00:28:38.023 }, 00:28:38.023 "method": "bdev_nvme_attach_controller" 00:28:38.023 } 00:28:38.023 EOF 00:28:38.023 )") 00:28:38.023 10:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:38.023 10:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:38.023 10:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:38.023 { 00:28:38.023 "params": { 00:28:38.023 "name": "Nvme$subsystem", 00:28:38.023 "trtype": "$TEST_TRANSPORT", 00:28:38.023 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:38.023 "adrfam": "ipv4", 00:28:38.023 "trsvcid": "$NVMF_PORT", 00:28:38.023 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:38.023 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:38.023 "hdgst": ${hdgst:-false}, 00:28:38.023 "ddgst": ${ddgst:-false} 00:28:38.023 }, 00:28:38.023 "method": "bdev_nvme_attach_controller" 00:28:38.023 } 00:28:38.023 EOF 00:28:38.023 )") 00:28:38.023 10:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:38.023 10:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:38.023 10:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:38.023 { 00:28:38.023 "params": { 00:28:38.023 "name": "Nvme$subsystem", 00:28:38.023 "trtype": "$TEST_TRANSPORT", 00:28:38.023 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:38.023 "adrfam": "ipv4", 00:28:38.023 "trsvcid": "$NVMF_PORT", 00:28:38.023 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:38.023 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:38.023 "hdgst": ${hdgst:-false}, 00:28:38.023 "ddgst": ${ddgst:-false} 00:28:38.023 }, 00:28:38.023 "method": "bdev_nvme_attach_controller" 00:28:38.023 } 00:28:38.023 EOF 00:28:38.023 )") 00:28:38.023 10:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:38.023 10:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:38.023 10:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:38.023 { 00:28:38.023 "params": { 00:28:38.023 "name": "Nvme$subsystem", 00:28:38.023 "trtype": "$TEST_TRANSPORT", 00:28:38.023 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:38.023 "adrfam": "ipv4", 00:28:38.023 "trsvcid": "$NVMF_PORT", 00:28:38.023 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:38.023 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:38.023 "hdgst": ${hdgst:-false}, 00:28:38.023 "ddgst": ${ddgst:-false} 00:28:38.023 }, 00:28:38.023 "method": "bdev_nvme_attach_controller" 00:28:38.023 } 00:28:38.023 EOF 00:28:38.023 )") 00:28:38.023 10:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:38.023 [2024-07-14 10:37:23.000055] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:28:38.023 [2024-07-14 10:37:23.000104] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:28:38.023 10:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:38.023 10:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:38.023 { 00:28:38.023 "params": { 00:28:38.023 "name": "Nvme$subsystem", 00:28:38.023 "trtype": "$TEST_TRANSPORT", 00:28:38.023 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:38.023 "adrfam": "ipv4", 00:28:38.023 "trsvcid": "$NVMF_PORT", 00:28:38.023 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:38.023 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:38.023 "hdgst": ${hdgst:-false}, 00:28:38.023 "ddgst": ${ddgst:-false} 00:28:38.023 }, 00:28:38.023 "method": "bdev_nvme_attach_controller" 00:28:38.023 } 00:28:38.023 EOF 00:28:38.023 )") 00:28:38.023 10:37:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:38.282 10:37:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:38.282 10:37:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:38.282 { 00:28:38.282 "params": { 00:28:38.282 "name": "Nvme$subsystem", 00:28:38.282 "trtype": "$TEST_TRANSPORT", 00:28:38.282 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:38.282 "adrfam": "ipv4", 00:28:38.282 "trsvcid": "$NVMF_PORT", 00:28:38.282 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:38.282 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:38.282 "hdgst": ${hdgst:-false}, 00:28:38.282 "ddgst": ${ddgst:-false} 00:28:38.282 }, 00:28:38.282 "method": "bdev_nvme_attach_controller" 00:28:38.282 } 00:28:38.282 EOF 00:28:38.282 )") 00:28:38.282 10:37:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:38.282 10:37:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:38.282 10:37:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:38.282 { 00:28:38.282 "params": { 00:28:38.282 "name": "Nvme$subsystem", 00:28:38.282 "trtype": "$TEST_TRANSPORT", 00:28:38.282 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:38.282 "adrfam": "ipv4", 00:28:38.282 "trsvcid": "$NVMF_PORT", 00:28:38.282 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:38.282 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:38.282 "hdgst": ${hdgst:-false}, 00:28:38.282 "ddgst": ${ddgst:-false} 00:28:38.282 }, 00:28:38.282 "method": "bdev_nvme_attach_controller" 00:28:38.282 } 00:28:38.282 EOF 00:28:38.282 )") 00:28:38.282 10:37:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:38.282 10:37:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:38.282 10:37:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:38.282 { 00:28:38.282 "params": { 00:28:38.282 "name": "Nvme$subsystem", 00:28:38.282 "trtype": "$TEST_TRANSPORT", 00:28:38.282 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:38.282 "adrfam": "ipv4", 00:28:38.282 "trsvcid": "$NVMF_PORT", 00:28:38.282 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:38.282 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:38.282 "hdgst": ${hdgst:-false}, 
00:28:38.282 "ddgst": ${ddgst:-false} 00:28:38.282 }, 00:28:38.282 "method": "bdev_nvme_attach_controller" 00:28:38.282 } 00:28:38.282 EOF 00:28:38.282 )") 00:28:38.282 10:37:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:38.282 EAL: No free 2048 kB hugepages reported on node 1 00:28:38.282 10:37:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:28:38.282 10:37:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:28:38.282 10:37:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:38.282 "params": { 00:28:38.282 "name": "Nvme1", 00:28:38.282 "trtype": "tcp", 00:28:38.282 "traddr": "10.0.0.2", 00:28:38.282 "adrfam": "ipv4", 00:28:38.282 "trsvcid": "4420", 00:28:38.282 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:38.283 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:38.283 "hdgst": false, 00:28:38.283 "ddgst": false 00:28:38.283 }, 00:28:38.283 "method": "bdev_nvme_attach_controller" 00:28:38.283 },{ 00:28:38.283 "params": { 00:28:38.283 "name": "Nvme2", 00:28:38.283 "trtype": "tcp", 00:28:38.283 "traddr": "10.0.0.2", 00:28:38.283 "adrfam": "ipv4", 00:28:38.283 "trsvcid": "4420", 00:28:38.283 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:38.283 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:38.283 "hdgst": false, 00:28:38.283 "ddgst": false 00:28:38.283 }, 00:28:38.283 "method": "bdev_nvme_attach_controller" 00:28:38.283 },{ 00:28:38.283 "params": { 00:28:38.283 "name": "Nvme3", 00:28:38.283 "trtype": "tcp", 00:28:38.283 "traddr": "10.0.0.2", 00:28:38.283 "adrfam": "ipv4", 00:28:38.283 "trsvcid": "4420", 00:28:38.283 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:38.283 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:38.283 "hdgst": false, 00:28:38.283 "ddgst": false 00:28:38.283 }, 00:28:38.283 "method": "bdev_nvme_attach_controller" 00:28:38.283 },{ 00:28:38.283 "params": { 00:28:38.283 "name": "Nvme4", 00:28:38.283 "trtype": "tcp", 00:28:38.283 "traddr": "10.0.0.2", 00:28:38.283 "adrfam": "ipv4", 00:28:38.283 "trsvcid": "4420", 00:28:38.283 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:38.283 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:38.283 "hdgst": false, 00:28:38.283 "ddgst": false 00:28:38.283 }, 00:28:38.283 "method": "bdev_nvme_attach_controller" 00:28:38.283 },{ 00:28:38.283 "params": { 00:28:38.283 "name": "Nvme5", 00:28:38.283 "trtype": "tcp", 00:28:38.283 "traddr": "10.0.0.2", 00:28:38.283 "adrfam": "ipv4", 00:28:38.283 "trsvcid": "4420", 00:28:38.283 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:38.283 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:38.283 "hdgst": false, 00:28:38.283 "ddgst": false 00:28:38.283 }, 00:28:38.283 "method": "bdev_nvme_attach_controller" 00:28:38.283 },{ 00:28:38.283 "params": { 00:28:38.283 "name": "Nvme6", 00:28:38.283 "trtype": "tcp", 00:28:38.283 "traddr": "10.0.0.2", 00:28:38.283 "adrfam": "ipv4", 00:28:38.283 "trsvcid": "4420", 00:28:38.283 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:38.283 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:38.283 "hdgst": false, 00:28:38.283 "ddgst": false 00:28:38.283 }, 00:28:38.283 "method": "bdev_nvme_attach_controller" 00:28:38.283 },{ 00:28:38.283 "params": { 00:28:38.283 "name": "Nvme7", 00:28:38.283 "trtype": "tcp", 00:28:38.283 "traddr": "10.0.0.2", 00:28:38.283 "adrfam": "ipv4", 00:28:38.283 "trsvcid": "4420", 00:28:38.283 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:38.283 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:38.283 "hdgst": false, 00:28:38.283 "ddgst": false 
00:28:38.283 }, 00:28:38.283 "method": "bdev_nvme_attach_controller" 00:28:38.283 },{ 00:28:38.283 "params": { 00:28:38.283 "name": "Nvme8", 00:28:38.283 "trtype": "tcp", 00:28:38.283 "traddr": "10.0.0.2", 00:28:38.283 "adrfam": "ipv4", 00:28:38.283 "trsvcid": "4420", 00:28:38.283 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:38.283 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:38.283 "hdgst": false, 00:28:38.283 "ddgst": false 00:28:38.283 }, 00:28:38.283 "method": "bdev_nvme_attach_controller" 00:28:38.283 },{ 00:28:38.283 "params": { 00:28:38.283 "name": "Nvme9", 00:28:38.283 "trtype": "tcp", 00:28:38.283 "traddr": "10.0.0.2", 00:28:38.283 "adrfam": "ipv4", 00:28:38.283 "trsvcid": "4420", 00:28:38.283 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:38.283 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:38.283 "hdgst": false, 00:28:38.283 "ddgst": false 00:28:38.283 }, 00:28:38.283 "method": "bdev_nvme_attach_controller" 00:28:38.283 },{ 00:28:38.283 "params": { 00:28:38.283 "name": "Nvme10", 00:28:38.283 "trtype": "tcp", 00:28:38.283 "traddr": "10.0.0.2", 00:28:38.283 "adrfam": "ipv4", 00:28:38.283 "trsvcid": "4420", 00:28:38.283 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:38.283 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:38.283 "hdgst": false, 00:28:38.283 "ddgst": false 00:28:38.283 }, 00:28:38.283 "method": "bdev_nvme_attach_controller" 00:28:38.283 }' 00:28:38.283 [2024-07-14 10:37:23.068531] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:38.283 [2024-07-14 10:37:23.108306] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:40.189 10:37:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:40.189 10:37:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:28:40.189 10:37:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:40.189 10:37:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.189 10:37:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:40.189 10:37:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.189 10:37:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 2519042 00:28:40.189 10:37:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:28:40.189 10:37:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:28:41.127 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 2519042 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:28:41.127 10:37:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 2518765 00:28:41.127 10:37:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:28:41.127 10:37:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:41.127 10:37:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:28:41.127 10:37:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 
00:28:41.127 10:37:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:41.127 10:37:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:41.127 { 00:28:41.127 "params": { 00:28:41.127 "name": "Nvme$subsystem", 00:28:41.127 "trtype": "$TEST_TRANSPORT", 00:28:41.127 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:41.127 "adrfam": "ipv4", 00:28:41.127 "trsvcid": "$NVMF_PORT", 00:28:41.127 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:41.127 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:41.127 "hdgst": ${hdgst:-false}, 00:28:41.127 "ddgst": ${ddgst:-false} 00:28:41.127 }, 00:28:41.127 "method": "bdev_nvme_attach_controller" 00:28:41.127 } 00:28:41.127 EOF 00:28:41.127 )") 00:28:41.127 10:37:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:41.127 10:37:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:41.127 10:37:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:41.127 { 00:28:41.127 "params": { 00:28:41.127 "name": "Nvme$subsystem", 00:28:41.127 "trtype": "$TEST_TRANSPORT", 00:28:41.127 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:41.127 "adrfam": "ipv4", 00:28:41.127 "trsvcid": "$NVMF_PORT", 00:28:41.127 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:41.127 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:41.127 "hdgst": ${hdgst:-false}, 00:28:41.127 "ddgst": ${ddgst:-false} 00:28:41.127 }, 00:28:41.127 "method": "bdev_nvme_attach_controller" 00:28:41.127 } 00:28:41.127 EOF 00:28:41.127 )") 00:28:41.127 10:37:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:41.127 10:37:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:41.127 10:37:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:41.127 { 00:28:41.127 "params": { 00:28:41.127 "name": "Nvme$subsystem", 00:28:41.128 "trtype": "$TEST_TRANSPORT", 00:28:41.128 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:41.128 "adrfam": "ipv4", 00:28:41.128 "trsvcid": "$NVMF_PORT", 00:28:41.128 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:41.128 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:41.128 "hdgst": ${hdgst:-false}, 00:28:41.128 "ddgst": ${ddgst:-false} 00:28:41.128 }, 00:28:41.128 "method": "bdev_nvme_attach_controller" 00:28:41.128 } 00:28:41.128 EOF 00:28:41.128 )") 00:28:41.128 10:37:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:41.128 10:37:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:41.128 10:37:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:41.128 { 00:28:41.128 "params": { 00:28:41.128 "name": "Nvme$subsystem", 00:28:41.128 "trtype": "$TEST_TRANSPORT", 00:28:41.128 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:41.128 "adrfam": "ipv4", 00:28:41.128 "trsvcid": "$NVMF_PORT", 00:28:41.128 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:41.128 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:41.128 "hdgst": ${hdgst:-false}, 00:28:41.128 "ddgst": ${ddgst:-false} 00:28:41.128 }, 00:28:41.128 "method": "bdev_nvme_attach_controller" 00:28:41.128 } 00:28:41.128 EOF 00:28:41.128 )") 00:28:41.128 10:37:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:41.128 10:37:25 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:41.128 10:37:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:41.128 { 00:28:41.128 "params": { 00:28:41.128 "name": "Nvme$subsystem", 00:28:41.128 "trtype": "$TEST_TRANSPORT", 00:28:41.128 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:41.128 "adrfam": "ipv4", 00:28:41.128 "trsvcid": "$NVMF_PORT", 00:28:41.128 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:41.128 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:41.128 "hdgst": ${hdgst:-false}, 00:28:41.128 "ddgst": ${ddgst:-false} 00:28:41.128 }, 00:28:41.128 "method": "bdev_nvme_attach_controller" 00:28:41.128 } 00:28:41.128 EOF 00:28:41.128 )") 00:28:41.128 10:37:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:41.128 10:37:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:41.128 10:37:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:41.128 { 00:28:41.128 "params": { 00:28:41.128 "name": "Nvme$subsystem", 00:28:41.128 "trtype": "$TEST_TRANSPORT", 00:28:41.128 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:41.128 "adrfam": "ipv4", 00:28:41.128 "trsvcid": "$NVMF_PORT", 00:28:41.128 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:41.128 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:41.128 "hdgst": ${hdgst:-false}, 00:28:41.128 "ddgst": ${ddgst:-false} 00:28:41.128 }, 00:28:41.128 "method": "bdev_nvme_attach_controller" 00:28:41.128 } 00:28:41.128 EOF 00:28:41.128 )") 00:28:41.128 10:37:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:41.128 10:37:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:41.128 10:37:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:41.128 { 00:28:41.128 "params": { 00:28:41.128 "name": "Nvme$subsystem", 00:28:41.128 "trtype": "$TEST_TRANSPORT", 00:28:41.128 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:41.128 "adrfam": "ipv4", 00:28:41.128 "trsvcid": "$NVMF_PORT", 00:28:41.128 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:41.128 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:41.128 "hdgst": ${hdgst:-false}, 00:28:41.128 "ddgst": ${ddgst:-false} 00:28:41.128 }, 00:28:41.128 "method": "bdev_nvme_attach_controller" 00:28:41.128 } 00:28:41.128 EOF 00:28:41.128 )") 00:28:41.128 [2024-07-14 10:37:25.894032] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
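In shorthand, the tc1 shutdown check traced above (shutdown.sh@83-@88, with this run's PIDs) is: kill the initiator-side bdev_svc that held the ten NVMe/TCP connections, confirm the target survived, then re-attach everything through bdevperf:

  kill -9 2519042        # bdev_svc (the process reported as "Killed" above) goes away abruptly
  rm -f /var/run/spdk_bdev1
  sleep 1
  kill -0 2518765        # the nvmf_tgt started earlier must still be running for the test to continue
  # shutdown.sh@91 then runs: bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1

The bdevperf flags read straight off that command line: queue depth 64, 64 KiB I/Os, a verify (read-back) workload, for 1 second. At a fixed 64 KiB I/O size, the MiB/s column of the report further down is just IOPS scaled by the I/O size, e.g. for the Nvme1n1 row:

  awk 'BEGIN { printf "%.2f MiB/s\n", 243.62 * 65536 / 1048576 }'   # 243.62 IOPS at 64 KiB ≈ 15.23 MiB/s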
00:28:41.128 [2024-07-14 10:37:25.894081] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2519551 ] 00:28:41.128 10:37:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:41.128 10:37:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:41.128 10:37:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:41.128 { 00:28:41.128 "params": { 00:28:41.128 "name": "Nvme$subsystem", 00:28:41.128 "trtype": "$TEST_TRANSPORT", 00:28:41.128 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:41.128 "adrfam": "ipv4", 00:28:41.128 "trsvcid": "$NVMF_PORT", 00:28:41.128 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:41.128 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:41.128 "hdgst": ${hdgst:-false}, 00:28:41.128 "ddgst": ${ddgst:-false} 00:28:41.128 }, 00:28:41.128 "method": "bdev_nvme_attach_controller" 00:28:41.128 } 00:28:41.128 EOF 00:28:41.128 )") 00:28:41.128 10:37:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:41.128 10:37:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:41.128 10:37:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:41.128 { 00:28:41.128 "params": { 00:28:41.128 "name": "Nvme$subsystem", 00:28:41.128 "trtype": "$TEST_TRANSPORT", 00:28:41.128 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:41.128 "adrfam": "ipv4", 00:28:41.128 "trsvcid": "$NVMF_PORT", 00:28:41.128 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:41.128 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:41.128 "hdgst": ${hdgst:-false}, 00:28:41.128 "ddgst": ${ddgst:-false} 00:28:41.128 }, 00:28:41.128 "method": "bdev_nvme_attach_controller" 00:28:41.128 } 00:28:41.128 EOF 00:28:41.128 )") 00:28:41.128 10:37:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:41.128 10:37:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:41.128 10:37:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:41.128 { 00:28:41.128 "params": { 00:28:41.128 "name": "Nvme$subsystem", 00:28:41.128 "trtype": "$TEST_TRANSPORT", 00:28:41.128 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:41.128 "adrfam": "ipv4", 00:28:41.128 "trsvcid": "$NVMF_PORT", 00:28:41.128 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:41.128 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:41.128 "hdgst": ${hdgst:-false}, 00:28:41.128 "ddgst": ${ddgst:-false} 00:28:41.128 }, 00:28:41.128 "method": "bdev_nvme_attach_controller" 00:28:41.128 } 00:28:41.128 EOF 00:28:41.129 )") 00:28:41.129 10:37:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:41.129 10:37:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:28:41.129 EAL: No free 2048 kB hugepages reported on node 1 00:28:41.129 10:37:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:28:41.129 10:37:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:41.129 "params": { 00:28:41.129 "name": "Nvme1", 00:28:41.129 "trtype": "tcp", 00:28:41.129 "traddr": "10.0.0.2", 00:28:41.129 "adrfam": "ipv4", 00:28:41.129 "trsvcid": "4420", 00:28:41.129 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:41.129 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:41.129 "hdgst": false, 00:28:41.129 "ddgst": false 00:28:41.129 }, 00:28:41.129 "method": "bdev_nvme_attach_controller" 00:28:41.129 },{ 00:28:41.129 "params": { 00:28:41.129 "name": "Nvme2", 00:28:41.129 "trtype": "tcp", 00:28:41.129 "traddr": "10.0.0.2", 00:28:41.129 "adrfam": "ipv4", 00:28:41.129 "trsvcid": "4420", 00:28:41.129 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:41.129 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:41.129 "hdgst": false, 00:28:41.129 "ddgst": false 00:28:41.129 }, 00:28:41.129 "method": "bdev_nvme_attach_controller" 00:28:41.129 },{ 00:28:41.129 "params": { 00:28:41.129 "name": "Nvme3", 00:28:41.129 "trtype": "tcp", 00:28:41.129 "traddr": "10.0.0.2", 00:28:41.129 "adrfam": "ipv4", 00:28:41.129 "trsvcid": "4420", 00:28:41.129 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:41.129 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:41.129 "hdgst": false, 00:28:41.129 "ddgst": false 00:28:41.129 }, 00:28:41.129 "method": "bdev_nvme_attach_controller" 00:28:41.129 },{ 00:28:41.129 "params": { 00:28:41.129 "name": "Nvme4", 00:28:41.129 "trtype": "tcp", 00:28:41.129 "traddr": "10.0.0.2", 00:28:41.129 "adrfam": "ipv4", 00:28:41.129 "trsvcid": "4420", 00:28:41.129 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:41.129 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:41.129 "hdgst": false, 00:28:41.129 "ddgst": false 00:28:41.129 }, 00:28:41.129 "method": "bdev_nvme_attach_controller" 00:28:41.129 },{ 00:28:41.129 "params": { 00:28:41.129 "name": "Nvme5", 00:28:41.129 "trtype": "tcp", 00:28:41.129 "traddr": "10.0.0.2", 00:28:41.129 "adrfam": "ipv4", 00:28:41.129 "trsvcid": "4420", 00:28:41.129 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:41.129 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:41.129 "hdgst": false, 00:28:41.129 "ddgst": false 00:28:41.129 }, 00:28:41.129 "method": "bdev_nvme_attach_controller" 00:28:41.129 },{ 00:28:41.129 "params": { 00:28:41.129 "name": "Nvme6", 00:28:41.129 "trtype": "tcp", 00:28:41.129 "traddr": "10.0.0.2", 00:28:41.129 "adrfam": "ipv4", 00:28:41.129 "trsvcid": "4420", 00:28:41.129 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:41.129 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:41.129 "hdgst": false, 00:28:41.129 "ddgst": false 00:28:41.129 }, 00:28:41.129 "method": "bdev_nvme_attach_controller" 00:28:41.129 },{ 00:28:41.129 "params": { 00:28:41.129 "name": "Nvme7", 00:28:41.129 "trtype": "tcp", 00:28:41.129 "traddr": "10.0.0.2", 00:28:41.129 "adrfam": "ipv4", 00:28:41.129 "trsvcid": "4420", 00:28:41.129 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:41.129 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:41.129 "hdgst": false, 00:28:41.129 "ddgst": false 00:28:41.129 }, 00:28:41.129 "method": "bdev_nvme_attach_controller" 00:28:41.129 },{ 00:28:41.129 "params": { 00:28:41.129 "name": "Nvme8", 00:28:41.129 "trtype": "tcp", 00:28:41.129 "traddr": "10.0.0.2", 00:28:41.129 "adrfam": "ipv4", 00:28:41.129 "trsvcid": "4420", 00:28:41.129 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:41.129 "hostnqn": 
"nqn.2016-06.io.spdk:host8", 00:28:41.129 "hdgst": false, 00:28:41.129 "ddgst": false 00:28:41.129 }, 00:28:41.129 "method": "bdev_nvme_attach_controller" 00:28:41.129 },{ 00:28:41.129 "params": { 00:28:41.129 "name": "Nvme9", 00:28:41.129 "trtype": "tcp", 00:28:41.129 "traddr": "10.0.0.2", 00:28:41.129 "adrfam": "ipv4", 00:28:41.129 "trsvcid": "4420", 00:28:41.129 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:41.129 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:41.129 "hdgst": false, 00:28:41.129 "ddgst": false 00:28:41.129 }, 00:28:41.129 "method": "bdev_nvme_attach_controller" 00:28:41.129 },{ 00:28:41.129 "params": { 00:28:41.129 "name": "Nvme10", 00:28:41.129 "trtype": "tcp", 00:28:41.129 "traddr": "10.0.0.2", 00:28:41.129 "adrfam": "ipv4", 00:28:41.129 "trsvcid": "4420", 00:28:41.129 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:41.129 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:41.129 "hdgst": false, 00:28:41.129 "ddgst": false 00:28:41.129 }, 00:28:41.129 "method": "bdev_nvme_attach_controller" 00:28:41.129 }' 00:28:41.129 [2024-07-14 10:37:25.964085] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:41.129 [2024-07-14 10:37:26.005092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:42.506 Running I/O for 1 seconds... 00:28:43.886 00:28:43.886 Latency(us) 00:28:43.886 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:43.886 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:43.886 Verification LBA range: start 0x0 length 0x400 00:28:43.886 Nvme1n1 : 1.05 243.62 15.23 0.00 0.00 260359.57 16412.49 218833.25 00:28:43.886 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:43.886 Verification LBA range: start 0x0 length 0x400 00:28:43.886 Nvme2n1 : 1.05 243.19 15.20 0.00 0.00 256898.23 27240.18 203332.56 00:28:43.886 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:43.886 Verification LBA range: start 0x0 length 0x400 00:28:43.886 Nvme3n1 : 1.10 293.27 18.33 0.00 0.00 209481.09 3447.76 215186.03 00:28:43.886 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:43.886 Verification LBA range: start 0x0 length 0x400 00:28:43.886 Nvme4n1 : 1.08 296.90 18.56 0.00 0.00 204123.45 15158.76 209715.20 00:28:43.886 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:43.886 Verification LBA range: start 0x0 length 0x400 00:28:43.886 Nvme5n1 : 1.11 287.80 17.99 0.00 0.00 207812.88 16412.49 211538.81 00:28:43.886 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:43.886 Verification LBA range: start 0x0 length 0x400 00:28:43.886 Nvme6n1 : 1.12 284.67 17.79 0.00 0.00 207060.64 16184.54 220656.86 00:28:43.886 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:43.886 Verification LBA range: start 0x0 length 0x400 00:28:43.886 Nvme7n1 : 1.12 292.59 18.29 0.00 0.00 198018.41 1638.40 213362.42 00:28:43.886 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:43.886 Verification LBA range: start 0x0 length 0x400 00:28:43.886 Nvme8n1 : 1.12 290.86 18.18 0.00 0.00 195974.28 1210.99 217009.64 00:28:43.886 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:43.886 Verification LBA range: start 0x0 length 0x400 00:28:43.886 Nvme9n1 : 1.15 278.78 17.42 0.00 0.00 202346.05 13563.10 223392.28 00:28:43.886 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:43.886 
Verification LBA range: start 0x0 length 0x400 00:28:43.886 Nvme10n1 : 1.16 331.28 20.70 0.00 0.00 167973.21 5271.37 242540.19 00:28:43.886 =================================================================================================================== 00:28:43.886 Total : 2842.96 177.69 0.00 0.00 208139.24 1210.99 242540.19 00:28:43.886 10:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:28:43.886 10:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:28:43.886 10:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:43.886 10:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:43.886 10:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:28:43.886 10:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:43.886 10:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:28:43.886 10:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:43.886 10:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:28:43.886 10:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:43.886 10:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:43.886 rmmod nvme_tcp 00:28:43.886 rmmod nvme_fabrics 00:28:43.886 rmmod nvme_keyring 00:28:43.886 10:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:43.886 10:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:28:43.886 10:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:28:43.886 10:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 2518765 ']' 00:28:43.886 10:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 2518765 00:28:43.886 10:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 2518765 ']' 00:28:43.886 10:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 2518765 00:28:43.886 10:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:28:43.886 10:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:43.886 10:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2518765 00:28:44.145 10:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:44.145 10:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:44.145 10:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2518765' 00:28:44.145 killing process with pid 2518765 00:28:44.145 10:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 2518765 00:28:44.145 10:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 2518765 00:28:44.405 
10:37:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:44.405 10:37:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:44.405 10:37:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:44.405 10:37:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:44.405 10:37:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:44.405 10:37:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:44.405 10:37:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:44.405 10:37:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:46.943 00:28:46.943 real 0m15.599s 00:28:46.943 user 0m36.014s 00:28:46.943 sys 0m5.652s 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:46.943 ************************************ 00:28:46.943 END TEST nvmf_shutdown_tc1 00:28:46.943 ************************************ 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:46.943 ************************************ 00:28:46.943 START TEST nvmf_shutdown_tc2 00:28:46.943 ************************************ 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:46.943 10:37:31 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:46.943 10:37:31 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:46.943 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:46.943 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 
00:28:46.943 Found net devices under 0000:86:00.0: cvl_0_0 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:46.943 Found net devices under 0000:86:00.1: cvl_0_1 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:46.943 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:46.944 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:46.944 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:46.944 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:46.944 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:46.944 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:46.944 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:46.944 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:46.944 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:46.944 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:46.944 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:46.944 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:46.944 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:46.944 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 
dev cvl_0_1 00:28:46.944 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:46.944 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:46.944 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:46.944 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:46.944 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:46.944 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:46.944 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:46.944 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:28:46.944 00:28:46.944 --- 10.0.0.2 ping statistics --- 00:28:46.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:46.944 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:28:46.944 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:46.944 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:46.944 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:28:46.944 00:28:46.944 --- 10.0.0.1 ping statistics --- 00:28:46.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:46.944 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:28:46.944 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:46.944 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:28:46.944 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:46.944 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:46.944 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:46.944 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:46.944 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:46.944 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:46.944 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:46.944 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:28:46.944 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:46.944 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:46.944 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:46.944 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2520623 00:28:46.944 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2520623 00:28:46.944 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x1E 00:28:46.944 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2520623 ']' 00:28:46.944 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:46.944 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:46.944 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:46.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:46.944 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:46.944 10:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:46.944 [2024-07-14 10:37:31.730144] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:28:46.944 [2024-07-14 10:37:31.730186] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:46.944 EAL: No free 2048 kB hugepages reported on node 1 00:28:46.944 [2024-07-14 10:37:31.802127] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:46.944 [2024-07-14 10:37:31.843391] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:46.944 [2024-07-14 10:37:31.843430] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:46.944 [2024-07-14 10:37:31.843437] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:46.944 [2024-07-14 10:37:31.843443] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:46.944 [2024-07-14 10:37:31.843448] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
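The trace above (nvmf/common.sh@229-268 and shutdown.sh@18) is nvmftestinit wiring the two E810 ports into a point-to-point rig: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2/24), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1/24), an iptables rule admits TCP port 4420, both directions are ping-checked, and nvmf_tgt is then started inside the namespace on core mask 0x1E. A minimal standalone sketch of the same wiring, using the interface names and binary path taken from this log:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # start the target on cores 1-4 (mask 0x1E) inside the namespace, as nvmfappstart does here
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &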
00:28:46.944 [2024-07-14 10:37:31.843589] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:46.944 [2024-07-14 10:37:31.843697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:46.944 [2024-07-14 10:37:31.843802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:46.944 [2024-07-14 10:37:31.843803] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:28:47.882 10:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:47.882 10:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:28:47.882 10:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:47.882 10:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:47.882 10:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:47.882 10:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:47.882 10:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:47.882 10:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.882 10:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:47.882 [2024-07-14 10:37:32.579267] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:47.882 10:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.882 10:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:28:47.882 10:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:28:47.882 10:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:47.882 10:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:47.882 10:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:47.882 10:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:47.882 10:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:47.882 10:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:47.882 10:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:47.882 10:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:47.882 10:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:47.882 10:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:47.882 10:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:47.882 10:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:47.882 10:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:47.882 10:37:32 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:47.882 10:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:47.882 10:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:47.882 10:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:47.882 10:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:47.882 10:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:47.882 10:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:47.882 10:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:47.882 10:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:47.882 10:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:47.882 10:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:28:47.882 10:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.882 10:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:47.882 Malloc1 00:28:47.882 [2024-07-14 10:37:32.675009] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:47.882 Malloc2 00:28:47.882 Malloc3 00:28:47.882 Malloc4 00:28:47.882 Malloc5 00:28:47.882 Malloc6 00:28:48.141 Malloc7 00:28:48.141 Malloc8 00:28:48.141 Malloc9 00:28:48.141 Malloc10 00:28:48.141 10:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.141 10:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:28:48.141 10:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:48.141 10:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:48.141 10:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=2520895 00:28:48.141 10:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 2520895 /var/tmp/bdevperf.sock 00:28:48.141 10:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2520895 ']' 00:28:48.141 10:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:48.141 10:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:48.141 10:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:48.141 10:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:48.141 10:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:48.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
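shutdown.sh@20 creates the TCP transport (`nvmf_create_transport -t tcp -o -u 8192`), and the ten `cat` calls at shutdown.sh@28 append one subsystem definition per loop iteration to rpcs.txt before shutdown.sh@35 replays the file over RPC; the Malloc1..Malloc10 bdevs and the "Listening on 10.0.0.2 port 4420" notice above are the result. One iteration of that batch looks roughly like the sketch below: the method names are the standard rpc.py ones, while the malloc size/block arguments and the serial number are placeholders rather than values copied from shutdown.sh.

  # appended to rpcs.txt for i=1..10, then replayed with: rpc.py < rpcs.txt
  bdev_malloc_create -b Malloc1 64 512
  nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
  nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420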
00:28:48.141 10:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:28:48.141 10:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:48.141 10:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:28:48.141 10:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:48.141 10:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:48.141 10:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:48.141 { 00:28:48.141 "params": { 00:28:48.141 "name": "Nvme$subsystem", 00:28:48.141 "trtype": "$TEST_TRANSPORT", 00:28:48.141 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:48.141 "adrfam": "ipv4", 00:28:48.141 "trsvcid": "$NVMF_PORT", 00:28:48.141 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:48.141 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:48.141 "hdgst": ${hdgst:-false}, 00:28:48.141 "ddgst": ${ddgst:-false} 00:28:48.141 }, 00:28:48.141 "method": "bdev_nvme_attach_controller" 00:28:48.141 } 00:28:48.141 EOF 00:28:48.141 )") 00:28:48.141 10:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:48.141 10:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:48.141 10:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:48.141 { 00:28:48.141 "params": { 00:28:48.141 "name": "Nvme$subsystem", 00:28:48.141 "trtype": "$TEST_TRANSPORT", 00:28:48.141 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:48.141 "adrfam": "ipv4", 00:28:48.141 "trsvcid": "$NVMF_PORT", 00:28:48.141 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:48.141 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:48.141 "hdgst": ${hdgst:-false}, 00:28:48.141 "ddgst": ${ddgst:-false} 00:28:48.141 }, 00:28:48.141 "method": "bdev_nvme_attach_controller" 00:28:48.141 } 00:28:48.141 EOF 00:28:48.141 )") 00:28:48.141 10:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:48.141 10:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:48.141 10:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:48.141 { 00:28:48.141 "params": { 00:28:48.141 "name": "Nvme$subsystem", 00:28:48.141 "trtype": "$TEST_TRANSPORT", 00:28:48.141 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:48.141 "adrfam": "ipv4", 00:28:48.141 "trsvcid": "$NVMF_PORT", 00:28:48.141 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:48.141 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:48.141 "hdgst": ${hdgst:-false}, 00:28:48.141 "ddgst": ${ddgst:-false} 00:28:48.141 }, 00:28:48.141 "method": "bdev_nvme_attach_controller" 00:28:48.141 } 00:28:48.141 EOF 00:28:48.141 )") 00:28:48.141 10:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:48.141 10:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:48.141 10:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:48.141 { 00:28:48.141 "params": { 00:28:48.141 "name": "Nvme$subsystem", 00:28:48.141 "trtype": "$TEST_TRANSPORT", 00:28:48.141 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:48.141 "adrfam": "ipv4", 00:28:48.141 "trsvcid": "$NVMF_PORT", 
00:28:48.141 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:48.141 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:48.141 "hdgst": ${hdgst:-false}, 00:28:48.141 "ddgst": ${ddgst:-false} 00:28:48.141 }, 00:28:48.141 "method": "bdev_nvme_attach_controller" 00:28:48.141 } 00:28:48.141 EOF 00:28:48.141 )") 00:28:48.141 10:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:48.410 10:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:48.410 10:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:48.410 { 00:28:48.410 "params": { 00:28:48.410 "name": "Nvme$subsystem", 00:28:48.410 "trtype": "$TEST_TRANSPORT", 00:28:48.410 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:48.410 "adrfam": "ipv4", 00:28:48.410 "trsvcid": "$NVMF_PORT", 00:28:48.410 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:48.410 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:48.410 "hdgst": ${hdgst:-false}, 00:28:48.410 "ddgst": ${ddgst:-false} 00:28:48.410 }, 00:28:48.410 "method": "bdev_nvme_attach_controller" 00:28:48.410 } 00:28:48.410 EOF 00:28:48.410 )") 00:28:48.410 10:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:48.410 10:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:48.410 10:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:48.410 { 00:28:48.410 "params": { 00:28:48.410 "name": "Nvme$subsystem", 00:28:48.410 "trtype": "$TEST_TRANSPORT", 00:28:48.410 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:48.410 "adrfam": "ipv4", 00:28:48.410 "trsvcid": "$NVMF_PORT", 00:28:48.410 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:48.410 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:48.410 "hdgst": ${hdgst:-false}, 00:28:48.410 "ddgst": ${ddgst:-false} 00:28:48.410 }, 00:28:48.410 "method": "bdev_nvme_attach_controller" 00:28:48.410 } 00:28:48.410 EOF 00:28:48.410 )") 00:28:48.410 10:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:48.410 10:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:48.410 10:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:48.410 { 00:28:48.410 "params": { 00:28:48.410 "name": "Nvme$subsystem", 00:28:48.410 "trtype": "$TEST_TRANSPORT", 00:28:48.410 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:48.410 "adrfam": "ipv4", 00:28:48.410 "trsvcid": "$NVMF_PORT", 00:28:48.410 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:48.410 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:48.410 "hdgst": ${hdgst:-false}, 00:28:48.410 "ddgst": ${ddgst:-false} 00:28:48.410 }, 00:28:48.410 "method": "bdev_nvme_attach_controller" 00:28:48.410 } 00:28:48.410 EOF 00:28:48.410 )") 00:28:48.410 10:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:48.410 [2024-07-14 10:37:33.141643] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:28:48.410 [2024-07-14 10:37:33.141692] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2520895 ] 00:28:48.410 10:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:48.410 10:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:48.410 { 00:28:48.410 "params": { 00:28:48.410 "name": "Nvme$subsystem", 00:28:48.410 "trtype": "$TEST_TRANSPORT", 00:28:48.410 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:48.410 "adrfam": "ipv4", 00:28:48.410 "trsvcid": "$NVMF_PORT", 00:28:48.410 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:48.410 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:48.410 "hdgst": ${hdgst:-false}, 00:28:48.410 "ddgst": ${ddgst:-false} 00:28:48.410 }, 00:28:48.410 "method": "bdev_nvme_attach_controller" 00:28:48.410 } 00:28:48.410 EOF 00:28:48.410 )") 00:28:48.410 10:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:48.410 10:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:48.410 10:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:48.410 { 00:28:48.410 "params": { 00:28:48.410 "name": "Nvme$subsystem", 00:28:48.410 "trtype": "$TEST_TRANSPORT", 00:28:48.410 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:48.410 "adrfam": "ipv4", 00:28:48.410 "trsvcid": "$NVMF_PORT", 00:28:48.410 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:48.410 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:48.410 "hdgst": ${hdgst:-false}, 00:28:48.410 "ddgst": ${ddgst:-false} 00:28:48.410 }, 00:28:48.410 "method": "bdev_nvme_attach_controller" 00:28:48.410 } 00:28:48.410 EOF 00:28:48.410 )") 00:28:48.410 10:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:48.410 10:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:48.410 10:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:48.410 { 00:28:48.410 "params": { 00:28:48.410 "name": "Nvme$subsystem", 00:28:48.410 "trtype": "$TEST_TRANSPORT", 00:28:48.410 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:48.410 "adrfam": "ipv4", 00:28:48.410 "trsvcid": "$NVMF_PORT", 00:28:48.410 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:48.410 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:48.410 "hdgst": ${hdgst:-false}, 00:28:48.410 "ddgst": ${ddgst:-false} 00:28:48.410 }, 00:28:48.410 "method": "bdev_nvme_attach_controller" 00:28:48.410 } 00:28:48.410 EOF 00:28:48.410 )") 00:28:48.410 10:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:48.410 10:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
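gen_nvmf_target_json (the nvmf/common.sh@532-558 trace above) fills in the per-subsystem here-doc template once per NQN, joins the fragments, and pipes the result through `jq .`; bdevperf reads it as its JSON config via `--json /dev/fd/63`. The expanded parameter list printed next in the log is the `config` array of a bdev-subsystem document which, in outline, looks like the sketch below; the outer scaffolding is assumed from the usual SPDK JSON-config layout, not copied from common.sh, and one such attach entry exists for each of Nvme1..Nvme10.

  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
              "adrfam": "ipv4", "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false, "ddgst": false
            }
          }
        ]
      }
    ]
  }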
00:28:48.410 EAL: No free 2048 kB hugepages reported on node 1 00:28:48.410 10:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:28:48.410 10:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:48.410 "params": { 00:28:48.410 "name": "Nvme1", 00:28:48.410 "trtype": "tcp", 00:28:48.410 "traddr": "10.0.0.2", 00:28:48.410 "adrfam": "ipv4", 00:28:48.410 "trsvcid": "4420", 00:28:48.410 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:48.410 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:48.410 "hdgst": false, 00:28:48.410 "ddgst": false 00:28:48.410 }, 00:28:48.410 "method": "bdev_nvme_attach_controller" 00:28:48.410 },{ 00:28:48.410 "params": { 00:28:48.410 "name": "Nvme2", 00:28:48.410 "trtype": "tcp", 00:28:48.410 "traddr": "10.0.0.2", 00:28:48.410 "adrfam": "ipv4", 00:28:48.410 "trsvcid": "4420", 00:28:48.410 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:48.410 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:48.410 "hdgst": false, 00:28:48.410 "ddgst": false 00:28:48.410 }, 00:28:48.410 "method": "bdev_nvme_attach_controller" 00:28:48.410 },{ 00:28:48.410 "params": { 00:28:48.410 "name": "Nvme3", 00:28:48.410 "trtype": "tcp", 00:28:48.410 "traddr": "10.0.0.2", 00:28:48.410 "adrfam": "ipv4", 00:28:48.410 "trsvcid": "4420", 00:28:48.410 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:48.410 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:48.410 "hdgst": false, 00:28:48.410 "ddgst": false 00:28:48.410 }, 00:28:48.410 "method": "bdev_nvme_attach_controller" 00:28:48.410 },{ 00:28:48.410 "params": { 00:28:48.410 "name": "Nvme4", 00:28:48.410 "trtype": "tcp", 00:28:48.410 "traddr": "10.0.0.2", 00:28:48.410 "adrfam": "ipv4", 00:28:48.410 "trsvcid": "4420", 00:28:48.410 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:48.410 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:48.410 "hdgst": false, 00:28:48.410 "ddgst": false 00:28:48.410 }, 00:28:48.410 "method": "bdev_nvme_attach_controller" 00:28:48.410 },{ 00:28:48.410 "params": { 00:28:48.410 "name": "Nvme5", 00:28:48.410 "trtype": "tcp", 00:28:48.410 "traddr": "10.0.0.2", 00:28:48.410 "adrfam": "ipv4", 00:28:48.410 "trsvcid": "4420", 00:28:48.410 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:48.410 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:48.410 "hdgst": false, 00:28:48.410 "ddgst": false 00:28:48.410 }, 00:28:48.410 "method": "bdev_nvme_attach_controller" 00:28:48.410 },{ 00:28:48.410 "params": { 00:28:48.410 "name": "Nvme6", 00:28:48.410 "trtype": "tcp", 00:28:48.410 "traddr": "10.0.0.2", 00:28:48.410 "adrfam": "ipv4", 00:28:48.410 "trsvcid": "4420", 00:28:48.410 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:48.410 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:48.410 "hdgst": false, 00:28:48.410 "ddgst": false 00:28:48.410 }, 00:28:48.410 "method": "bdev_nvme_attach_controller" 00:28:48.410 },{ 00:28:48.411 "params": { 00:28:48.411 "name": "Nvme7", 00:28:48.411 "trtype": "tcp", 00:28:48.411 "traddr": "10.0.0.2", 00:28:48.411 "adrfam": "ipv4", 00:28:48.411 "trsvcid": "4420", 00:28:48.411 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:48.411 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:48.411 "hdgst": false, 00:28:48.411 "ddgst": false 00:28:48.411 }, 00:28:48.411 "method": "bdev_nvme_attach_controller" 00:28:48.411 },{ 00:28:48.411 "params": { 00:28:48.411 "name": "Nvme8", 00:28:48.411 "trtype": "tcp", 00:28:48.411 "traddr": "10.0.0.2", 00:28:48.411 "adrfam": "ipv4", 00:28:48.411 "trsvcid": "4420", 00:28:48.411 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:48.411 "hostnqn": 
"nqn.2016-06.io.spdk:host8", 00:28:48.411 "hdgst": false, 00:28:48.411 "ddgst": false 00:28:48.411 }, 00:28:48.411 "method": "bdev_nvme_attach_controller" 00:28:48.411 },{ 00:28:48.411 "params": { 00:28:48.411 "name": "Nvme9", 00:28:48.411 "trtype": "tcp", 00:28:48.411 "traddr": "10.0.0.2", 00:28:48.411 "adrfam": "ipv4", 00:28:48.411 "trsvcid": "4420", 00:28:48.411 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:48.411 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:48.411 "hdgst": false, 00:28:48.411 "ddgst": false 00:28:48.411 }, 00:28:48.411 "method": "bdev_nvme_attach_controller" 00:28:48.411 },{ 00:28:48.411 "params": { 00:28:48.411 "name": "Nvme10", 00:28:48.411 "trtype": "tcp", 00:28:48.411 "traddr": "10.0.0.2", 00:28:48.411 "adrfam": "ipv4", 00:28:48.411 "trsvcid": "4420", 00:28:48.411 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:48.411 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:48.411 "hdgst": false, 00:28:48.411 "ddgst": false 00:28:48.411 }, 00:28:48.411 "method": "bdev_nvme_attach_controller" 00:28:48.411 }' 00:28:48.411 [2024-07-14 10:37:33.210424] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:48.411 [2024-07-14 10:37:33.250528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:50.314 Running I/O for 10 seconds... 00:28:50.314 10:37:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:50.314 10:37:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:28:50.314 10:37:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:50.314 10:37:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.314 10:37:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:50.314 10:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.314 10:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:50.314 10:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:50.314 10:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:28:50.314 10:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:28:50.314 10:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:28:50.314 10:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:28:50.314 10:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:50.314 10:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:50.314 10:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:50.314 10:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.314 10:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:50.314 10:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.314 10:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:28:50.314 10:37:35 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:28:50.314 10:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:28:50.572 10:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:28:50.572 10:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:50.572 10:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:50.572 10:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:50.572 10:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.573 10:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:50.573 10:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.573 10:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=86 00:28:50.573 10:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 86 -ge 100 ']' 00:28:50.573 10:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:28:50.834 10:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:28:50.834 10:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:50.834 10:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:50.834 10:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:50.834 10:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.834 10:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:50.834 10:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.834 10:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=195 00:28:50.834 10:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 195 -ge 100 ']' 00:28:50.834 10:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:28:50.834 10:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:28:50.834 10:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:28:50.834 10:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 2520895 00:28:50.834 10:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 2520895 ']' 00:28:50.834 10:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 2520895 00:28:50.834 10:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:28:50.834 10:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:50.834 10:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2520895 00:28:50.834 10:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:50.834 10:37:35 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:28:50.834 10:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2520895'
00:28:50.834 killing process with pid 2520895
00:28:50.834 10:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 2520895
00:28:50.834 10:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 2520895
00:28:50.834 Received shutdown signal, test time was about 0.901920 seconds
00:28:50.834
00:28:50.834 Latency(us)
00:28:50.834 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:50.834 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:50.834 Verification LBA range: start 0x0 length 0x400
00:28:50.834 Nvme1n1 : 0.88 289.97 18.12 0.00 0.00 218155.41 23137.06 201508.95
00:28:50.834 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:50.834 Verification LBA range: start 0x0 length 0x400
00:28:50.834 Nvme2n1 : 0.89 294.46 18.40 0.00 0.00 209729.23 6895.53 198773.54
00:28:50.834 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:50.834 Verification LBA range: start 0x0 length 0x400
00:28:50.834 Nvme3n1 : 0.88 292.10 18.26 0.00 0.00 208625.09 19945.74 211538.81
00:28:50.834 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:50.834 Verification LBA range: start 0x0 length 0x400
00:28:50.834 Nvme4n1 : 0.89 291.67 18.23 0.00 0.00 205052.71 2194.03 213362.42
00:28:50.834 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:50.834 Verification LBA range: start 0x0 length 0x400
00:28:50.834 Nvme5n1 : 0.90 288.49 18.03 0.00 0.00 203590.98 2137.04 221568.67
00:28:50.834 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:50.834 Verification LBA range: start 0x0 length 0x400
00:28:50.834 Nvme6n1 : 0.90 284.71 17.79 0.00 0.00 202494.44 18122.13 217921.45
00:28:50.834 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:50.834 Verification LBA range: start 0x0 length 0x400
00:28:50.834 Nvme7n1 : 0.89 287.09 17.94 0.00 0.00 196610.67 17894.18 217921.45
00:28:50.834 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:50.834 Verification LBA range: start 0x0 length 0x400
00:28:50.834 Nvme8n1 : 0.89 289.10 18.07 0.00 0.00 191141.62 15272.74 217921.45
00:28:50.834 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:50.834 Verification LBA range: start 0x0 length 0x400
00:28:50.834 Nvme9n1 : 0.87 221.05 13.82 0.00 0.00 244160.19 32597.04 225215.89
00:28:50.834 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:50.834 Verification LBA range: start 0x0 length 0x400
00:28:50.834 Nvme10n1 : 0.87 220.58 13.79 0.00 0.00 239571.18 17894.18 240716.58
00:28:50.834 ===================================================================================================================
00:28:50.834 Total : 2759.21 172.45 0.00 0.00 210313.63 2137.04 240716.58
00:28:51.112 10:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1
00:28:52.060 10:37:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 2520623
00:28:52.060 10:37:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget
00:28:52.060 10:37:36 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:28:52.061 10:37:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:52.061 10:37:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:52.061 10:37:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:28:52.061 10:37:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:52.061 10:37:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:28:52.061 10:37:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:52.061 10:37:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:28:52.061 10:37:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:52.061 10:37:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:52.061 rmmod nvme_tcp 00:28:52.061 rmmod nvme_fabrics 00:28:52.061 rmmod nvme_keyring 00:28:52.061 10:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:52.061 10:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:28:52.061 10:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:28:52.061 10:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 2520623 ']' 00:28:52.061 10:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 2520623 00:28:52.061 10:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 2520623 ']' 00:28:52.061 10:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 2520623 00:28:52.061 10:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:28:52.061 10:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:52.061 10:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2520623 00:28:52.320 10:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:52.320 10:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:52.320 10:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2520623' 00:28:52.320 killing process with pid 2520623 00:28:52.320 10:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 2520623 00:28:52.320 10:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 2520623 00:28:52.580 10:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:52.580 10:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:52.580 10:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:52.580 10:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
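The tc2 body just traced follows the shutdown.sh@99-116 pattern: run bdevperf (-q 64 -o 65536 -w verify -t 10) against the ten cnode targets, poll bdev_get_iostat on Nvme1n1 until at least 100 reads have completed (the read_io_count values 3, 86 and 195 above), kill bdevperf, confirm the target is still alive with kill -0, then tear everything down while the transport is still loaded. A minimal sketch of that waitforio polling, assuming rpc.py (which rpc_cmd wraps here) and jq are on PATH:

  # poll until the first attached bdev has completed >= 100 reads, at most 10 tries
  i=10
  while (( i != 0 )); do
      reads=$(rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 \
                | jq -r '.bdevs[0].num_read_ops')
      [ "$reads" -ge 100 ] && break   # enough I/O observed; proceed to shut down under load
      sleep 0.25
      (( i-- ))
  done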
00:28:52.580 10:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:52.580 10:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:52.580 10:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:52.580 10:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:55.119 00:28:55.119 real 0m8.124s 00:28:55.119 user 0m25.086s 00:28:55.119 sys 0m1.320s 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:55.119 ************************************ 00:28:55.119 END TEST nvmf_shutdown_tc2 00:28:55.119 ************************************ 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:55.119 ************************************ 00:28:55.119 START TEST nvmf_shutdown_tc3 00:28:55.119 ************************************ 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
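Teardown (nvmf/common.sh@488-496 and the _remove_spdk_ns call at nvmf/common.sh@628 above) mirrors the setup: kill and wait for nvmf_tgt, unload nvme-tcp (which also drops nvme_fabrics and nvme_keyring, hence the rmmod messages), remove the test namespace, and flush the leftover 10.0.0.1 address from cvl_0_1; nvmf_shutdown_tc3 then begins with the same nvmftestinit sequence, which is why the PCI/net-device discovery trace repeats below. In outline, with the namespace deletion stated as an assumption about what the _remove_spdk_ns helper in autotest_common.sh does on this rig:

  kill "$nvmfpid" && wait "$nvmfpid"    # killprocess: stop the target (reactor_1)
  modprobe -v -r nvme-tcp               # rmmod nvme_tcp / nvme_fabrics / nvme_keyring
  ip netns delete cvl_0_0_ns_spdk       # assumed effect of _remove_spdk_ns
  ip -4 addr flush cvl_0_1              # drop the initiator-side test address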
00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:55.119 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:55.119 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:55.119 Found net devices under 0000:86:00.0: cvl_0_0 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:55.119 10:37:39 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:55.119 Found net devices under 0000:86:00.1: cvl_0_1 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:55.119 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:55.120 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:55.120 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:55.120 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:55.120 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:55.120 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:55.120 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:55.120 10:37:39 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:55.120 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:55.120 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:55.120 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:55.120 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.271 ms 00:28:55.120 00:28:55.120 --- 10.0.0.2 ping statistics --- 00:28:55.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:55.120 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:28:55.120 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:55.120 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:55.120 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:28:55.120 00:28:55.120 --- 10.0.0.1 ping statistics --- 00:28:55.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:55.120 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:28:55.120 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:55.120 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:28:55.120 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:55.120 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:55.120 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:55.120 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:55.120 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:55.120 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:55.120 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:55.120 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:28:55.120 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:55.120 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:55.120 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:55.120 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=2521956 00:28:55.120 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 2521956 00:28:55.120 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:55.120 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 2521956 ']' 00:28:55.120 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:55.120 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:55.120 10:37:39 
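The nvmf_tcp_init sequence traced above builds the test topology from the two physical ports: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), cvl_0_1 stays in the default namespace as the initiator side (10.0.0.1), TCP port 4420 is opened in iptables, and both directions are verified with a single ping. Condensed into a plain shell sketch (interface, namespace and address values are the ones used in this run; must be run as root):

# Target port lives in its own namespace; initiator port stays in the host namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # NVMe/TCP port
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator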
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:55.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:55.120 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:55.120 10:37:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:55.120 [2024-07-14 10:37:39.928594] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:28:55.120 [2024-07-14 10:37:39.928640] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:55.120 EAL: No free 2048 kB hugepages reported on node 1 00:28:55.120 [2024-07-14 10:37:39.982408] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:55.120 [2024-07-14 10:37:40.027289] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:55.120 [2024-07-14 10:37:40.027323] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:55.120 [2024-07-14 10:37:40.027330] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:55.120 [2024-07-14 10:37:40.027337] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:55.120 [2024-07-14 10:37:40.027342] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:55.120 [2024-07-14 10:37:40.027405] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:55.120 [2024-07-14 10:37:40.027433] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:55.120 [2024-07-14 10:37:40.027555] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:55.120 [2024-07-14 10:37:40.027556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:28:55.380 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:55.380 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:28:55.380 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:55.380 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:55.380 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:55.380 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:55.380 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:55.380 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.380 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:55.380 [2024-07-14 10:37:40.174216] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:55.380 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.380 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:28:55.380 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:28:55.380 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:55.380 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:55.380 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:55.380 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:55.380 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:55.380 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:55.380 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:55.380 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:55.380 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:55.380 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:55.380 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:55.380 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:55.380 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:55.380 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:55.380 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:55.380 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:55.380 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:55.380 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:55.380 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:55.380 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:55.380 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:55.380 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:55.380 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:55.380 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:28:55.380 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.380 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:55.380 Malloc1 00:28:55.380 [2024-07-14 10:37:40.270089] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:55.380 Malloc2 00:28:55.380 Malloc3 00:28:55.640 Malloc4 00:28:55.640 Malloc5 00:28:55.640 Malloc6 00:28:55.640 Malloc7 00:28:55.640 Malloc8 00:28:55.640 Malloc9 00:28:55.901 Malloc10 00:28:55.901 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
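The 'for i in "${num_subsystems[@]}"' loop above appends one block of RPC calls per index to rpcs.txt; the file's contents are not echoed in the trace, but the Malloc1..Malloc10 bdevs and the 10.0.0.2:4420 listener that appear afterwards correspond to the usual one-Malloc-backed-subsystem-per-index pattern. A hypothetical sketch of what one such block does, using standard scripts/rpc.py calls (the Malloc size, block size and serial number here are assumptions, not values taken from the trace):

# One subsystem's worth of target-side setup (i=1), expressed as rpc.py calls.
i=1
./scripts/rpc.py bdev_malloc_create -b Malloc$i 64 512      # 64 MiB, 512 B blocks (assumed values)
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420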
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.901 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:28:55.901 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:55.901 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:55.901 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=2522229 00:28:55.901 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 2522229 /var/tmp/bdevperf.sock 00:28:55.901 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 2522229 ']' 00:28:55.901 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:55.901 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:55.901 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:55.901 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:55.901 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:55.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:55.901 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:28:55.901 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:55.901 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:28:55.901 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:55.901 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:55.901 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:55.901 { 00:28:55.901 "params": { 00:28:55.901 "name": "Nvme$subsystem", 00:28:55.901 "trtype": "$TEST_TRANSPORT", 00:28:55.901 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:55.901 "adrfam": "ipv4", 00:28:55.901 "trsvcid": "$NVMF_PORT", 00:28:55.901 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:55.901 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:55.901 "hdgst": ${hdgst:-false}, 00:28:55.901 "ddgst": ${ddgst:-false} 00:28:55.901 }, 00:28:55.901 "method": "bdev_nvme_attach_controller" 00:28:55.901 } 00:28:55.901 EOF 00:28:55.901 )") 00:28:55.901 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:55.901 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:55.901 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:55.901 { 00:28:55.901 "params": { 00:28:55.901 "name": "Nvme$subsystem", 00:28:55.901 "trtype": "$TEST_TRANSPORT", 00:28:55.901 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:55.901 "adrfam": "ipv4", 00:28:55.901 "trsvcid": "$NVMF_PORT", 00:28:55.901 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:28:55.901 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:55.901 "hdgst": ${hdgst:-false}, 00:28:55.901 "ddgst": ${ddgst:-false} 00:28:55.901 }, 00:28:55.901 "method": "bdev_nvme_attach_controller" 00:28:55.901 } 00:28:55.901 EOF 00:28:55.901 )") 00:28:55.901 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:55.901 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:55.901 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:55.901 { 00:28:55.901 "params": { 00:28:55.901 "name": "Nvme$subsystem", 00:28:55.901 "trtype": "$TEST_TRANSPORT", 00:28:55.901 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:55.901 "adrfam": "ipv4", 00:28:55.901 "trsvcid": "$NVMF_PORT", 00:28:55.901 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:55.901 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:55.901 "hdgst": ${hdgst:-false}, 00:28:55.901 "ddgst": ${ddgst:-false} 00:28:55.901 }, 00:28:55.901 "method": "bdev_nvme_attach_controller" 00:28:55.901 } 00:28:55.901 EOF 00:28:55.901 )") 00:28:55.901 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:55.901 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:55.901 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:55.901 { 00:28:55.901 "params": { 00:28:55.901 "name": "Nvme$subsystem", 00:28:55.901 "trtype": "$TEST_TRANSPORT", 00:28:55.901 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:55.901 "adrfam": "ipv4", 00:28:55.901 "trsvcid": "$NVMF_PORT", 00:28:55.901 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:55.901 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:55.901 "hdgst": ${hdgst:-false}, 00:28:55.901 "ddgst": ${ddgst:-false} 00:28:55.901 }, 00:28:55.901 "method": "bdev_nvme_attach_controller" 00:28:55.901 } 00:28:55.901 EOF 00:28:55.901 )") 00:28:55.901 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:55.901 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:55.901 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:55.901 { 00:28:55.901 "params": { 00:28:55.901 "name": "Nvme$subsystem", 00:28:55.901 "trtype": "$TEST_TRANSPORT", 00:28:55.901 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:55.901 "adrfam": "ipv4", 00:28:55.901 "trsvcid": "$NVMF_PORT", 00:28:55.901 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:55.901 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:55.901 "hdgst": ${hdgst:-false}, 00:28:55.901 "ddgst": ${ddgst:-false} 00:28:55.901 }, 00:28:55.901 "method": "bdev_nvme_attach_controller" 00:28:55.901 } 00:28:55.901 EOF 00:28:55.901 )") 00:28:55.901 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:55.901 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:55.901 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:55.901 { 00:28:55.901 "params": { 00:28:55.901 "name": "Nvme$subsystem", 00:28:55.901 "trtype": "$TEST_TRANSPORT", 00:28:55.901 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:55.901 "adrfam": "ipv4", 00:28:55.902 "trsvcid": "$NVMF_PORT", 00:28:55.902 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:55.902 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:28:55.902 "hdgst": ${hdgst:-false}, 00:28:55.902 "ddgst": ${ddgst:-false} 00:28:55.902 }, 00:28:55.902 "method": "bdev_nvme_attach_controller" 00:28:55.902 } 00:28:55.902 EOF 00:28:55.902 )") 00:28:55.902 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:55.902 [2024-07-14 10:37:40.736550] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:28:55.902 [2024-07-14 10:37:40.736596] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2522229 ] 00:28:55.902 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:55.902 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:55.902 { 00:28:55.902 "params": { 00:28:55.902 "name": "Nvme$subsystem", 00:28:55.902 "trtype": "$TEST_TRANSPORT", 00:28:55.902 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:55.902 "adrfam": "ipv4", 00:28:55.902 "trsvcid": "$NVMF_PORT", 00:28:55.902 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:55.902 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:55.902 "hdgst": ${hdgst:-false}, 00:28:55.902 "ddgst": ${ddgst:-false} 00:28:55.902 }, 00:28:55.902 "method": "bdev_nvme_attach_controller" 00:28:55.902 } 00:28:55.902 EOF 00:28:55.902 )") 00:28:55.902 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:55.902 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:55.902 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:55.902 { 00:28:55.902 "params": { 00:28:55.902 "name": "Nvme$subsystem", 00:28:55.902 "trtype": "$TEST_TRANSPORT", 00:28:55.902 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:55.902 "adrfam": "ipv4", 00:28:55.902 "trsvcid": "$NVMF_PORT", 00:28:55.902 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:55.902 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:55.902 "hdgst": ${hdgst:-false}, 00:28:55.902 "ddgst": ${ddgst:-false} 00:28:55.902 }, 00:28:55.902 "method": "bdev_nvme_attach_controller" 00:28:55.902 } 00:28:55.902 EOF 00:28:55.902 )") 00:28:55.902 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:55.902 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:55.902 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:55.902 { 00:28:55.902 "params": { 00:28:55.902 "name": "Nvme$subsystem", 00:28:55.902 "trtype": "$TEST_TRANSPORT", 00:28:55.902 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:55.902 "adrfam": "ipv4", 00:28:55.902 "trsvcid": "$NVMF_PORT", 00:28:55.902 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:55.902 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:55.902 "hdgst": ${hdgst:-false}, 00:28:55.902 "ddgst": ${ddgst:-false} 00:28:55.902 }, 00:28:55.902 "method": "bdev_nvme_attach_controller" 00:28:55.902 } 00:28:55.902 EOF 00:28:55.902 )") 00:28:55.902 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:55.902 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:55.902 10:37:40 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:55.902 { 00:28:55.902 "params": { 00:28:55.902 "name": "Nvme$subsystem", 00:28:55.902 "trtype": "$TEST_TRANSPORT", 00:28:55.902 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:55.902 "adrfam": "ipv4", 00:28:55.902 "trsvcid": "$NVMF_PORT", 00:28:55.902 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:55.902 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:55.902 "hdgst": ${hdgst:-false}, 00:28:55.902 "ddgst": ${ddgst:-false} 00:28:55.902 }, 00:28:55.902 "method": "bdev_nvme_attach_controller" 00:28:55.902 } 00:28:55.902 EOF 00:28:55.902 )") 00:28:55.902 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:55.902 EAL: No free 2048 kB hugepages reported on node 1 00:28:55.902 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:28:55.902 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:28:55.902 10:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:55.902 "params": { 00:28:55.902 "name": "Nvme1", 00:28:55.902 "trtype": "tcp", 00:28:55.902 "traddr": "10.0.0.2", 00:28:55.902 "adrfam": "ipv4", 00:28:55.902 "trsvcid": "4420", 00:28:55.902 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:55.902 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:55.902 "hdgst": false, 00:28:55.902 "ddgst": false 00:28:55.902 }, 00:28:55.902 "method": "bdev_nvme_attach_controller" 00:28:55.902 },{ 00:28:55.902 "params": { 00:28:55.902 "name": "Nvme2", 00:28:55.902 "trtype": "tcp", 00:28:55.902 "traddr": "10.0.0.2", 00:28:55.902 "adrfam": "ipv4", 00:28:55.902 "trsvcid": "4420", 00:28:55.902 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:55.902 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:55.902 "hdgst": false, 00:28:55.902 "ddgst": false 00:28:55.902 }, 00:28:55.902 "method": "bdev_nvme_attach_controller" 00:28:55.902 },{ 00:28:55.902 "params": { 00:28:55.902 "name": "Nvme3", 00:28:55.902 "trtype": "tcp", 00:28:55.902 "traddr": "10.0.0.2", 00:28:55.902 "adrfam": "ipv4", 00:28:55.902 "trsvcid": "4420", 00:28:55.902 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:55.902 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:55.902 "hdgst": false, 00:28:55.902 "ddgst": false 00:28:55.902 }, 00:28:55.902 "method": "bdev_nvme_attach_controller" 00:28:55.902 },{ 00:28:55.902 "params": { 00:28:55.902 "name": "Nvme4", 00:28:55.902 "trtype": "tcp", 00:28:55.902 "traddr": "10.0.0.2", 00:28:55.902 "adrfam": "ipv4", 00:28:55.902 "trsvcid": "4420", 00:28:55.902 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:55.902 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:55.902 "hdgst": false, 00:28:55.902 "ddgst": false 00:28:55.902 }, 00:28:55.902 "method": "bdev_nvme_attach_controller" 00:28:55.902 },{ 00:28:55.902 "params": { 00:28:55.902 "name": "Nvme5", 00:28:55.902 "trtype": "tcp", 00:28:55.902 "traddr": "10.0.0.2", 00:28:55.902 "adrfam": "ipv4", 00:28:55.902 "trsvcid": "4420", 00:28:55.902 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:55.902 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:55.902 "hdgst": false, 00:28:55.902 "ddgst": false 00:28:55.902 }, 00:28:55.902 "method": "bdev_nvme_attach_controller" 00:28:55.902 },{ 00:28:55.902 "params": { 00:28:55.902 "name": "Nvme6", 00:28:55.902 "trtype": "tcp", 00:28:55.902 "traddr": "10.0.0.2", 00:28:55.902 "adrfam": "ipv4", 00:28:55.902 "trsvcid": "4420", 00:28:55.902 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:55.902 "hostnqn": "nqn.2016-06.io.spdk:host6", 
00:28:55.902 "hdgst": false, 00:28:55.902 "ddgst": false 00:28:55.902 }, 00:28:55.902 "method": "bdev_nvme_attach_controller" 00:28:55.902 },{ 00:28:55.902 "params": { 00:28:55.902 "name": "Nvme7", 00:28:55.902 "trtype": "tcp", 00:28:55.902 "traddr": "10.0.0.2", 00:28:55.902 "adrfam": "ipv4", 00:28:55.902 "trsvcid": "4420", 00:28:55.902 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:55.902 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:55.902 "hdgst": false, 00:28:55.902 "ddgst": false 00:28:55.902 }, 00:28:55.902 "method": "bdev_nvme_attach_controller" 00:28:55.902 },{ 00:28:55.902 "params": { 00:28:55.902 "name": "Nvme8", 00:28:55.902 "trtype": "tcp", 00:28:55.902 "traddr": "10.0.0.2", 00:28:55.902 "adrfam": "ipv4", 00:28:55.902 "trsvcid": "4420", 00:28:55.902 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:55.902 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:55.902 "hdgst": false, 00:28:55.902 "ddgst": false 00:28:55.902 }, 00:28:55.902 "method": "bdev_nvme_attach_controller" 00:28:55.902 },{ 00:28:55.902 "params": { 00:28:55.902 "name": "Nvme9", 00:28:55.902 "trtype": "tcp", 00:28:55.902 "traddr": "10.0.0.2", 00:28:55.902 "adrfam": "ipv4", 00:28:55.902 "trsvcid": "4420", 00:28:55.902 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:55.902 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:55.902 "hdgst": false, 00:28:55.902 "ddgst": false 00:28:55.902 }, 00:28:55.902 "method": "bdev_nvme_attach_controller" 00:28:55.902 },{ 00:28:55.902 "params": { 00:28:55.902 "name": "Nvme10", 00:28:55.902 "trtype": "tcp", 00:28:55.902 "traddr": "10.0.0.2", 00:28:55.902 "adrfam": "ipv4", 00:28:55.902 "trsvcid": "4420", 00:28:55.902 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:55.902 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:55.902 "hdgst": false, 00:28:55.902 "ddgst": false 00:28:55.902 }, 00:28:55.902 "method": "bdev_nvme_attach_controller" 00:28:55.902 }' 00:28:55.902 [2024-07-14 10:37:40.807345] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:55.902 [2024-07-14 10:37:40.846951] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:57.808 Running I/O for 10 seconds... 
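The long parameter dump above is gen_nvmf_target_json at work: for each of the ten subsystems it expands a here-document into one bdev_nvme_attach_controller parameter object, collects the objects in a bash array, then joins the array with commas and hands the result to bdevperf through the --json /dev/fd/63 process substitution. A reduced sketch of that accumulate-and-join pattern, for two controllers only (transport, address and port values are the ones visible in this run; digest flags simplified to false):

# Build per-controller attach parameters the way gen_nvmf_target_json does,
# then print them comma-joined; bdevperf consumes this via --json /dev/fd/63.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420
config=()
for subsystem in 1 2; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
  )")
done
(IFS=,; printf '%s\n' "${config[*]}")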
00:28:57.808 10:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:57.808 10:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:28:57.808 10:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:57.808 10:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:57.808 10:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:57.808 10:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:57.808 10:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:57.808 10:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:57.808 10:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:57.808 10:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:28:57.808 10:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:28:57.808 10:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:28:57.808 10:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:28:57.808 10:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:57.808 10:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:57.808 10:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:57.808 10:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:57.808 10:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:57.808 10:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:57.808 10:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:28:57.808 10:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:28:57.808 10:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:28:57.808 10:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:28:58.067 10:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:58.067 10:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:58.067 10:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:58.067 10:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:58.067 10:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:58.067 10:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:58.067 10:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 
-- # read_io_count=67 00:28:58.067 10:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:28:58.067 10:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:28:58.342 10:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:28:58.342 10:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:58.342 10:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:58.342 10:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:58.342 10:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:58.342 10:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:58.342 10:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:58.342 10:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:28:58.342 10:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:28:58.342 10:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:28:58.342 10:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:28:58.342 10:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:28:58.342 10:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 2521956 00:28:58.342 10:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 2521956 ']' 00:28:58.342 10:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 2521956 00:28:58.342 10:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:28:58.342 10:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:58.342 10:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2521956 00:28:58.342 10:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:58.342 10:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:58.342 10:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2521956' 00:28:58.342 killing process with pid 2521956 00:28:58.342 10:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 2521956 00:28:58.342 10:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 2521956 00:28:58.342 [2024-07-14 10:37:43.188038] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d09d0 is same with the state(5) to be set 00:28:58.342 [2024-07-14 10:37:43.188082] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d09d0 is same with the state(5) to be set 00:28:58.342 [2024-07-14 10:37:43.188096] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d09d0 is same with the state(5) to be set 00:28:58.342 [2024-07-14 10:37:43.188103] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x23d09d0 is same with the state(5) to be set
00:28:58.342 [2024-07-14 10:37:43.188110 - 10:37:43.188475] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d09d0 is same with the state(5) to be set (same message repeated for every call against tqpair=0x23d09d0 in this interval)
00:28:58.343 [2024-07-14 10:37:43.189575 - 10:37:43.189985] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d0e70 is same with the state(5) to be set (same message repeated for every call against tqpair=0x23d0e70 in this interval)
00:28:58.343 [2024-07-14 10:37:43.191050 - 10:37:43.191386] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d1330 is same with the state(5) to be set (same message repeated for every call against tqpair=0x23d1330 in this interval)
*ERROR*: The recv state of tqpair=0x23d1330 is same with the state(5) to be set 00:28:58.344 [2024-07-14 10:37:43.191392] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d1330 is same with the state(5) to be set 00:28:58.344 [2024-07-14 10:37:43.191398] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d1330 is same with the state(5) to be set 00:28:58.344 [2024-07-14 10:37:43.191405] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d1330 is same with the state(5) to be set 00:28:58.344 [2024-07-14 10:37:43.191410] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d1330 is same with the state(5) to be set 00:28:58.344 [2024-07-14 10:37:43.191416] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d1330 is same with the state(5) to be set 00:28:58.344 [2024-07-14 10:37:43.191422] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d1330 is same with the state(5) to be set 00:28:58.344 [2024-07-14 10:37:43.191428] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d1330 is same with the state(5) to be set 00:28:58.344 [2024-07-14 10:37:43.191434] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d1330 is same with the state(5) to be set 00:28:58.344 [2024-07-14 10:37:43.191439] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d1330 is same with the state(5) to be set 00:28:58.344 [2024-07-14 10:37:43.191445] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d1330 is same with the state(5) to be set 00:28:58.344 [2024-07-14 10:37:43.191451] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d1330 is same with the state(5) to be set 00:28:58.344 [2024-07-14 10:37:43.191457] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d1330 is same with the state(5) to be set 00:28:58.344 [2024-07-14 10:37:43.192138] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d17d0 is same with the state(5) to be set 00:28:58.344 [2024-07-14 10:37:43.192160] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d17d0 is same with the state(5) to be set 00:28:58.344 [2024-07-14 10:37:43.192168] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d17d0 is same with the state(5) to be set 00:28:58.344 [2024-07-14 10:37:43.192178] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d17d0 is same with the state(5) to be set 00:28:58.344 [2024-07-14 10:37:43.192184] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d17d0 is same with the state(5) to be set 00:28:58.344 [2024-07-14 10:37:43.192192] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d17d0 is same with the state(5) to be set 00:28:58.344 [2024-07-14 10:37:43.192198] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d17d0 is same with the state(5) to be set 00:28:58.344 [2024-07-14 10:37:43.192204] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d17d0 is same with the state(5) to be set 00:28:58.344 [2024-07-14 10:37:43.192210] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d17d0 is same with the state(5) to be set 00:28:58.344 [2024-07-14 
10:37:43.192216] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d17d0 is same with the state(5) to be set 00:28:58.344 [2024-07-14 10:37:43.192222] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d17d0 is same with the state(5) to be set 00:28:58.344 [2024-07-14 10:37:43.192233] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d17d0 is same with the state(5) to be set 00:28:58.344 [2024-07-14 10:37:43.192239] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d17d0 is same with the state(5) to be set 00:28:58.344 [2024-07-14 10:37:43.192246] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d17d0 is same with the state(5) to be set 00:28:58.344 [2024-07-14 10:37:43.192252] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d17d0 is same with the state(5) to be set 00:28:58.344 [2024-07-14 10:37:43.192258] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d17d0 is same with the state(5) to be set 00:28:58.344 [2024-07-14 10:37:43.192264] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d17d0 is same with the state(5) to be set 00:28:58.344 [2024-07-14 10:37:43.192270] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d17d0 is same with the state(5) to be set 00:28:58.344 [2024-07-14 10:37:43.192276] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d17d0 is same with the state(5) to be set 00:28:58.344 [2024-07-14 10:37:43.192282] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d17d0 is same with the state(5) to be set 00:28:58.344 [2024-07-14 10:37:43.192288] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d17d0 is same with the state(5) to be set 00:28:58.344 [2024-07-14 10:37:43.192294] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d17d0 is same with the state(5) to be set 00:28:58.344 [2024-07-14 10:37:43.192300] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d17d0 is same with the state(5) to be set 00:28:58.344 [2024-07-14 10:37:43.192306] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d17d0 is same with the state(5) to be set 00:28:58.344 [2024-07-14 10:37:43.192312] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d17d0 is same with the state(5) to be set 00:28:58.344 [2024-07-14 10:37:43.192318] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d17d0 is same with the state(5) to be set 00:28:58.344 [2024-07-14 10:37:43.192324] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d17d0 is same with the state(5) to be set 00:28:58.344 [2024-07-14 10:37:43.192330] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d17d0 is same with the state(5) to be set 00:28:58.344 [2024-07-14 10:37:43.192336] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d17d0 is same with the state(5) to be set 00:28:58.344 [2024-07-14 10:37:43.192342] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d17d0 is same with the state(5) to be set 00:28:58.344 [2024-07-14 10:37:43.192349] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d17d0 is same 
with the state(5) to be set 00:28:58.344 [2024-07-14 10:37:43.192356] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d17d0 is same with the state(5) to be set 00:28:58.345 [2024-07-14 10:37:43.192362] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d17d0 is same with the state(5) to be set 00:28:58.345 [2024-07-14 10:37:43.192368] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d17d0 is same with the state(5) to be set 00:28:58.345 [2024-07-14 10:37:43.192374] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d17d0 is same with the state(5) to be set 00:28:58.345 [2024-07-14 10:37:43.192380] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d17d0 is same with the state(5) to be set 00:28:58.345 [2024-07-14 10:37:43.192387] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d17d0 is same with the state(5) to be set 00:28:58.345 [2024-07-14 10:37:43.192393] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d17d0 is same with the state(5) to be set 00:28:58.345 [2024-07-14 10:37:43.192398] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d17d0 is same with the state(5) to be set 00:28:58.345 [2024-07-14 10:37:43.192404] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d17d0 is same with the state(5) to be set 00:28:58.345 [2024-07-14 10:37:43.192411] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d17d0 is same with the state(5) to be set 00:28:58.345 [2024-07-14 10:37:43.192417] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d17d0 is same with the state(5) to be set 00:28:58.345 [2024-07-14 10:37:43.192423] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d17d0 is same with the state(5) to be set 00:28:58.345 [2024-07-14 10:37:43.192430] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d17d0 is same with the state(5) to be set 00:28:58.345 [2024-07-14 10:37:43.192436] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d17d0 is same with the state(5) to be set 00:28:58.345 [2024-07-14 10:37:43.192443] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d17d0 is same with the state(5) to be set 00:28:58.345 [2024-07-14 10:37:43.192448] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d17d0 is same with the state(5) to be set 00:28:58.345 [2024-07-14 10:37:43.192455] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d17d0 is same with the state(5) to be set 00:28:58.345 [2024-07-14 10:37:43.192463] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d17d0 is same with the state(5) to be set 00:28:58.345 [2024-07-14 10:37:43.192470] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d17d0 is same with the state(5) to be set 00:28:58.345 [2024-07-14 10:37:43.192477] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d17d0 is same with the state(5) to be set 00:28:58.345 [2024-07-14 10:37:43.192483] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d17d0 is same with the state(5) to be set 00:28:58.345 [2024-07-14 10:37:43.192488] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d17d0 is same with the state(5) to be set 00:28:58.345 [2024-07-14 10:37:43.192494] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d17d0 is same with the state(5) to be set 00:28:58.345 [2024-07-14 10:37:43.192500] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d17d0 is same with the state(5) to be set 00:28:58.345 [2024-07-14 10:37:43.192506] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d17d0 is same with the state(5) to be set 00:28:58.345 [2024-07-14 10:37:43.192512] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d17d0 is same with the state(5) to be set 00:28:58.345 [2024-07-14 10:37:43.192518] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d17d0 is same with the state(5) to be set 00:28:58.345 [2024-07-14 10:37:43.192526] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d17d0 is same with the state(5) to be set 00:28:58.345 [2024-07-14 10:37:43.192532] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d17d0 is same with the state(5) to be set 00:28:58.345 [2024-07-14 10:37:43.192537] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d17d0 is same with the state(5) to be set 00:28:58.345 [2024-07-14 10:37:43.192543] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d17d0 is same with the state(5) to be set 00:28:58.345 [2024-07-14 10:37:43.192549] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d17d0 is same with the state(5) to be set 00:28:58.345 [2024-07-14 10:37:43.194057] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d2110 is same with the state(5) to be set 00:28:58.345 [2024-07-14 10:37:43.194072] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d2110 is same with the state(5) to be set 00:28:58.345 [2024-07-14 10:37:43.194079] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d2110 is same with the state(5) to be set 00:28:58.345 [2024-07-14 10:37:43.194084] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d2110 is same with the state(5) to be set 00:28:58.345 [2024-07-14 10:37:43.194090] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d2110 is same with the state(5) to be set 00:28:58.345 [2024-07-14 10:37:43.194099] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d2110 is same with the state(5) to be set 00:28:58.345 [2024-07-14 10:37:43.194107] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d2110 is same with the state(5) to be set 00:28:58.345 [2024-07-14 10:37:43.194114] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d2110 is same with the state(5) to be set 00:28:58.345 [2024-07-14 10:37:43.194120] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d2110 is same with the state(5) to be set 00:28:58.345 [2024-07-14 10:37:43.194125] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d2110 is same with the state(5) to be set 00:28:58.345 [2024-07-14 10:37:43.194131] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d2110 is same with the 
state(5) to be set 00:28:58.345 [2024-07-14 10:37:43.194137] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d2110 is same with the state(5) to be set 00:28:58.345 [2024-07-14 10:37:43.194143] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d2110 is same with the state(5) to be set 00:28:58.345 [2024-07-14 10:37:43.194149] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d2110 is same with the state(5) to be set 00:28:58.345 [2024-07-14 10:37:43.194164] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d2110 is same with the state(5) to be set 00:28:58.345 [2024-07-14 10:37:43.194169] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d2110 is same with the state(5) to be set 00:28:58.345 [2024-07-14 10:37:43.194175] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d2110 is same with the state(5) to be set 00:28:58.345 [2024-07-14 10:37:43.194181] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d2110 is same with the state(5) to be set 00:28:58.345 [2024-07-14 10:37:43.194187] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d2110 is same with the state(5) to be set 00:28:58.345 [2024-07-14 10:37:43.194194] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d2110 is same with the state(5) to be set 00:28:58.345 [2024-07-14 10:37:43.194200] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d2110 is same with the state(5) to be set 00:28:58.345 [2024-07-14 10:37:43.194206] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d2110 is same with the state(5) to be set 00:28:58.345 [2024-07-14 10:37:43.194214] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d2110 is same with the state(5) to be set 00:28:58.345 [2024-07-14 10:37:43.194220] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d2110 is same with the state(5) to be set 00:28:58.345 [2024-07-14 10:37:43.194231] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d2110 is same with the state(5) to be set 00:28:58.345 [2024-07-14 10:37:43.194237] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d2110 is same with the state(5) to be set 00:28:58.345 [2024-07-14 10:37:43.194243] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d2110 is same with the state(5) to be set 00:28:58.345 [2024-07-14 10:37:43.194249] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d2110 is same with the state(5) to be set 00:28:58.345 [2024-07-14 10:37:43.194255] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d2110 is same with the state(5) to be set 00:28:58.345 [2024-07-14 10:37:43.194262] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d2110 is same with the state(5) to be set 00:28:58.345 [2024-07-14 10:37:43.194267] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d2110 is same with the state(5) to be set 00:28:58.346 [2024-07-14 10:37:43.194274] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d2110 is same with the state(5) to be set 00:28:58.346 [2024-07-14 10:37:43.194279] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x23d2110 is same with the state(5) to be set 00:28:58.346 [2024-07-14 10:37:43.194285] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d2110 is same with the state(5) to be set 00:28:58.346 [2024-07-14 10:37:43.194291] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d2110 is same with the state(5) to be set 00:28:58.346 [2024-07-14 10:37:43.194297] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d2110 is same with the state(5) to be set 00:28:58.346 [2024-07-14 10:37:43.194303] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d2110 is same with the state(5) to be set 00:28:58.346 [2024-07-14 10:37:43.194309] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d2110 is same with the state(5) to be set 00:28:58.346 [2024-07-14 10:37:43.194315] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d2110 is same with the state(5) to be set 00:28:58.346 [2024-07-14 10:37:43.194321] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d2110 is same with the state(5) to be set 00:28:58.346 [2024-07-14 10:37:43.194326] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d2110 is same with the state(5) to be set 00:28:58.346 [2024-07-14 10:37:43.194332] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d2110 is same with the state(5) to be set 00:28:58.346 [2024-07-14 10:37:43.194338] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d2110 is same with the state(5) to be set 00:28:58.346 [2024-07-14 10:37:43.194344] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d2110 is same with the state(5) to be set 00:28:58.346 [2024-07-14 10:37:43.194350] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d2110 is same with the state(5) to be set 00:28:58.346 [2024-07-14 10:37:43.194356] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d2110 is same with the state(5) to be set 00:28:58.346 [2024-07-14 10:37:43.194364] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d2110 is same with the state(5) to be set 00:28:58.346 [2024-07-14 10:37:43.194370] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d2110 is same with the state(5) to be set 00:28:58.346 [2024-07-14 10:37:43.194376] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d2110 is same with the state(5) to be set 00:28:58.346 [2024-07-14 10:37:43.194383] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d2110 is same with the state(5) to be set 00:28:58.346 [2024-07-14 10:37:43.194389] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d2110 is same with the state(5) to be set 00:28:58.346 [2024-07-14 10:37:43.194395] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d2110 is same with the state(5) to be set 00:28:58.346 [2024-07-14 10:37:43.194400] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d2110 is same with the state(5) to be set 00:28:58.346 [2024-07-14 10:37:43.194407] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d2110 is same with the state(5) to be set 00:28:58.346 [2024-07-14 
10:37:43.194413] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d2110 is same with the state(5) to be set 00:28:58.346 [2024-07-14 10:37:43.194419] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d2110 is same with the state(5) to be set 00:28:58.346 [2024-07-14 10:37:43.194424] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d2110 is same with the state(5) to be set 00:28:58.346 [2024-07-14 10:37:43.194430] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d2110 is same with the state(5) to be set 00:28:58.346 [2024-07-14 10:37:43.194437] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d2110 is same with the state(5) to be set 00:28:58.346 [2024-07-14 10:37:43.194443] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d2110 is same with the state(5) to be set 00:28:58.346 [2024-07-14 10:37:43.194449] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d2110 is same with the state(5) to be set 00:28:58.346 [2024-07-14 10:37:43.194456] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d2110 is same with the state(5) to be set 00:28:58.346 [2024-07-14 10:37:43.194462] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d2110 is same with the state(5) to be set 00:28:58.346 [2024-07-14 10:37:43.195240] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d25d0 is same with the state(5) to be set 00:28:58.346 [2024-07-14 10:37:43.195253] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d25d0 is same with the state(5) to be set 00:28:58.346 [2024-07-14 10:37:43.195260] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d25d0 is same with the state(5) to be set 00:28:58.346 [2024-07-14 10:37:43.195266] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d25d0 is same with the state(5) to be set 00:28:58.346 [2024-07-14 10:37:43.195273] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d25d0 is same with the state(5) to be set 00:28:58.346 [2024-07-14 10:37:43.195279] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d25d0 is same with the state(5) to be set 00:28:58.346 [2024-07-14 10:37:43.195285] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d25d0 is same with the state(5) to be set 00:28:58.346 [2024-07-14 10:37:43.195291] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d25d0 is same with the state(5) to be set 00:28:58.346 [2024-07-14 10:37:43.195297] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d25d0 is same with the state(5) to be set 00:28:58.346 [2024-07-14 10:37:43.195303] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d25d0 is same with the state(5) to be set 00:28:58.346 [2024-07-14 10:37:43.195309] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d25d0 is same with the state(5) to be set 00:28:58.346 [2024-07-14 10:37:43.195314] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d25d0 is same with the state(5) to be set 00:28:58.346 [2024-07-14 10:37:43.195320] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d25d0 is same 
with the state(5) to be set 00:28:58.346 [2024-07-14 10:37:43.195329] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d25d0 is same with the state(5) to be set 00:28:58.346 [2024-07-14 10:37:43.195336] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d25d0 is same with the state(5) to be set 00:28:58.346 [2024-07-14 10:37:43.195341] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d25d0 is same with the state(5) to be set 00:28:58.346 [2024-07-14 10:37:43.195347] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d25d0 is same with the state(5) to be set 00:28:58.346 [2024-07-14 10:37:43.195353] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d25d0 is same with the state(5) to be set 00:28:58.346 [2024-07-14 10:37:43.195359] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d25d0 is same with the state(5) to be set 00:28:58.346 [2024-07-14 10:37:43.195365] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d25d0 is same with the state(5) to be set 00:28:58.346 [2024-07-14 10:37:43.195371] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d25d0 is same with the state(5) to be set 00:28:58.346 [2024-07-14 10:37:43.195377] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d25d0 is same with the state(5) to be set 00:28:58.346 [2024-07-14 10:37:43.195383] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d25d0 is same with the state(5) to be set 00:28:58.346 [2024-07-14 10:37:43.195389] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d25d0 is same with the state(5) to be set 00:28:58.346 [2024-07-14 10:37:43.195395] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d25d0 is same with the state(5) to be set 00:28:58.346 [2024-07-14 10:37:43.195400] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d25d0 is same with the state(5) to be set 00:28:58.346 [2024-07-14 10:37:43.195406] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d25d0 is same with the state(5) to be set 00:28:58.346 [2024-07-14 10:37:43.195412] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d25d0 is same with the state(5) to be set 00:28:58.346 [2024-07-14 10:37:43.195419] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d25d0 is same with the state(5) to be set 00:28:58.346 [2024-07-14 10:37:43.195424] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d25d0 is same with the state(5) to be set 00:28:58.346 [2024-07-14 10:37:43.195430] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d25d0 is same with the state(5) to be set 00:28:58.346 [2024-07-14 10:37:43.195436] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d25d0 is same with the state(5) to be set 00:28:58.346 [2024-07-14 10:37:43.195442] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d25d0 is same with the state(5) to be set 00:28:58.346 [2024-07-14 10:37:43.195448] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d25d0 is same with the state(5) to be set 00:28:58.346 [2024-07-14 10:37:43.195454] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d25d0 is same with the state(5) to be set 00:28:58.346 [2024-07-14 10:37:43.195460] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d25d0 is same with the state(5) to be set 00:28:58.347 [2024-07-14 10:37:43.195466] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d25d0 is same with the state(5) to be set 00:28:58.347 [2024-07-14 10:37:43.195472] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d25d0 is same with the state(5) to be set 00:28:58.347 [2024-07-14 10:37:43.195478] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d25d0 is same with the state(5) to be set 00:28:58.347 [2024-07-14 10:37:43.195483] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d25d0 is same with the state(5) to be set 00:28:58.347 [2024-07-14 10:37:43.195491] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d25d0 is same with the state(5) to be set 00:28:58.347 [2024-07-14 10:37:43.195496] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d25d0 is same with the state(5) to be set 00:28:58.347 [2024-07-14 10:37:43.195502] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d25d0 is same with the state(5) to be set 00:28:58.347 [2024-07-14 10:37:43.195508] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d25d0 is same with the state(5) to be set 00:28:58.347 [2024-07-14 10:37:43.195514] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d25d0 is same with the state(5) to be set 00:28:58.347 [2024-07-14 10:37:43.195520] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d25d0 is same with the state(5) to be set 00:28:58.347 [2024-07-14 10:37:43.195525] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d25d0 is same with the state(5) to be set 00:28:58.347 [2024-07-14 10:37:43.195531] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d25d0 is same with the state(5) to be set 00:28:58.347 [2024-07-14 10:37:43.195537] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d25d0 is same with the state(5) to be set 00:28:58.347 [2024-07-14 10:37:43.195542] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d25d0 is same with the state(5) to be set 00:28:58.347 [2024-07-14 10:37:43.195548] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d25d0 is same with the state(5) to be set 00:28:58.347 [2024-07-14 10:37:43.195554] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d25d0 is same with the state(5) to be set 00:28:58.347 [2024-07-14 10:37:43.195559] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d25d0 is same with the state(5) to be set 00:28:58.347 [2024-07-14 10:37:43.195565] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d25d0 is same with the state(5) to be set 00:28:58.347 [2024-07-14 10:37:43.195571] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d25d0 is same with the state(5) to be set 00:28:58.347 [2024-07-14 10:37:43.195577] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d25d0 is same with the 
state(5) to be set 00:28:58.347 [2024-07-14 10:37:43.195583] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d25d0 is same with the state(5) to be set 00:28:58.347 [2024-07-14 10:37:43.195589] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d25d0 is same with the state(5) to be set 00:28:58.347 [2024-07-14 10:37:43.195595] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d25d0 is same with the state(5) to be set 00:28:58.347 [2024-07-14 10:37:43.195600] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d25d0 is same with the state(5) to be set 00:28:58.347 [2024-07-14 10:37:43.195606] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d25d0 is same with the state(5) to be set 00:28:58.347 [2024-07-14 10:37:43.195612] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d25d0 is same with the state(5) to be set 00:28:58.347 [2024-07-14 10:37:43.195618] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d25d0 is same with the state(5) to be set 00:28:58.347 [2024-07-14 10:37:43.197328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:58.347 [2024-07-14 10:37:43.197360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.347 [2024-07-14 10:37:43.197370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:58.347 [2024-07-14 10:37:43.197377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.347 [2024-07-14 10:37:43.197392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:58.347 [2024-07-14 10:37:43.197399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.347 [2024-07-14 10:37:43.197406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:58.347 [2024-07-14 10:37:43.197413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.347 [2024-07-14 10:37:43.197421] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d58c0 is same with the state(5) to be set 00:28:58.347 [2024-07-14 10:37:43.197450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:58.347 [2024-07-14 10:37:43.197458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.347 [2024-07-14 10:37:43.197465] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:58.347 [2024-07-14 10:37:43.197472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.347 [2024-07-14 10:37:43.197479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:58.347 [2024-07-14 10:37:43.197486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.347 [2024-07-14 10:37:43.197493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:58.347 [2024-07-14 10:37:43.197500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.347 [2024-07-14 10:37:43.197506] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec9d0 is same with the state(5) to be set 00:28:58.347 [2024-07-14 10:37:43.197531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:58.347 [2024-07-14 10:37:43.197539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.347 [2024-07-14 10:37:43.197547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:58.347 [2024-07-14 10:37:43.197553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.347 [2024-07-14 10:37:43.197560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:58.347 [2024-07-14 10:37:43.197566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.347 [2024-07-14 10:37:43.197573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:58.347 [2024-07-14 10:37:43.197579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.347 [2024-07-14 10:37:43.197585] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2f610 is same with the state(5) to be set 00:28:58.347 [2024-07-14 10:37:43.197608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:58.347 [2024-07-14 10:37:43.197615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.347 [2024-07-14 10:37:43.197624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:58.347 [2024-07-14 10:37:43.197631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.347 [2024-07-14 10:37:43.197639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:58.347 [2024-07-14 10:37:43.197646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.347 [2024-07-14 10:37:43.197653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:58.347 [2024-07-14 10:37:43.197659] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.347 [2024-07-14 10:37:43.197666] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d5b10 is same with the state(5) to be set 00:28:58.347 [2024-07-14 10:37:43.197689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:58.347 [2024-07-14 10:37:43.197697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.347 [2024-07-14 10:37:43.197704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:58.347 [2024-07-14 10:37:43.197711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.347 [2024-07-14 10:37:43.197718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:58.347 [2024-07-14 10:37:43.197724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.347 [2024-07-14 10:37:43.197731] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:58.347 [2024-07-14 10:37:43.197738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.347 [2024-07-14 10:37:43.197744] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10463a0 is same with the state(5) to be set 00:28:58.347 [2024-07-14 10:37:43.197766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:58.347 [2024-07-14 10:37:43.197775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.347 [2024-07-14 10:37:43.197782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:58.347 [2024-07-14 10:37:43.197788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.347 [2024-07-14 10:37:43.197795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:58.347 [2024-07-14 10:37:43.197801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.347 [2024-07-14 10:37:43.197808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:58.347 [2024-07-14 10:37:43.197814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.347 [2024-07-14 10:37:43.197820] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1067c30 is same with the state(5) to be set 00:28:58.348 [2024-07-14 10:37:43.197843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 
nsid:0 cdw10:00000000 cdw11:00000000 00:28:58.348 [2024-07-14 10:37:43.197852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.348 [2024-07-14 10:37:43.197859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:58.348 [2024-07-14 10:37:43.197866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.348 [2024-07-14 10:37:43.197875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:58.348 [2024-07-14 10:37:43.197882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.348 [2024-07-14 10:37:43.197889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:58.348 [2024-07-14 10:37:43.197895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.348 [2024-07-14 10:37:43.197901] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f9a20 is same with the state(5) to be set 00:28:58.348 [2024-07-14 10:37:43.197924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:58.348 [2024-07-14 10:37:43.197932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.348 [2024-07-14 10:37:43.197939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:58.348 [2024-07-14 10:37:43.197945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.348 [2024-07-14 10:37:43.197952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:58.348 [2024-07-14 10:37:43.197958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.348 [2024-07-14 10:37:43.197965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:58.348 [2024-07-14 10:37:43.197972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.348 [2024-07-14 10:37:43.197978] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc23d70 is same with the state(5) to be set 00:28:58.348 [2024-07-14 10:37:43.197999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:58.348 [2024-07-14 10:37:43.198006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.348 [2024-07-14 10:37:43.198013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:58.348 [2024-07-14 10:37:43.198020] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.348 [2024-07-14 10:37:43.198027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:58.348 [2024-07-14 10:37:43.198033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.348 [2024-07-14 10:37:43.198040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:58.348 [2024-07-14 10:37:43.198046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.348 [2024-07-14 10:37:43.198052] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10376b0 is same with the state(5) to be set 00:28:58.348 [2024-07-14 10:37:43.198075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:58.348 [2024-07-14 10:37:43.198083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.348 [2024-07-14 10:37:43.198090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:58.348 [2024-07-14 10:37:43.198096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.348 [2024-07-14 10:37:43.198103] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:58.348 [2024-07-14 10:37:43.198109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.348 [2024-07-14 10:37:43.198117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:58.348 [2024-07-14 10:37:43.198123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.348 [2024-07-14 10:37:43.198129] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1203210 is same with the state(5) to be set 00:28:58.348 [2024-07-14 10:37:43.198340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.348 [2024-07-14 10:37:43.198359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.348 [2024-07-14 10:37:43.198373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.348 [2024-07-14 10:37:43.198380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.348 [2024-07-14 10:37:43.198388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.348 [2024-07-14 10:37:43.198395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.348 [2024-07-14 10:37:43.198403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.348 [2024-07-14 10:37:43.198410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.348 [2024-07-14 10:37:43.198418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.348 [2024-07-14 10:37:43.198424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.348 [2024-07-14 10:37:43.198432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.348 [2024-07-14 10:37:43.198438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.348 [2024-07-14 10:37:43.198446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.348 [2024-07-14 10:37:43.198452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.348 [2024-07-14 10:37:43.198460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.348 [2024-07-14 10:37:43.198466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.348 [2024-07-14 10:37:43.198477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.348 [2024-07-14 10:37:43.198484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.348 [2024-07-14 10:37:43.198491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.348 [2024-07-14 10:37:43.198498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.348 [2024-07-14 10:37:43.198506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.348 [2024-07-14 10:37:43.198512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.348 [2024-07-14 10:37:43.198520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.348 [2024-07-14 10:37:43.198526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.348 [2024-07-14 10:37:43.198534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.348 [2024-07-14 10:37:43.198541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.348 [2024-07-14 10:37:43.198549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.348 [2024-07-14 10:37:43.198555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.348 [2024-07-14 10:37:43.198563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.348 [2024-07-14 10:37:43.198569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.348 [2024-07-14 10:37:43.198577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.348 [2024-07-14 10:37:43.198583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.348 [2024-07-14 10:37:43.198592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.348 [2024-07-14 10:37:43.198598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.348 [2024-07-14 10:37:43.198606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.348 [2024-07-14 10:37:43.198612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.348 [2024-07-14 10:37:43.198620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.348 [2024-07-14 10:37:43.198627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.348 [2024-07-14 10:37:43.198635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.348 [2024-07-14 10:37:43.198641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.348 [2024-07-14 10:37:43.198649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.348 [2024-07-14 10:37:43.198657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.348 [2024-07-14 10:37:43.198665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.348 [2024-07-14 10:37:43.198672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.348 [2024-07-14 10:37:43.198680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.348 [2024-07-14 10:37:43.198686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:58.348 [2024-07-14 10:37:43.198694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.348 [2024-07-14 10:37:43.198700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.348 [2024-07-14 10:37:43.198708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.348 [2024-07-14 10:37:43.198714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.349 [2024-07-14 10:37:43.198722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.349 [2024-07-14 10:37:43.198729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.349 [2024-07-14 10:37:43.198737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.349 [2024-07-14 10:37:43.198743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.349 [2024-07-14 10:37:43.198751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.349 [2024-07-14 10:37:43.198757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.349 [2024-07-14 10:37:43.198767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.349 [2024-07-14 10:37:43.198774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.349 [2024-07-14 10:37:43.198782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.349 [2024-07-14 10:37:43.198789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.349 [2024-07-14 10:37:43.198796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.349 [2024-07-14 10:37:43.198802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.349 [2024-07-14 10:37:43.198810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.349 [2024-07-14 10:37:43.198817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.349 [2024-07-14 10:37:43.198825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.349 [2024-07-14 10:37:43.198831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:28:58.349 [2024-07-14 10:37:43.198841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.349 [2024-07-14 10:37:43.198847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.349 [2024-07-14 10:37:43.198855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.349 [2024-07-14 10:37:43.198862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.349 [2024-07-14 10:37:43.198870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.349 [2024-07-14 10:37:43.198876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.349 [2024-07-14 10:37:43.198884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.349 [2024-07-14 10:37:43.198891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.349 [2024-07-14 10:37:43.198898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.349 [2024-07-14 10:37:43.198905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.349 [2024-07-14 10:37:43.198913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.349 [2024-07-14 10:37:43.198919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.349 [2024-07-14 10:37:43.198927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.349 [2024-07-14 10:37:43.198933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.349 [2024-07-14 10:37:43.198942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.349 [2024-07-14 10:37:43.198948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.349 [2024-07-14 10:37:43.198956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.349 [2024-07-14 10:37:43.198963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.349 [2024-07-14 10:37:43.198971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.349 [2024-07-14 10:37:43.198978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:58.349 [2024-07-14 10:37:43.198986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.349 [2024-07-14 10:37:43.198992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.349 [2024-07-14 10:37:43.199001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.349 [2024-07-14 10:37:43.199008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.349 [2024-07-14 10:37:43.199018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.349 [2024-07-14 10:37:43.199026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.349 [2024-07-14 10:37:43.199034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.349 [2024-07-14 10:37:43.199040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.349 [2024-07-14 10:37:43.199048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.349 [2024-07-14 10:37:43.199055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.349 [2024-07-14 10:37:43.199063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.349 [2024-07-14 10:37:43.199069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.349 [2024-07-14 10:37:43.199077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.349 [2024-07-14 10:37:43.199083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.349 [2024-07-14 10:37:43.199091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.349 [2024-07-14 10:37:43.199098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.349 [2024-07-14 10:37:43.199106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.349 [2024-07-14 10:37:43.199112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.349 [2024-07-14 10:37:43.199120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.349 [2024-07-14 10:37:43.199126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.349 [2024-07-14 
10:37:43.199134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.349 [2024-07-14 10:37:43.199140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.349 [2024-07-14 10:37:43.199148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.349 [2024-07-14 10:37:43.199155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.349 [2024-07-14 10:37:43.199162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.349 [2024-07-14 10:37:43.199169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.349 [2024-07-14 10:37:43.199177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.349 [2024-07-14 10:37:43.199183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.349 [2024-07-14 10:37:43.199191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.349 [2024-07-14 10:37:43.199197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.349 [2024-07-14 10:37:43.199206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.349 [2024-07-14 10:37:43.199212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.349 [2024-07-14 10:37:43.199220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.349 [2024-07-14 10:37:43.199231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.349 [2024-07-14 10:37:43.199242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.349 [2024-07-14 10:37:43.199248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.349 [2024-07-14 10:37:43.199257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.349 [2024-07-14 10:37:43.199264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.349 [2024-07-14 10:37:43.199271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.349 [2024-07-14 10:37:43.199278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.349 [2024-07-14 10:37:43.199286] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.349 [2024-07-14 10:37:43.199292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.349 [2024-07-14 10:37:43.199319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:58.349 [2024-07-14 10:37:43.199370] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x10e00c0 was disconnected and freed. reset controller. 00:28:58.349 [2024-07-14 10:37:43.199771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.349 [2024-07-14 10:37:43.199792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.349 [2024-07-14 10:37:43.199804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.349 [2024-07-14 10:37:43.199811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.350 [2024-07-14 10:37:43.199820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.350 [2024-07-14 10:37:43.199827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.350 [2024-07-14 10:37:43.199835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.350 [2024-07-14 10:37:43.199842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.350 [2024-07-14 10:37:43.199850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.350 [2024-07-14 10:37:43.199856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.350 [2024-07-14 10:37:43.199865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.350 [2024-07-14 10:37:43.199875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.350 [2024-07-14 10:37:43.199883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.350 [2024-07-14 10:37:43.199890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.350 [2024-07-14 10:37:43.199898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.350 [2024-07-14 10:37:43.199904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.350 [2024-07-14 10:37:43.199912] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.350 [2024-07-14 10:37:43.199921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.350 [2024-07-14 10:37:43.199929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.350 [2024-07-14 10:37:43.199936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.350 [2024-07-14 10:37:43.199945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.350 [2024-07-14 10:37:43.199952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.350 [2024-07-14 10:37:43.199961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.350 [2024-07-14 10:37:43.199968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.350 [2024-07-14 10:37:43.199977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.350 [2024-07-14 10:37:43.199983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.350 [2024-07-14 10:37:43.199992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.350 [2024-07-14 10:37:43.199998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.350 [2024-07-14 10:37:43.200006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.350 [2024-07-14 10:37:43.200012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.350 [2024-07-14 10:37:43.200021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.350 [2024-07-14 10:37:43.200027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.350 [2024-07-14 10:37:43.200036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.350 [2024-07-14 10:37:43.200042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.350 [2024-07-14 10:37:43.200050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.350 [2024-07-14 10:37:43.200056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.350 [2024-07-14 10:37:43.200066] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.350 [2024-07-14 10:37:43.200073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.350 [2024-07-14 10:37:43.200081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.350 [2024-07-14 10:37:43.200088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.350 [2024-07-14 10:37:43.200096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.350 [2024-07-14 10:37:43.200102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.350 [2024-07-14 10:37:43.200110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.350 [2024-07-14 10:37:43.200118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.350 [2024-07-14 10:37:43.200126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.350 [2024-07-14 10:37:43.200133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.350 [2024-07-14 10:37:43.200141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.350 [2024-07-14 10:37:43.200147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.350 [2024-07-14 10:37:43.200155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.350 [2024-07-14 10:37:43.200161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.350 [2024-07-14 10:37:43.200169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.350 [2024-07-14 10:37:43.200176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.350 [2024-07-14 10:37:43.200184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.350 [2024-07-14 10:37:43.200190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.350 [2024-07-14 10:37:43.200199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.350 [2024-07-14 10:37:43.200206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.350 [2024-07-14 10:37:43.200213] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.350 [2024-07-14 10:37:43.200220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.350 [2024-07-14 10:37:43.200235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.350 [2024-07-14 10:37:43.214453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.350 [2024-07-14 10:37:43.214481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.350 [2024-07-14 10:37:43.214494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.350 [2024-07-14 10:37:43.214504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.350 [2024-07-14 10:37:43.214512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.350 [2024-07-14 10:37:43.214522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.350 [2024-07-14 10:37:43.214530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.350 [2024-07-14 10:37:43.214540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.350 [2024-07-14 10:37:43.214548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.350 [2024-07-14 10:37:43.214558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.350 [2024-07-14 10:37:43.214566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.350 [2024-07-14 10:37:43.214575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.350 [2024-07-14 10:37:43.214583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.350 [2024-07-14 10:37:43.214593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.350 [2024-07-14 10:37:43.214601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.350 [2024-07-14 10:37:43.214612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.350 [2024-07-14 10:37:43.214621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.350 [2024-07-14 10:37:43.214631] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.350 [2024-07-14 10:37:43.214639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.350 [2024-07-14 10:37:43.214648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.350 [2024-07-14 10:37:43.214656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.350 [2024-07-14 10:37:43.214666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.350 [2024-07-14 10:37:43.214674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.350 [2024-07-14 10:37:43.214684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.350 [2024-07-14 10:37:43.214691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.350 [2024-07-14 10:37:43.214701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.350 [2024-07-14 10:37:43.214709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.350 [2024-07-14 10:37:43.214721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.351 [2024-07-14 10:37:43.214730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.351 [2024-07-14 10:37:43.214739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.351 [2024-07-14 10:37:43.214747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.351 [2024-07-14 10:37:43.214757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.351 [2024-07-14 10:37:43.214765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.351 [2024-07-14 10:37:43.214775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.351 [2024-07-14 10:37:43.214783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.351 [2024-07-14 10:37:43.214792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.351 [2024-07-14 10:37:43.214800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.351 [2024-07-14 10:37:43.214810] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.351 [2024-07-14 10:37:43.214818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.351 [2024-07-14 10:37:43.214828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.351 [2024-07-14 10:37:43.214836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.351 [2024-07-14 10:37:43.214846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.351 [2024-07-14 10:37:43.214853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.351 [2024-07-14 10:37:43.214863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.351 [2024-07-14 10:37:43.214871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.351 [2024-07-14 10:37:43.214880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.351 [2024-07-14 10:37:43.214888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.351 [2024-07-14 10:37:43.214898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.351 [2024-07-14 10:37:43.214906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.351 [2024-07-14 10:37:43.214916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.351 [2024-07-14 10:37:43.214923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.351 [2024-07-14 10:37:43.214933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.351 [2024-07-14 10:37:43.214943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.351 [2024-07-14 10:37:43.214953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.351 [2024-07-14 10:37:43.214960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.351 [2024-07-14 10:37:43.214970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.351 [2024-07-14 10:37:43.214978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.351 [2024-07-14 10:37:43.214988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.351 [2024-07-14 10:37:43.214995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.351 [2024-07-14 10:37:43.215005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.351 [2024-07-14 10:37:43.215013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.351 [2024-07-14 10:37:43.215023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.351 [2024-07-14 10:37:43.215031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.351 [2024-07-14 10:37:43.215041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.351 [2024-07-14 10:37:43.215048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.351 [2024-07-14 10:37:43.215058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.351 [2024-07-14 10:37:43.215066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.351 [2024-07-14 10:37:43.215075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.351 [2024-07-14 10:37:43.215083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.351 [2024-07-14 10:37:43.215128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:58.351 [2024-07-14 10:37:43.215189] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x10325a0 was disconnected and freed. reset controller. 
00:28:58.351 [2024-07-14 10:37:43.215372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.351 [2024-07-14 10:37:43.215389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.351 [2024-07-14 10:37:43.215405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.351 [2024-07-14 10:37:43.215413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.351 [2024-07-14 10:37:43.215424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.351 [2024-07-14 10:37:43.215432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.351 [2024-07-14 10:37:43.215446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.351 [2024-07-14 10:37:43.215455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.351 [2024-07-14 10:37:43.215466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.351 [2024-07-14 10:37:43.215475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.351 [2024-07-14 10:37:43.215485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.351 [2024-07-14 10:37:43.215493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.351 [2024-07-14 10:37:43.215503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.351 [2024-07-14 10:37:43.215510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.351 [2024-07-14 10:37:43.215520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.351 [2024-07-14 10:37:43.215528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.351 [2024-07-14 10:37:43.215538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.351 [2024-07-14 10:37:43.215546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.351 [2024-07-14 10:37:43.215556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.351 [2024-07-14 10:37:43.215564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.351 [2024-07-14 
10:37:43.215574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.351 [2024-07-14 10:37:43.215582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.351 [2024-07-14 10:37:43.215592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.351 [2024-07-14 10:37:43.215601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.351 [2024-07-14 10:37:43.215611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.351 [2024-07-14 10:37:43.215618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.351 [2024-07-14 10:37:43.215628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.351 [2024-07-14 10:37:43.215636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.351 [2024-07-14 10:37:43.215646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.351 [2024-07-14 10:37:43.215654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.352 [2024-07-14 10:37:43.215664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.352 [2024-07-14 10:37:43.215678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.352 [2024-07-14 10:37:43.215688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.352 [2024-07-14 10:37:43.215695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.352 [2024-07-14 10:37:43.215705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.352 [2024-07-14 10:37:43.215713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.352 [2024-07-14 10:37:43.215723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.352 [2024-07-14 10:37:43.215731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.352 [2024-07-14 10:37:43.215741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.352 [2024-07-14 10:37:43.215749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.352 [2024-07-14 
10:37:43.215760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.352 [2024-07-14 10:37:43.215767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.352 [2024-07-14 10:37:43.215777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.352 [2024-07-14 10:37:43.215785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.352 [2024-07-14 10:37:43.215795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.352 [2024-07-14 10:37:43.215803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.352 [2024-07-14 10:37:43.215813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.352 [2024-07-14 10:37:43.215821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.352 [2024-07-14 10:37:43.215831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.352 [2024-07-14 10:37:43.215838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.352 [2024-07-14 10:37:43.215848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.352 [2024-07-14 10:37:43.215856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.352 [2024-07-14 10:37:43.215866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.352 [2024-07-14 10:37:43.215875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.352 [2024-07-14 10:37:43.215885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.352 [2024-07-14 10:37:43.215893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.352 [2024-07-14 10:37:43.215904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.352 [2024-07-14 10:37:43.215912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.352 [2024-07-14 10:37:43.215922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.352 [2024-07-14 10:37:43.215930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.352 [2024-07-14 
10:37:43.215940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.352 [2024-07-14 10:37:43.215948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.352 [2024-07-14 10:37:43.215958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.352 [2024-07-14 10:37:43.215965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.352 [2024-07-14 10:37:43.215975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.352 [2024-07-14 10:37:43.215983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.352 [2024-07-14 10:37:43.215993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.352 [2024-07-14 10:37:43.216002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.352 [2024-07-14 10:37:43.216012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.352 [2024-07-14 10:37:43.216020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.352 [2024-07-14 10:37:43.216029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.352 [2024-07-14 10:37:43.216037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.352 [2024-07-14 10:37:43.216048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.352 [2024-07-14 10:37:43.216056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.352 [2024-07-14 10:37:43.216066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.352 [2024-07-14 10:37:43.216074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.352 [2024-07-14 10:37:43.216084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.352 [2024-07-14 10:37:43.216091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.352 [2024-07-14 10:37:43.216101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.352 [2024-07-14 10:37:43.216109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.352 [2024-07-14 
10:37:43.216119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.352 [2024-07-14 10:37:43.216129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.352 [2024-07-14 10:37:43.216139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.352 [2024-07-14 10:37:43.216147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.352 [2024-07-14 10:37:43.216158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.352 [2024-07-14 10:37:43.216166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.352 [2024-07-14 10:37:43.216176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.352 [2024-07-14 10:37:43.216184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.352 [2024-07-14 10:37:43.216194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.352 [2024-07-14 10:37:43.216202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.352 [2024-07-14 10:37:43.216212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.352 [2024-07-14 10:37:43.216219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.352 [2024-07-14 10:37:43.216235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.352 [2024-07-14 10:37:43.216243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.352 [2024-07-14 10:37:43.216253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.352 [2024-07-14 10:37:43.216261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.352 [2024-07-14 10:37:43.216271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.352 [2024-07-14 10:37:43.216279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.352 [2024-07-14 10:37:43.216289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.352 [2024-07-14 10:37:43.216298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.352 [2024-07-14 
10:37:43.216307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.352 [2024-07-14 10:37:43.216315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.352 [2024-07-14 10:37:43.216325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.352 [2024-07-14 10:37:43.216334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.352 [2024-07-14 10:37:43.216344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.352 [2024-07-14 10:37:43.216352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.352 [2024-07-14 10:37:43.216363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.352 [2024-07-14 10:37:43.216371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.352 [2024-07-14 10:37:43.216381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.352 [2024-07-14 10:37:43.216389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.352 [2024-07-14 10:37:43.216399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.352 [2024-07-14 10:37:43.216407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.352 [2024-07-14 10:37:43.216417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.352 [2024-07-14 10:37:43.216425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.352 [2024-07-14 10:37:43.216435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.353 [2024-07-14 10:37:43.216443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.353 [2024-07-14 10:37:43.216453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.353 [2024-07-14 10:37:43.216461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.353 [2024-07-14 10:37:43.216471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.353 [2024-07-14 10:37:43.216478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.353 [2024-07-14 
10:37:43.216488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.353 [2024-07-14 10:37:43.216496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.353 [2024-07-14 10:37:43.216506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.353 [2024-07-14 10:37:43.216514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.353 [2024-07-14 10:37:43.216525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.353 [2024-07-14 10:37:43.216548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.353 [2024-07-14 10:37:43.216562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.353 [2024-07-14 10:37:43.216572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.353 [2024-07-14 10:37:43.216584] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a61b0 is same with the state(5) to be set 00:28:58.353 [2024-07-14 10:37:43.216649] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x11a61b0 was disconnected and freed. reset controller. 00:28:58.353 [2024-07-14 10:37:43.236804] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11d58c0 (9): Bad file descriptor 00:28:58.353 [2024-07-14 10:37:43.236860] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec9d0 (9): Bad file descriptor 00:28:58.353 [2024-07-14 10:37:43.236872] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2f610 (9): Bad file descriptor 00:28:58.353 [2024-07-14 10:37:43.236885] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11d5b10 (9): Bad file descriptor 00:28:58.353 [2024-07-14 10:37:43.236895] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10463a0 (9): Bad file descriptor 00:28:58.353 [2024-07-14 10:37:43.236909] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1067c30 (9): Bad file descriptor 00:28:58.353 [2024-07-14 10:37:43.236924] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11f9a20 (9): Bad file descriptor 00:28:58.353 [2024-07-14 10:37:43.236935] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc23d70 (9): Bad file descriptor 00:28:58.353 [2024-07-14 10:37:43.236948] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10376b0 (9): Bad file descriptor 00:28:58.353 [2024-07-14 10:37:43.236959] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1203210 (9): Bad file descriptor 00:28:58.353 [2024-07-14 10:37:43.240012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.353 [2024-07-14 
10:37:43.240039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.353 [2024-07-14 10:37:43.240054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.353 [2024-07-14 10:37:43.240061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.353 [2024-07-14 10:37:43.240071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.353 [2024-07-14 10:37:43.240078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.353 [2024-07-14 10:37:43.240086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.353 [2024-07-14 10:37:43.240093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.353 [2024-07-14 10:37:43.240101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.353 [2024-07-14 10:37:43.240108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.353 [2024-07-14 10:37:43.240116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.353 [2024-07-14 10:37:43.240122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.353 [2024-07-14 10:37:43.240131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.353 [2024-07-14 10:37:43.240137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.353 [2024-07-14 10:37:43.240146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.353 [2024-07-14 10:37:43.240153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.353 [2024-07-14 10:37:43.240161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.353 [2024-07-14 10:37:43.240171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.353 [2024-07-14 10:37:43.240180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.353 [2024-07-14 10:37:43.240186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.353 [2024-07-14 10:37:43.240195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.353 [2024-07-14 10:37:43.240201] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.353 [2024-07-14 10:37:43.240209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.353 [2024-07-14 10:37:43.240216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.353 [2024-07-14 10:37:43.240228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.353 [2024-07-14 10:37:43.240235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.353 [2024-07-14 10:37:43.240243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.353 [2024-07-14 10:37:43.240250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.353 [2024-07-14 10:37:43.240258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.353 [2024-07-14 10:37:43.240265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.353 [2024-07-14 10:37:43.240273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.353 [2024-07-14 10:37:43.240280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.353 [2024-07-14 10:37:43.240288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.353 [2024-07-14 10:37:43.240295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.353 [2024-07-14 10:37:43.240303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.353 [2024-07-14 10:37:43.240309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.353 [2024-07-14 10:37:43.240317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.353 [2024-07-14 10:37:43.240336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.353 [2024-07-14 10:37:43.240345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.353 [2024-07-14 10:37:43.240352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.353 [2024-07-14 10:37:43.240360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.353 [2024-07-14 10:37:43.240367] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.353 [2024-07-14 10:37:43.240377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.353 [2024-07-14 10:37:43.240384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.353 [2024-07-14 10:37:43.240392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.353 [2024-07-14 10:37:43.240399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.353 [2024-07-14 10:37:43.240407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.353 [2024-07-14 10:37:43.240414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.353 [2024-07-14 10:37:43.240422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.353 [2024-07-14 10:37:43.240428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.353 [2024-07-14 10:37:43.240437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.353 [2024-07-14 10:37:43.240443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.353 [2024-07-14 10:37:43.240451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.353 [2024-07-14 10:37:43.240457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.353 [2024-07-14 10:37:43.240466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.353 [2024-07-14 10:37:43.240472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.353 [2024-07-14 10:37:43.240480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.353 [2024-07-14 10:37:43.240487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.353 [2024-07-14 10:37:43.240495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.353 [2024-07-14 10:37:43.240502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.353 [2024-07-14 10:37:43.240511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.354 [2024-07-14 10:37:43.240517] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.354 [2024-07-14 10:37:43.240525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.354 [2024-07-14 10:37:43.240532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.354 [2024-07-14 10:37:43.240540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.354 [2024-07-14 10:37:43.240546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.354 [2024-07-14 10:37:43.240554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.354 [2024-07-14 10:37:43.240563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.354 [2024-07-14 10:37:43.240571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.354 [2024-07-14 10:37:43.240578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.354 [2024-07-14 10:37:43.240586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.354 [2024-07-14 10:37:43.240592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.354 [2024-07-14 10:37:43.240601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.354 [2024-07-14 10:37:43.240607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.354 [2024-07-14 10:37:43.240616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.354 [2024-07-14 10:37:43.240623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.354 [2024-07-14 10:37:43.240631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.354 [2024-07-14 10:37:43.240637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.354 [2024-07-14 10:37:43.240646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.354 [2024-07-14 10:37:43.240652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.354 [2024-07-14 10:37:43.240660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.354 [2024-07-14 10:37:43.240666] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.354 [2024-07-14 10:37:43.240675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.354 [2024-07-14 10:37:43.240681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.354 [2024-07-14 10:37:43.240689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.354 [2024-07-14 10:37:43.240696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.354 [2024-07-14 10:37:43.240704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.354 [2024-07-14 10:37:43.240711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.354 [2024-07-14 10:37:43.240719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.354 [2024-07-14 10:37:43.240726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.354 [2024-07-14 10:37:43.240734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.354 [2024-07-14 10:37:43.240740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.354 [2024-07-14 10:37:43.240750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.354 [2024-07-14 10:37:43.240756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.354 [2024-07-14 10:37:43.240764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.354 [2024-07-14 10:37:43.240771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.354 [2024-07-14 10:37:43.240778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.354 [2024-07-14 10:37:43.240785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.354 [2024-07-14 10:37:43.240793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.354 [2024-07-14 10:37:43.240800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.354 [2024-07-14 10:37:43.240807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.354 [2024-07-14 10:37:43.240814] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.354 [2024-07-14 10:37:43.240822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.354 [2024-07-14 10:37:43.240829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.354 [2024-07-14 10:37:43.240837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.354 [2024-07-14 10:37:43.240844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.354 [2024-07-14 10:37:43.240852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.354 [2024-07-14 10:37:43.240858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.354 [2024-07-14 10:37:43.240867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.354 [2024-07-14 10:37:43.240873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.354 [2024-07-14 10:37:43.240882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.354 [2024-07-14 10:37:43.240888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.354 [2024-07-14 10:37:43.240897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.354 [2024-07-14 10:37:43.240903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.354 [2024-07-14 10:37:43.240912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.354 [2024-07-14 10:37:43.240918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.354 [2024-07-14 10:37:43.240926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.354 [2024-07-14 10:37:43.240934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.354 [2024-07-14 10:37:43.240942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.354 [2024-07-14 10:37:43.240949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.354 [2024-07-14 10:37:43.240957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.354 [2024-07-14 10:37:43.240964] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.354 [2024-07-14 10:37:43.240972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.354 [2024-07-14 10:37:43.240978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.354 [2024-07-14 10:37:43.240986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.354 [2024-07-14 10:37:43.240993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.354 [2024-07-14 10:37:43.241001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.354 [2024-07-14 10:37:43.241007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.354 [2024-07-14 10:37:43.241014] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a7670 is same with the state(5) to be set 00:28:58.354 [2024-07-14 10:37:43.241413] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x11a7670 was disconnected and freed. reset controller. 00:28:58.354 [2024-07-14 10:37:43.241445] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.354 [2024-07-14 10:37:43.242709] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:28:58.354 [2024-07-14 10:37:43.242738] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:28:58.354 [2024-07-14 10:37:43.242945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.354 [2024-07-14 10:37:43.242958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10376b0 with addr=10.0.0.2, port=4420 00:28:58.354 [2024-07-14 10:37:43.242966] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10376b0 is same with the state(5) to be set 00:28:58.354 [2024-07-14 10:37:43.243932] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:28:58.354 [2024-07-14 10:37:43.244176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.354 [2024-07-14 10:37:43.244192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f610 with addr=10.0.0.2, port=4420 00:28:58.354 [2024-07-14 10:37:43.244200] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2f610 is same with the state(5) to be set 00:28:58.354 [2024-07-14 10:37:43.244346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.354 [2024-07-14 10:37:43.244357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11f9a20 with addr=10.0.0.2, port=4420 00:28:58.354 [2024-07-14 10:37:43.244363] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f9a20 is same with the state(5) to be set 00:28:58.354 [2024-07-14 10:37:43.244374] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x10376b0 (9): Bad file descriptor 00:28:58.354 [2024-07-14 10:37:43.244435] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:58.354 [2024-07-14 10:37:43.244483] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:58.354 [2024-07-14 10:37:43.244526] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:58.354 [2024-07-14 10:37:43.244568] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:58.354 [2024-07-14 10:37:43.244620] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:58.355 [2024-07-14 10:37:43.244663] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:58.355 [2024-07-14 10:37:43.244981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.355 [2024-07-14 10:37:43.244995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec9d0 with addr=10.0.0.2, port=4420 00:28:58.355 [2024-07-14 10:37:43.245002] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec9d0 is same with the state(5) to be set 00:28:58.355 [2024-07-14 10:37:43.245012] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2f610 (9): Bad file descriptor 00:28:58.355 [2024-07-14 10:37:43.245021] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11f9a20 (9): Bad file descriptor 00:28:58.355 [2024-07-14 10:37:43.245029] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.355 [2024-07-14 10:37:43.245036] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.355 [2024-07-14 10:37:43.245044] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.355 [2024-07-14 10:37:43.245360] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:58.355 [2024-07-14 10:37:43.245375] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec9d0 (9): Bad file descriptor 00:28:58.355 [2024-07-14 10:37:43.245383] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:28:58.355 [2024-07-14 10:37:43.245389] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:28:58.355 [2024-07-14 10:37:43.245396] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:28:58.355 [2024-07-14 10:37:43.245407] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:28:58.355 [2024-07-14 10:37:43.245413] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:28:58.355 [2024-07-14 10:37:43.245419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:28:58.355 [2024-07-14 10:37:43.245462] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:58.355 [2024-07-14 10:37:43.245469] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.355 [2024-07-14 10:37:43.245475] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:28:58.355 [2024-07-14 10:37:43.245481] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:28:58.355 [2024-07-14 10:37:43.245487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:28:58.355 [2024-07-14 10:37:43.245528] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:58.355 [2024-07-14 10:37:43.246892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.355 [2024-07-14 10:37:43.246906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.355 [2024-07-14 10:37:43.246918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.355 [2024-07-14 10:37:43.246925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.355 [2024-07-14 10:37:43.246937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.355 [2024-07-14 10:37:43.246943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.355 [2024-07-14 10:37:43.246951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.355 [2024-07-14 10:37:43.246958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.355 [2024-07-14 10:37:43.246966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.355 [2024-07-14 10:37:43.246972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.355 [2024-07-14 10:37:43.246980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.355 [2024-07-14 10:37:43.246987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.355 [2024-07-14 10:37:43.246995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.355 [2024-07-14 10:37:43.247001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.355 [2024-07-14 10:37:43.247009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.355 [2024-07-14 10:37:43.247015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.355 [2024-07-14 10:37:43.247023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.355 [2024-07-14 10:37:43.247030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.355 [2024-07-14 10:37:43.247038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.355 [2024-07-14 10:37:43.247044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.355 [2024-07-14 10:37:43.247052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.355 [2024-07-14 10:37:43.247059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.355 [2024-07-14 10:37:43.247067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.355 [2024-07-14 10:37:43.247073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.355 [2024-07-14 10:37:43.247081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.355 [2024-07-14 10:37:43.247087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.355 [2024-07-14 10:37:43.247095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.355 [2024-07-14 10:37:43.247102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.355 [2024-07-14 10:37:43.247109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.355 [2024-07-14 10:37:43.247117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.355 [2024-07-14 10:37:43.247125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.355 [2024-07-14 10:37:43.247132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.355 [2024-07-14 10:37:43.247139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.355 [2024-07-14 10:37:43.247146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.355 [2024-07-14 10:37:43.247153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.355 [2024-07-14 10:37:43.247160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.355 [2024-07-14 10:37:43.247168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:58.355 [2024-07-14 10:37:43.247174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.355 [2024-07-14 10:37:43.247183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.355 [2024-07-14 10:37:43.247189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.355 [2024-07-14 10:37:43.247197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.355 [2024-07-14 10:37:43.247204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.355 [2024-07-14 10:37:43.247212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.355 [2024-07-14 10:37:43.247218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.355 [2024-07-14 10:37:43.247231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.355 [2024-07-14 10:37:43.247238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.355 [2024-07-14 10:37:43.247246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.355 [2024-07-14 10:37:43.247253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.355 [2024-07-14 10:37:43.247261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.355 [2024-07-14 10:37:43.247267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.356 [2024-07-14 10:37:43.247275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.356 [2024-07-14 10:37:43.247281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.356 [2024-07-14 10:37:43.247290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.356 [2024-07-14 10:37:43.247296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.356 [2024-07-14 10:37:43.247305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.356 [2024-07-14 10:37:43.247312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.356 [2024-07-14 10:37:43.247320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:58.356 [2024-07-14 10:37:43.247326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.356 [2024-07-14 10:37:43.247334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.356 [2024-07-14 10:37:43.247341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.356 [2024-07-14 10:37:43.247349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.356 [2024-07-14 10:37:43.247355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.356 [2024-07-14 10:37:43.247363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.356 [2024-07-14 10:37:43.247369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.356 [2024-07-14 10:37:43.247377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.356 [2024-07-14 10:37:43.247383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.356 [2024-07-14 10:37:43.247391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.356 [2024-07-14 10:37:43.247398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.356 [2024-07-14 10:37:43.247406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.356 [2024-07-14 10:37:43.247412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.356 [2024-07-14 10:37:43.247420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.356 [2024-07-14 10:37:43.247426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.356 [2024-07-14 10:37:43.247434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.356 [2024-07-14 10:37:43.247440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.356 [2024-07-14 10:37:43.247449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.356 [2024-07-14 10:37:43.247456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.356 [2024-07-14 10:37:43.247464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.356 [2024-07-14 
10:37:43.247470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.356 [2024-07-14 10:37:43.247477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.356 [2024-07-14 10:37:43.247487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.356 [2024-07-14 10:37:43.247495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.356 [2024-07-14 10:37:43.247501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.356 [2024-07-14 10:37:43.247509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.356 [2024-07-14 10:37:43.247516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.356 [2024-07-14 10:37:43.247524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.356 [2024-07-14 10:37:43.247530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.356 [2024-07-14 10:37:43.247538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.356 [2024-07-14 10:37:43.247544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.356 [2024-07-14 10:37:43.247553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.356 [2024-07-14 10:37:43.247559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.356 [2024-07-14 10:37:43.247567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.356 [2024-07-14 10:37:43.247573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.356 [2024-07-14 10:37:43.247581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.356 [2024-07-14 10:37:43.247588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.356 [2024-07-14 10:37:43.247596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.356 [2024-07-14 10:37:43.247602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.356 [2024-07-14 10:37:43.247610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.356 [2024-07-14 10:37:43.247616] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.356 [2024-07-14 10:37:43.247624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.356 [2024-07-14 10:37:43.247630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.356 [2024-07-14 10:37:43.247638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.356 [2024-07-14 10:37:43.247644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.356 [2024-07-14 10:37:43.247652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.356 [2024-07-14 10:37:43.247659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.356 [2024-07-14 10:37:43.247668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.356 [2024-07-14 10:37:43.247675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.356 [2024-07-14 10:37:43.247683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.356 [2024-07-14 10:37:43.247690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.356 [2024-07-14 10:37:43.247698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.356 [2024-07-14 10:37:43.247704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.356 [2024-07-14 10:37:43.247712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.356 [2024-07-14 10:37:43.247719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.356 [2024-07-14 10:37:43.247727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.356 [2024-07-14 10:37:43.247733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.356 [2024-07-14 10:37:43.247741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.356 [2024-07-14 10:37:43.247748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.356 [2024-07-14 10:37:43.247756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.356 [2024-07-14 10:37:43.247763] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.356 [2024-07-14 10:37:43.247770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.356 [2024-07-14 10:37:43.247777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.356 [2024-07-14 10:37:43.247785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.356 [2024-07-14 10:37:43.247791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.356 [2024-07-14 10:37:43.247799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.356 [2024-07-14 10:37:43.247805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.356 [2024-07-14 10:37:43.247813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.356 [2024-07-14 10:37:43.247819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.356 [2024-07-14 10:37:43.247828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.356 [2024-07-14 10:37:43.247834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.356 [2024-07-14 10:37:43.247841] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e1340 is same with the state(5) to be set 00:28:58.356 [2024-07-14 10:37:43.248844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.356 [2024-07-14 10:37:43.248855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.356 [2024-07-14 10:37:43.248866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.356 [2024-07-14 10:37:43.248873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.356 [2024-07-14 10:37:43.248881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.356 [2024-07-14 10:37:43.248888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.357 [2024-07-14 10:37:43.248896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.357 [2024-07-14 10:37:43.248903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.357 [2024-07-14 10:37:43.248911] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.357 [2024-07-14 10:37:43.248919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.357 [2024-07-14 10:37:43.248927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.357 [2024-07-14 10:37:43.248934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.357 [2024-07-14 10:37:43.248942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.357 [2024-07-14 10:37:43.248949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.357 [2024-07-14 10:37:43.248957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.357 [2024-07-14 10:37:43.248963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.357 [2024-07-14 10:37:43.248972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.357 [2024-07-14 10:37:43.248979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.357 [2024-07-14 10:37:43.248987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.357 [2024-07-14 10:37:43.248994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.357 [2024-07-14 10:37:43.249002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.357 [2024-07-14 10:37:43.249008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.357 [2024-07-14 10:37:43.249016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.357 [2024-07-14 10:37:43.249023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.357 [2024-07-14 10:37:43.249031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.357 [2024-07-14 10:37:43.249039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.357 [2024-07-14 10:37:43.249048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.357 [2024-07-14 10:37:43.249054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.357 [2024-07-14 10:37:43.249062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 
nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.357 [2024-07-14 10:37:43.249069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.357 [2024-07-14 10:37:43.249078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.357 [2024-07-14 10:37:43.249084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.357 [2024-07-14 10:37:43.249092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.357 [2024-07-14 10:37:43.249099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.357 [2024-07-14 10:37:43.249107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.357 [2024-07-14 10:37:43.249113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.357 [2024-07-14 10:37:43.249121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.357 [2024-07-14 10:37:43.249128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.357 [2024-07-14 10:37:43.249137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.357 [2024-07-14 10:37:43.249143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.357 [2024-07-14 10:37:43.249151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.357 [2024-07-14 10:37:43.249158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.357 [2024-07-14 10:37:43.249166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.357 [2024-07-14 10:37:43.249172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.357 [2024-07-14 10:37:43.249180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.357 [2024-07-14 10:37:43.249186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.357 [2024-07-14 10:37:43.249195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.357 [2024-07-14 10:37:43.249201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.357 [2024-07-14 10:37:43.249209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.357 [2024-07-14 10:37:43.249216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.357 [2024-07-14 10:37:43.249229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.357 [2024-07-14 10:37:43.249236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.357 [2024-07-14 10:37:43.249244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.357 [2024-07-14 10:37:43.249250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.357 [2024-07-14 10:37:43.249259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.357 [2024-07-14 10:37:43.249265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.357 [2024-07-14 10:37:43.249273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.357 [2024-07-14 10:37:43.249279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.357 [2024-07-14 10:37:43.249288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.357 [2024-07-14 10:37:43.249294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.357 [2024-07-14 10:37:43.249302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.357 [2024-07-14 10:37:43.249309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.357 [2024-07-14 10:37:43.249317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.357 [2024-07-14 10:37:43.249323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.357 [2024-07-14 10:37:43.249331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.357 [2024-07-14 10:37:43.249338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.357 [2024-07-14 10:37:43.249345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.357 [2024-07-14 10:37:43.249352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.357 [2024-07-14 10:37:43.249360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:58.357 [2024-07-14 10:37:43.249366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.357 [2024-07-14 10:37:43.249375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.357 [2024-07-14 10:37:43.249381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.357 [2024-07-14 10:37:43.249389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.357 [2024-07-14 10:37:43.249396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.357 [2024-07-14 10:37:43.249404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.357 [2024-07-14 10:37:43.249412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.357 [2024-07-14 10:37:43.249420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.357 [2024-07-14 10:37:43.249426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.357 [2024-07-14 10:37:43.249434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.357 [2024-07-14 10:37:43.249441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.357 [2024-07-14 10:37:43.249449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.357 [2024-07-14 10:37:43.249455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.357 [2024-07-14 10:37:43.249463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.357 [2024-07-14 10:37:43.249469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.357 [2024-07-14 10:37:43.249477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.357 [2024-07-14 10:37:43.249484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.357 [2024-07-14 10:37:43.249492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.357 [2024-07-14 10:37:43.249498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.357 [2024-07-14 10:37:43.249506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:58.357 [2024-07-14 10:37:43.249512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.357 [2024-07-14 10:37:43.249521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.357 [2024-07-14 10:37:43.249527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.358 [2024-07-14 10:37:43.249535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.358 [2024-07-14 10:37:43.249541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.358 [2024-07-14 10:37:43.249552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.358 [2024-07-14 10:37:43.249558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.358 [2024-07-14 10:37:43.249566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.358 [2024-07-14 10:37:43.249572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.358 [2024-07-14 10:37:43.249580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.358 [2024-07-14 10:37:43.249587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.358 [2024-07-14 10:37:43.249596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.358 [2024-07-14 10:37:43.249602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.358 [2024-07-14 10:37:43.249610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.358 [2024-07-14 10:37:43.249616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.358 [2024-07-14 10:37:43.249626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.358 [2024-07-14 10:37:43.249632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.358 [2024-07-14 10:37:43.249641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.358 [2024-07-14 10:37:43.249647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.358 [2024-07-14 10:37:43.249655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.358 [2024-07-14 
10:37:43.249661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.358 [2024-07-14 10:37:43.249669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.358 [2024-07-14 10:37:43.249675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.358 [2024-07-14 10:37:43.249683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.358 [2024-07-14 10:37:43.249689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.358 [2024-07-14 10:37:43.249697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.358 [2024-07-14 10:37:43.249704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.358 [2024-07-14 10:37:43.249712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.358 [2024-07-14 10:37:43.249718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.358 [2024-07-14 10:37:43.249726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.358 [2024-07-14 10:37:43.249732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.358 [2024-07-14 10:37:43.249740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.358 [2024-07-14 10:37:43.249747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.358 [2024-07-14 10:37:43.249754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.358 [2024-07-14 10:37:43.249761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.358 [2024-07-14 10:37:43.249769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.358 [2024-07-14 10:37:43.249776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.358 [2024-07-14 10:37:43.249786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.358 [2024-07-14 10:37:43.249792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.358 [2024-07-14 10:37:43.249799] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e27d0 is same with the state(5) to be set 00:28:58.358 [2024-07-14 10:37:43.250811] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.358 [2024-07-14 10:37:43.250822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.358 [2024-07-14 10:37:43.250833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.358 [2024-07-14 10:37:43.250840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.358 [2024-07-14 10:37:43.250848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.358 [2024-07-14 10:37:43.250855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.358 [2024-07-14 10:37:43.250863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.358 [2024-07-14 10:37:43.250870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.358 [2024-07-14 10:37:43.250878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.358 [2024-07-14 10:37:43.250885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.358 [2024-07-14 10:37:43.250893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.358 [2024-07-14 10:37:43.250900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.358 [2024-07-14 10:37:43.250908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.358 [2024-07-14 10:37:43.250914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.358 [2024-07-14 10:37:43.250923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.358 [2024-07-14 10:37:43.250930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.358 [2024-07-14 10:37:43.250938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.358 [2024-07-14 10:37:43.250945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.358 [2024-07-14 10:37:43.250953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.358 [2024-07-14 10:37:43.250960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.358 [2024-07-14 10:37:43.250968] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.358 [2024-07-14 10:37:43.250975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.358 [2024-07-14 10:37:43.250986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.358 [2024-07-14 10:37:43.250992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.358 [2024-07-14 10:37:43.251000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.358 [2024-07-14 10:37:43.251006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.358 [2024-07-14 10:37:43.251015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.358 [2024-07-14 10:37:43.251022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.358 [2024-07-14 10:37:43.251030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.358 [2024-07-14 10:37:43.251037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.358 [2024-07-14 10:37:43.251045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.358 [2024-07-14 10:37:43.251053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.358 [2024-07-14 10:37:43.251061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.358 [2024-07-14 10:37:43.251067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.358 [2024-07-14 10:37:43.251075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.358 [2024-07-14 10:37:43.251082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.358 [2024-07-14 10:37:43.251091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.358 [2024-07-14 10:37:43.251098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.358 [2024-07-14 10:37:43.251106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.358 [2024-07-14 10:37:43.251113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.358 [2024-07-14 10:37:43.251122] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.358 [2024-07-14 10:37:43.251129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.358 [2024-07-14 10:37:43.251137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.358 [2024-07-14 10:37:43.251143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.358 [2024-07-14 10:37:43.251151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.358 [2024-07-14 10:37:43.251158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.358 [2024-07-14 10:37:43.251166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.358 [2024-07-14 10:37:43.251174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.359 [2024-07-14 10:37:43.251182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.359 [2024-07-14 10:37:43.251189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.359 [2024-07-14 10:37:43.251197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.359 [2024-07-14 10:37:43.251203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.359 [2024-07-14 10:37:43.251212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.359 [2024-07-14 10:37:43.251218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.359 [2024-07-14 10:37:43.251238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.359 [2024-07-14 10:37:43.251246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.359 [2024-07-14 10:37:43.251254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.359 [2024-07-14 10:37:43.251260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.359 [2024-07-14 10:37:43.251268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.359 [2024-07-14 10:37:43.251274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.359 [2024-07-14 10:37:43.251283] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.359 [2024-07-14 10:37:43.251289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.359 [2024-07-14 10:37:43.251297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.359 [2024-07-14 10:37:43.251304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.359 [2024-07-14 10:37:43.251312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.359 [2024-07-14 10:37:43.251318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.359 [2024-07-14 10:37:43.251326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.359 [2024-07-14 10:37:43.251332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.359 [2024-07-14 10:37:43.251340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.359 [2024-07-14 10:37:43.251347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.359 [2024-07-14 10:37:43.251356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.359 [2024-07-14 10:37:43.251362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.359 [2024-07-14 10:37:43.251375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.359 [2024-07-14 10:37:43.251382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.359 [2024-07-14 10:37:43.251390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.359 [2024-07-14 10:37:43.251396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.359 [2024-07-14 10:37:43.251404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.359 [2024-07-14 10:37:43.251412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.359 [2024-07-14 10:37:43.251421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.359 [2024-07-14 10:37:43.251427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.359 [2024-07-14 10:37:43.251436] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.359 [2024-07-14 10:37:43.251442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.359 [2024-07-14 10:37:43.251450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.359 [2024-07-14 10:37:43.251457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.359 [2024-07-14 10:37:43.251464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.359 [2024-07-14 10:37:43.251471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.359 [2024-07-14 10:37:43.251478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.359 [2024-07-14 10:37:43.251485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.359 [2024-07-14 10:37:43.251493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.359 [2024-07-14 10:37:43.251500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.359 [2024-07-14 10:37:43.251508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.359 [2024-07-14 10:37:43.251514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.359 [2024-07-14 10:37:43.251522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.359 [2024-07-14 10:37:43.251529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.359 [2024-07-14 10:37:43.251537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.359 [2024-07-14 10:37:43.251544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.359 [2024-07-14 10:37:43.251552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.359 [2024-07-14 10:37:43.251560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.359 [2024-07-14 10:37:43.251568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.359 [2024-07-14 10:37:43.251574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.359 [2024-07-14 10:37:43.251582] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.359 [2024-07-14 10:37:43.251588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.359 [2024-07-14 10:37:43.251597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.359 [2024-07-14 10:37:43.251603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.359 [2024-07-14 10:37:43.251612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.359 [2024-07-14 10:37:43.251618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.359 [2024-07-14 10:37:43.251626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.359 [2024-07-14 10:37:43.251632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.359 [2024-07-14 10:37:43.251641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.359 [2024-07-14 10:37:43.251647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.359 [2024-07-14 10:37:43.251655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.359 [2024-07-14 10:37:43.251661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.359 [2024-07-14 10:37:43.251669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.359 [2024-07-14 10:37:43.251676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.359 [2024-07-14 10:37:43.251684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.359 [2024-07-14 10:37:43.251690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.359 [2024-07-14 10:37:43.251698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.359 [2024-07-14 10:37:43.251704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.359 [2024-07-14 10:37:43.251712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.359 [2024-07-14 10:37:43.251718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.360 [2024-07-14 10:37:43.251726] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.360 [2024-07-14 10:37:43.251733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.360 [2024-07-14 10:37:43.251742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.360 [2024-07-14 10:37:43.251749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.360 [2024-07-14 10:37:43.251757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.360 [2024-07-14 10:37:43.251763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.360 [2024-07-14 10:37:43.251771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.360 [2024-07-14 10:37:43.251778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.360 [2024-07-14 10:37:43.251784] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e3cf0 is same with the state(5) to be set 00:28:58.360 [2024-07-14 10:37:43.252792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.360 [2024-07-14 10:37:43.252802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.360 [2024-07-14 10:37:43.252812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.360 [2024-07-14 10:37:43.252819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.360 [2024-07-14 10:37:43.252828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.360 [2024-07-14 10:37:43.252837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.360 [2024-07-14 10:37:43.252845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.360 [2024-07-14 10:37:43.252851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.360 [2024-07-14 10:37:43.252860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.360 [2024-07-14 10:37:43.252867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.360 [2024-07-14 10:37:43.252875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.360 [2024-07-14 10:37:43.252881] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.360 [2024-07-14 10:37:43.252890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.360 [2024-07-14 10:37:43.252896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.360 [2024-07-14 10:37:43.252905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.360 [2024-07-14 10:37:43.252911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.360 [2024-07-14 10:37:43.252919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.360 [2024-07-14 10:37:43.252926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.360 [2024-07-14 10:37:43.252936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.360 [2024-07-14 10:37:43.252943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.360 [2024-07-14 10:37:43.252951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.360 [2024-07-14 10:37:43.252957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.360 [2024-07-14 10:37:43.252965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.360 [2024-07-14 10:37:43.252972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.360 [2024-07-14 10:37:43.252979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.360 [2024-07-14 10:37:43.252986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.360 [2024-07-14 10:37:43.252994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.360 [2024-07-14 10:37:43.253000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.360 [2024-07-14 10:37:43.253009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.360 [2024-07-14 10:37:43.253015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.360 [2024-07-14 10:37:43.253023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.360 [2024-07-14 10:37:43.253029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.360 [2024-07-14 10:37:43.253037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.360 [2024-07-14 10:37:43.253044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.360 [2024-07-14 10:37:43.253051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.360 [2024-07-14 10:37:43.253058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.360 [2024-07-14 10:37:43.253066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.360 [2024-07-14 10:37:43.253073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.360 [2024-07-14 10:37:43.253081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.360 [2024-07-14 10:37:43.253087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.360 [2024-07-14 10:37:43.253095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.360 [2024-07-14 10:37:43.253102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.360 [2024-07-14 10:37:43.253110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.360 [2024-07-14 10:37:43.253117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.360 [2024-07-14 10:37:43.253125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.360 [2024-07-14 10:37:43.253132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.360 [2024-07-14 10:37:43.253140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.360 [2024-07-14 10:37:43.253147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.360 [2024-07-14 10:37:43.253155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.360 [2024-07-14 10:37:43.253161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.360 [2024-07-14 10:37:43.253169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.360 [2024-07-14 10:37:43.253175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.360 [2024-07-14 10:37:43.253183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.360 [2024-07-14 10:37:43.253190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.360 [2024-07-14 10:37:43.253197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.360 [2024-07-14 10:37:43.253204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.360 [2024-07-14 10:37:43.253212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.360 [2024-07-14 10:37:43.253219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.360 [2024-07-14 10:37:43.253230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.360 [2024-07-14 10:37:43.253236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.360 [2024-07-14 10:37:43.253245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.360 [2024-07-14 10:37:43.253251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.360 [2024-07-14 10:37:43.253259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.360 [2024-07-14 10:37:43.253266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.360 [2024-07-14 10:37:43.253276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.360 [2024-07-14 10:37:43.253283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.360 [2024-07-14 10:37:43.253291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.360 [2024-07-14 10:37:43.253298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.360 [2024-07-14 10:37:43.253308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.360 [2024-07-14 10:37:43.253316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.360 [2024-07-14 10:37:43.253324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.360 [2024-07-14 10:37:43.253330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:58.360 [2024-07-14 10:37:43.253338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.360 [2024-07-14 10:37:43.253345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.360 [2024-07-14 10:37:43.253353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.360 [2024-07-14 10:37:43.253360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.361 [2024-07-14 10:37:43.259564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.361 [2024-07-14 10:37:43.259578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.361 [2024-07-14 10:37:43.259588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.361 [2024-07-14 10:37:43.259594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.361 [2024-07-14 10:37:43.259603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.361 [2024-07-14 10:37:43.259610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.361 [2024-07-14 10:37:43.259618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.361 [2024-07-14 10:37:43.259624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.361 [2024-07-14 10:37:43.259633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.361 [2024-07-14 10:37:43.259639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.361 [2024-07-14 10:37:43.259647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.361 [2024-07-14 10:37:43.259654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.361 [2024-07-14 10:37:43.259662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.361 [2024-07-14 10:37:43.259669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.361 [2024-07-14 10:37:43.259677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.361 [2024-07-14 10:37:43.259683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:58.361 [2024-07-14 10:37:43.259692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.361 [2024-07-14 10:37:43.259701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.361 [2024-07-14 10:37:43.259710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.361 [2024-07-14 10:37:43.259716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.361 [2024-07-14 10:37:43.259725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.361 [2024-07-14 10:37:43.259732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.361 [2024-07-14 10:37:43.259740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.361 [2024-07-14 10:37:43.259747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.361 [2024-07-14 10:37:43.259755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.361 [2024-07-14 10:37:43.259762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.361 [2024-07-14 10:37:43.259771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.361 [2024-07-14 10:37:43.259777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.361 [2024-07-14 10:37:43.259785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.361 [2024-07-14 10:37:43.259791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.361 [2024-07-14 10:37:43.259799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.361 [2024-07-14 10:37:43.259806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.361 [2024-07-14 10:37:43.259814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.361 [2024-07-14 10:37:43.259821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.361 [2024-07-14 10:37:43.259829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.361 [2024-07-14 10:37:43.259836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.361 [2024-07-14 
10:37:43.259844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.361 [2024-07-14 10:37:43.259850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.361 [2024-07-14 10:37:43.259859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.361 [2024-07-14 10:37:43.259866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.361 [2024-07-14 10:37:43.259874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.361 [2024-07-14 10:37:43.259880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.361 [2024-07-14 10:37:43.259890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.361 [2024-07-14 10:37:43.259897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.361 [2024-07-14 10:37:43.259905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.361 [2024-07-14 10:37:43.259912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.361 [2024-07-14 10:37:43.259920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.361 [2024-07-14 10:37:43.259926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.361 [2024-07-14 10:37:43.259934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.361 [2024-07-14 10:37:43.259941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.361 [2024-07-14 10:37:43.259949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.361 [2024-07-14 10:37:43.259956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.361 [2024-07-14 10:37:43.259963] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1031130 is same with the state(5) to be set 00:28:58.361 [2024-07-14 10:37:43.260982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.361 [2024-07-14 10:37:43.260994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.361 [2024-07-14 10:37:43.261004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.361 [2024-07-14 10:37:43.261010] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.361 [2024-07-14 10:37:43.261019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.361 [2024-07-14 10:37:43.261026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.361 [2024-07-14 10:37:43.261034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.361 [2024-07-14 10:37:43.261041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.361 [2024-07-14 10:37:43.261049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.361 [2024-07-14 10:37:43.261056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.361 [2024-07-14 10:37:43.261064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.361 [2024-07-14 10:37:43.261070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.361 [2024-07-14 10:37:43.261078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.361 [2024-07-14 10:37:43.261085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.361 [2024-07-14 10:37:43.261096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.361 [2024-07-14 10:37:43.261102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.361 [2024-07-14 10:37:43.261111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.361 [2024-07-14 10:37:43.261117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.361 [2024-07-14 10:37:43.261125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.361 [2024-07-14 10:37:43.261131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.361 [2024-07-14 10:37:43.261140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.361 [2024-07-14 10:37:43.261146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.361 [2024-07-14 10:37:43.261154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.361 [2024-07-14 10:37:43.261161] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.361 [2024-07-14 10:37:43.261168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.361 [2024-07-14 10:37:43.261175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.361 [2024-07-14 10:37:43.261183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.361 [2024-07-14 10:37:43.261189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.361 [2024-07-14 10:37:43.261197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.361 [2024-07-14 10:37:43.261204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.361 [2024-07-14 10:37:43.261211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.361 [2024-07-14 10:37:43.261218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.362 [2024-07-14 10:37:43.261229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.362 [2024-07-14 10:37:43.261236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.362 [2024-07-14 10:37:43.261244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.362 [2024-07-14 10:37:43.261250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.362 [2024-07-14 10:37:43.261259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.362 [2024-07-14 10:37:43.261266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.362 [2024-07-14 10:37:43.261274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.362 [2024-07-14 10:37:43.261282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.362 [2024-07-14 10:37:43.261291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.362 [2024-07-14 10:37:43.261297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.362 [2024-07-14 10:37:43.261306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.362 [2024-07-14 10:37:43.261312] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.362 [2024-07-14 10:37:43.261320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.362 [2024-07-14 10:37:43.261327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.362 [2024-07-14 10:37:43.261335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.362 [2024-07-14 10:37:43.261341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.362 [2024-07-14 10:37:43.261349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.362 [2024-07-14 10:37:43.261356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.362 [2024-07-14 10:37:43.261364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.362 [2024-07-14 10:37:43.261370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.362 [2024-07-14 10:37:43.261378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.362 [2024-07-14 10:37:43.261385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.362 [2024-07-14 10:37:43.261394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.362 [2024-07-14 10:37:43.261400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.362 [2024-07-14 10:37:43.261408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.362 [2024-07-14 10:37:43.261414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.362 [2024-07-14 10:37:43.261422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.362 [2024-07-14 10:37:43.261428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.362 [2024-07-14 10:37:43.261437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.362 [2024-07-14 10:37:43.261443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.362 [2024-07-14 10:37:43.261451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.362 [2024-07-14 10:37:43.261457] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.362 [2024-07-14 10:37:43.261467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.362 [2024-07-14 10:37:43.261474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.362 [2024-07-14 10:37:43.261482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.362 [2024-07-14 10:37:43.261489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.362 [2024-07-14 10:37:43.261497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.362 [2024-07-14 10:37:43.261503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.362 [2024-07-14 10:37:43.261511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.362 [2024-07-14 10:37:43.261518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.362 [2024-07-14 10:37:43.261526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.362 [2024-07-14 10:37:43.261532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.362 [2024-07-14 10:37:43.261540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.362 [2024-07-14 10:37:43.261547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.362 [2024-07-14 10:37:43.261555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.362 [2024-07-14 10:37:43.261561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.362 [2024-07-14 10:37:43.261570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.362 [2024-07-14 10:37:43.261576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.362 [2024-07-14 10:37:43.261584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.362 [2024-07-14 10:37:43.261590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.362 [2024-07-14 10:37:43.261598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.362 [2024-07-14 10:37:43.261605] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.362 [2024-07-14 10:37:43.261613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.362 [2024-07-14 10:37:43.261619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.362 [2024-07-14 10:37:43.261627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.362 [2024-07-14 10:37:43.261634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.362 [2024-07-14 10:37:43.261643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.362 [2024-07-14 10:37:43.261650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.362 [2024-07-14 10:37:43.261658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.362 [2024-07-14 10:37:43.261665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.362 [2024-07-14 10:37:43.261673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.362 [2024-07-14 10:37:43.261679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.362 [2024-07-14 10:37:43.261688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.362 [2024-07-14 10:37:43.261694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.362 [2024-07-14 10:37:43.261702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.362 [2024-07-14 10:37:43.261708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.362 [2024-07-14 10:37:43.261716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.362 [2024-07-14 10:37:43.261723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.362 [2024-07-14 10:37:43.261731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.362 [2024-07-14 10:37:43.261737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.362 [2024-07-14 10:37:43.261745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.362 [2024-07-14 10:37:43.261751] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.362 [2024-07-14 10:37:43.261760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.362 [2024-07-14 10:37:43.261766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.362 [2024-07-14 10:37:43.261774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.362 [2024-07-14 10:37:43.261780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.362 [2024-07-14 10:37:43.261788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.362 [2024-07-14 10:37:43.261795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.362 [2024-07-14 10:37:43.261803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.362 [2024-07-14 10:37:43.261809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.362 [2024-07-14 10:37:43.261817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.362 [2024-07-14 10:37:43.261823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.362 [2024-07-14 10:37:43.261831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.362 [2024-07-14 10:37:43.261839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.362 [2024-07-14 10:37:43.261848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.363 [2024-07-14 10:37:43.261854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.363 [2024-07-14 10:37:43.261862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.363 [2024-07-14 10:37:43.261868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.363 [2024-07-14 10:37:43.261876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.363 [2024-07-14 10:37:43.261883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.363 [2024-07-14 10:37:43.261890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.363 [2024-07-14 10:37:43.261897] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.363 [2024-07-14 10:37:43.261905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.363 [2024-07-14 10:37:43.261911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.363 [2024-07-14 10:37:43.261919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.363 [2024-07-14 10:37:43.261925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.363 [2024-07-14 10:37:43.261932] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1033a10 is same with the state(5) to be set 00:28:58.363 [2024-07-14 10:37:43.262927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.363 [2024-07-14 10:37:43.262938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.363 [2024-07-14 10:37:43.262948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.363 [2024-07-14 10:37:43.262955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.363 [2024-07-14 10:37:43.262963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.363 [2024-07-14 10:37:43.262970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.363 [2024-07-14 10:37:43.262978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.363 [2024-07-14 10:37:43.262985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.363 [2024-07-14 10:37:43.262993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.363 [2024-07-14 10:37:43.262999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.363 [2024-07-14 10:37:43.263007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.363 [2024-07-14 10:37:43.263016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.363 [2024-07-14 10:37:43.263025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.363 [2024-07-14 10:37:43.263031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.363 [2024-07-14 10:37:43.263040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.363 [2024-07-14 10:37:43.263046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.363 [2024-07-14 10:37:43.263055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.363 [2024-07-14 10:37:43.263061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.363 [2024-07-14 10:37:43.263069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.363 [2024-07-14 10:37:43.263075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.363 [2024-07-14 10:37:43.263084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.363 [2024-07-14 10:37:43.263090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.363 [2024-07-14 10:37:43.263098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.363 [2024-07-14 10:37:43.263105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.363 [2024-07-14 10:37:43.263113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.363 [2024-07-14 10:37:43.263119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.363 [2024-07-14 10:37:43.263127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.363 [2024-07-14 10:37:43.263134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.363 [2024-07-14 10:37:43.263142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.363 [2024-07-14 10:37:43.263148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.363 [2024-07-14 10:37:43.263156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.363 [2024-07-14 10:37:43.263163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.363 [2024-07-14 10:37:43.263170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.363 [2024-07-14 10:37:43.263177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.363 [2024-07-14 10:37:43.263185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 
nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.363 [2024-07-14 10:37:43.263192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.363 [2024-07-14 10:37:43.263201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.363 [2024-07-14 10:37:43.263207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.363 [2024-07-14 10:37:43.263216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.363 [2024-07-14 10:37:43.263222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.363 [2024-07-14 10:37:43.263234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.363 [2024-07-14 10:37:43.263241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.363 [2024-07-14 10:37:43.263249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.363 [2024-07-14 10:37:43.263256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.363 [2024-07-14 10:37:43.263264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.363 [2024-07-14 10:37:43.263270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.363 [2024-07-14 10:37:43.263278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.363 [2024-07-14 10:37:43.263284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.363 [2024-07-14 10:37:43.263292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.363 [2024-07-14 10:37:43.263299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.363 [2024-07-14 10:37:43.263307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.363 [2024-07-14 10:37:43.263314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.363 [2024-07-14 10:37:43.263322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.363 [2024-07-14 10:37:43.263328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.363 [2024-07-14 10:37:43.263337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.363 [2024-07-14 10:37:43.263343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.363 [2024-07-14 10:37:43.263351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.363 [2024-07-14 10:37:43.263357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.363 [2024-07-14 10:37:43.263365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.363 [2024-07-14 10:37:43.263372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.363 [2024-07-14 10:37:43.263380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.363 [2024-07-14 10:37:43.263390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.364 [2024-07-14 10:37:43.263398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.364 [2024-07-14 10:37:43.263405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.364 [2024-07-14 10:37:43.263413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.364 [2024-07-14 10:37:43.263422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.364 [2024-07-14 10:37:43.263431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.364 [2024-07-14 10:37:43.263437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.364 [2024-07-14 10:37:43.263445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.364 [2024-07-14 10:37:43.263452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.364 [2024-07-14 10:37:43.263460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.364 [2024-07-14 10:37:43.263466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.364 [2024-07-14 10:37:43.263475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.364 [2024-07-14 10:37:43.263481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.364 [2024-07-14 10:37:43.263489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:58.364 [2024-07-14 10:37:43.263496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.364 [2024-07-14 10:37:43.263504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.364 [2024-07-14 10:37:43.263510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.364 [2024-07-14 10:37:43.263518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.364 [2024-07-14 10:37:43.263525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.364 [2024-07-14 10:37:43.263533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.364 [2024-07-14 10:37:43.263539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.364 [2024-07-14 10:37:43.263547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.364 [2024-07-14 10:37:43.263554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.364 [2024-07-14 10:37:43.263562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.364 [2024-07-14 10:37:43.263568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.364 [2024-07-14 10:37:43.263577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.364 [2024-07-14 10:37:43.263584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.364 [2024-07-14 10:37:43.263592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.364 [2024-07-14 10:37:43.263599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.364 [2024-07-14 10:37:43.263607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.364 [2024-07-14 10:37:43.263613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.364 [2024-07-14 10:37:43.263622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.364 [2024-07-14 10:37:43.263628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.364 [2024-07-14 10:37:43.263637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:58.364 [2024-07-14 10:37:43.263644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.364 [2024-07-14 10:37:43.263652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.364 [2024-07-14 10:37:43.263659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.364 [2024-07-14 10:37:43.263667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.364 [2024-07-14 10:37:43.263674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.364 [2024-07-14 10:37:43.263682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.364 [2024-07-14 10:37:43.263689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.364 [2024-07-14 10:37:43.263697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.364 [2024-07-14 10:37:43.263703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.364 [2024-07-14 10:37:43.263713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.364 [2024-07-14 10:37:43.263720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.364 [2024-07-14 10:37:43.263728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.364 [2024-07-14 10:37:43.263734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.364 [2024-07-14 10:37:43.263743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.364 [2024-07-14 10:37:43.263750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.364 [2024-07-14 10:37:43.263758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.364 [2024-07-14 10:37:43.263766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.364 [2024-07-14 10:37:43.263774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.364 [2024-07-14 10:37:43.263781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.364 [2024-07-14 10:37:43.263789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.364 [2024-07-14 
10:37:43.263795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.364 [2024-07-14 10:37:43.263804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.364 [2024-07-14 10:37:43.263810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.364 [2024-07-14 10:37:43.263818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.364 [2024-07-14 10:37:43.263825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.364 [2024-07-14 10:37:43.263833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.364 [2024-07-14 10:37:43.263839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.364 [2024-07-14 10:37:43.263848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.364 [2024-07-14 10:37:43.263854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.364 [2024-07-14 10:37:43.263862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.364 [2024-07-14 10:37:43.263869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.364 [2024-07-14 10:37:43.263877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.364 [2024-07-14 10:37:43.263884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.364 [2024-07-14 10:37:43.263893] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a4f00 is same with the state(5) to be set 00:28:58.364 [2024-07-14 10:37:43.265121] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:28:58.364 [2024-07-14 10:37:43.265139] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:28:58.364 [2024-07-14 10:37:43.265148] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:28:58.364 [2024-07-14 10:37:43.265157] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:28:58.364 [2024-07-14 10:37:43.265241] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:58.364 [2024-07-14 10:37:43.265253] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:28:58.364 [2024-07-14 10:37:43.265319] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:28:58.364 task offset: 27648 on job bdev=Nvme1n1 fails
00:28:58.364
00:28:58.364 Latency(us)
00:28:58.364 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:58.364 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:58.364 Job: Nvme1n1 ended in about 0.92 seconds with error
00:28:58.364 Verification LBA range: start 0x0 length 0x400
00:28:58.364 Nvme1n1 : 0.92 208.93 13.06 69.64 0.00 227484.94 17324.30 217921.45
00:28:58.364 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:58.364 Job: Nvme2n1 ended in about 0.93 seconds with error
00:28:58.364 Verification LBA range: start 0x0 length 0x400
00:28:58.364 Nvme2n1 : 0.93 210.79 13.17 68.83 0.00 222776.65 6553.60 217921.45
00:28:58.364 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:58.364 Job: Nvme3n1 ended in about 0.93 seconds with error
00:28:58.364 Verification LBA range: start 0x0 length 0x400
00:28:58.364 Nvme3n1 : 0.93 206.05 12.88 68.68 0.00 222707.31 14702.86 237069.36
00:28:58.364 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:58.364 Job: Nvme4n1 ended in about 0.93 seconds with error
00:28:58.364 Verification LBA range: start 0x0 length 0x400
00:28:58.364 Nvme4n1 : 0.93 210.97 13.19 68.54 0.00 215136.36 13449.13 216097.84
00:28:58.364 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:58.364 Job: Nvme5n1 ended in about 0.94 seconds with error
00:28:58.364 Verification LBA range: start 0x0 length 0x400
00:28:58.365 Nvme5n1 : 0.94 208.08 13.01 67.95 0.00 214112.25 16868.40 217921.45
00:28:58.365 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:58.365 Job: Nvme6n1 ended in about 0.92 seconds with error
00:28:58.365 Verification LBA range: start 0x0 length 0x400
00:28:58.365 Nvme6n1 : 0.92 208.67 13.04 69.56 0.00 208028.27 33736.79 201508.95
00:28:58.365 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:58.365 Job: Nvme7n1 ended in about 0.94 seconds with error
00:28:58.365 Verification LBA range: start 0x0 length 0x400
00:28:58.365 Nvme7n1 : 0.94 203.41 12.71 67.80 0.00 210062.25 14816.83 235245.75
00:28:58.365 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:58.365 Job: Nvme8n1 ended in about 0.95 seconds with error
00:28:58.365 Verification LBA range: start 0x0 length 0x400
00:28:58.365 Nvme8n1 : 0.95 208.28 13.02 67.66 0.00 202608.10 14930.81 229774.91
00:28:58.365 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:58.365 Job: Nvme9n1 ended in about 0.92 seconds with error
00:28:58.365 Verification LBA range: start 0x0 length 0x400
00:28:58.365 Nvme9n1 : 0.92 208.45 13.03 69.48 0.00 196426.80 19147.91 217009.64
00:28:58.365 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:58.365 Job: Nvme10n1 ended in about 0.92 seconds with error
00:28:58.365 Verification LBA range: start 0x0 length 0x400
00:28:58.365 Nvme10n1 : 0.92 207.84 12.99 69.28 0.00 193213.44 5385.35 233422.14
00:28:58.365 ===================================================================================================================
00:28:58.365 Total : 2081.48 130.09 687.43 0.00 211268.70 5385.35 237069.36
00:28:58.365 [2024-07-14 10:37:43.286820] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd
on non-zero 00:28:58.365 [2024-07-14 10:37:43.286859] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:28:58.365 [2024-07-14 10:37:43.287181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.365 [2024-07-14 10:37:43.287198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1203210 with addr=10.0.0.2, port=4420 00:28:58.365 [2024-07-14 10:37:43.287209] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1203210 is same with the state(5) to be set 00:28:58.365 [2024-07-14 10:37:43.287447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.365 [2024-07-14 10:37:43.287458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc23d70 with addr=10.0.0.2, port=4420 00:28:58.365 [2024-07-14 10:37:43.287469] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc23d70 is same with the state(5) to be set 00:28:58.365 [2024-07-14 10:37:43.287611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.365 [2024-07-14 10:37:43.287621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10463a0 with addr=10.0.0.2, port=4420 00:28:58.365 [2024-07-14 10:37:43.287627] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10463a0 is same with the state(5) to be set 00:28:58.365 [2024-07-14 10:37:43.287830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.365 [2024-07-14 10:37:43.287839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1067c30 with addr=10.0.0.2, port=4420 00:28:58.365 [2024-07-14 10:37:43.287846] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1067c30 is same with the state(5) to be set 00:28:58.365 [2024-07-14 10:37:43.289182] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.365 [2024-07-14 10:37:43.289197] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:28:58.365 [2024-07-14 10:37:43.289205] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:28:58.365 [2024-07-14 10:37:43.289214] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:28:58.365 [2024-07-14 10:37:43.289491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.365 [2024-07-14 10:37:43.289504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11d5b10 with addr=10.0.0.2, port=4420 00:28:58.365 [2024-07-14 10:37:43.289511] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d5b10 is same with the state(5) to be set 00:28:58.365 [2024-07-14 10:37:43.289600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.365 [2024-07-14 10:37:43.289610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11d58c0 with addr=10.0.0.2, port=4420 00:28:58.365 [2024-07-14 10:37:43.289616] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d58c0 is same with the state(5) to be set 00:28:58.365 [2024-07-14 10:37:43.289628] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x1203210 (9): Bad file descriptor 00:28:58.365 [2024-07-14 10:37:43.289639] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc23d70 (9): Bad file descriptor 00:28:58.365 [2024-07-14 10:37:43.289647] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10463a0 (9): Bad file descriptor 00:28:58.365 [2024-07-14 10:37:43.289656] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1067c30 (9): Bad file descriptor 00:28:58.365 [2024-07-14 10:37:43.289689] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:58.365 [2024-07-14 10:37:43.289700] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:58.365 [2024-07-14 10:37:43.289711] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:58.365 [2024-07-14 10:37:43.289721] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:58.365 [2024-07-14 10:37:43.289989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.365 [2024-07-14 10:37:43.290001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10376b0 with addr=10.0.0.2, port=4420 00:28:58.365 [2024-07-14 10:37:43.290008] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10376b0 is same with the state(5) to be set 00:28:58.365 [2024-07-14 10:37:43.290235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.365 [2024-07-14 10:37:43.290245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11f9a20 with addr=10.0.0.2, port=4420 00:28:58.365 [2024-07-14 10:37:43.290255] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f9a20 is same with the state(5) to be set 00:28:58.365 [2024-07-14 10:37:43.290480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.365 [2024-07-14 10:37:43.290490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f610 with addr=10.0.0.2, port=4420 00:28:58.365 [2024-07-14 10:37:43.290497] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2f610 is same with the state(5) to be set 00:28:58.365 [2024-07-14 10:37:43.290720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.365 [2024-07-14 10:37:43.290729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec9d0 with addr=10.0.0.2, port=4420 00:28:58.365 [2024-07-14 10:37:43.290736] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec9d0 is same with the state(5) to be set 00:28:58.365 [2024-07-14 10:37:43.290744] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11d5b10 (9): Bad file descriptor 00:28:58.365 [2024-07-14 10:37:43.290753] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11d58c0 (9): Bad file descriptor 00:28:58.365 [2024-07-14 10:37:43.290761] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:28:58.365 [2024-07-14 10:37:43.290767] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller 
reinitialization failed 00:28:58.365 [2024-07-14 10:37:43.290775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:28:58.365 [2024-07-14 10:37:43.290786] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:28:58.365 [2024-07-14 10:37:43.290792] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:28:58.365 [2024-07-14 10:37:43.290798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:28:58.365 [2024-07-14 10:37:43.290807] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:28:58.365 [2024-07-14 10:37:43.290813] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:28:58.365 [2024-07-14 10:37:43.290819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:28:58.365 [2024-07-14 10:37:43.290829] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:28:58.365 [2024-07-14 10:37:43.290835] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:28:58.365 [2024-07-14 10:37:43.290841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:28:58.365 [2024-07-14 10:37:43.290909] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:58.365 [2024-07-14 10:37:43.290917] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:58.365 [2024-07-14 10:37:43.290923] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:58.365 [2024-07-14 10:37:43.290928] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:58.365 [2024-07-14 10:37:43.290934] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10376b0 (9): Bad file descriptor 00:28:58.365 [2024-07-14 10:37:43.290943] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11f9a20 (9): Bad file descriptor 00:28:58.365 [2024-07-14 10:37:43.290951] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2f610 (9): Bad file descriptor 00:28:58.365 [2024-07-14 10:37:43.290959] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec9d0 (9): Bad file descriptor 00:28:58.365 [2024-07-14 10:37:43.290966] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:28:58.365 [2024-07-14 10:37:43.290974] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:28:58.365 [2024-07-14 10:37:43.290980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 
00:28:58.365 [2024-07-14 10:37:43.290988] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:28:58.365 [2024-07-14 10:37:43.290994] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:28:58.365 [2024-07-14 10:37:43.291000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:28:58.365 [2024-07-14 10:37:43.291022] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:58.365 [2024-07-14 10:37:43.291029] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:58.365 [2024-07-14 10:37:43.291034] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.365 [2024-07-14 10:37:43.291040] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.365 [2024-07-14 10:37:43.291046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.365 [2024-07-14 10:37:43.291054] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:28:58.365 [2024-07-14 10:37:43.291060] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:28:58.365 [2024-07-14 10:37:43.291065] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:28:58.365 [2024-07-14 10:37:43.291074] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:28:58.365 [2024-07-14 10:37:43.291080] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:28:58.365 [2024-07-14 10:37:43.291085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:28:58.366 [2024-07-14 10:37:43.291093] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:28:58.366 [2024-07-14 10:37:43.291099] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:28:58.366 [2024-07-14 10:37:43.291105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:28:58.366 [2024-07-14 10:37:43.291129] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:58.366 [2024-07-14 10:37:43.291136] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:58.366 [2024-07-14 10:37:43.291142] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:58.366 [2024-07-14 10:37:43.291147] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.932 10:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:28:58.932 10:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:28:59.869 10:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 2522229 00:28:59.869 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (2522229) - No such process 00:28:59.869 10:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:28:59.869 10:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:28:59.869 10:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:28:59.869 10:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:59.869 10:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:59.869 10:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:28:59.869 10:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:59.869 10:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:28:59.869 10:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:59.869 10:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:28:59.869 10:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:59.869 10:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:59.869 rmmod nvme_tcp 00:28:59.869 rmmod nvme_fabrics 00:28:59.869 rmmod nvme_keyring 00:28:59.869 10:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:59.869 10:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:28:59.869 10:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:28:59.869 10:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:28:59.869 10:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:59.869 10:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:59.869 10:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:59.869 10:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:59.869 10:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:59.869 10:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:59.869 10:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:59.869 10:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:01.772 10:37:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:01.772 00:29:01.772 real 0m7.169s 00:29:01.772 user 0m16.781s 00:29:01.772 sys 0m1.257s 00:29:01.772 
10:37:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:01.772 10:37:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:01.772 ************************************ 00:29:01.772 END TEST nvmf_shutdown_tc3 00:29:01.772 ************************************ 00:29:02.031 10:37:46 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:29:02.031 10:37:46 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:29:02.031 00:29:02.031 real 0m31.231s 00:29:02.031 user 1m18.019s 00:29:02.031 sys 0m8.454s 00:29:02.031 10:37:46 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:02.031 10:37:46 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:02.031 ************************************ 00:29:02.031 END TEST nvmf_shutdown 00:29:02.031 ************************************ 00:29:02.031 10:37:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:02.031 10:37:46 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:29:02.031 10:37:46 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:02.031 10:37:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:02.031 10:37:46 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:29:02.031 10:37:46 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:02.031 10:37:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:02.031 10:37:46 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:29:02.031 10:37:46 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:02.031 10:37:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:02.031 10:37:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:02.031 10:37:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:02.031 ************************************ 00:29:02.031 START TEST nvmf_multicontroller 00:29:02.031 ************************************ 00:29:02.031 10:37:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:02.031 * Looking for test storage... 
00:29:02.031 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:02.031 10:37:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:02.031 10:37:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:29:02.031 10:37:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:02.031 10:37:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:02.031 10:37:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:02.031 10:37:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:02.031 10:37:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:02.031 10:37:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:02.031 10:37:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:02.031 10:37:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:02.031 10:37:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:02.031 10:37:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:02.031 10:37:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:02.031 10:37:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:02.031 10:37:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:02.031 10:37:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:02.031 10:37:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:02.031 10:37:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:02.031 10:37:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:02.031 10:37:47 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:02.031 10:37:47 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:02.031 10:37:47 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:02.031 10:37:47 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.031 10:37:47 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.031 10:37:47 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.031 10:37:47 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:29:02.031 10:37:47 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.031 10:37:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:29:02.031 10:37:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:02.031 10:37:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:02.031 10:37:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:02.031 10:37:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:02.031 10:37:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:02.031 10:37:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:02.031 10:37:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:02.290 10:37:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:02.290 10:37:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:02.290 10:37:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:02.290 10:37:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:29:02.290 10:37:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:29:02.290 10:37:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:02.290 10:37:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:29:02.290 10:37:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:29:02.290 10:37:47 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:02.290 10:37:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:02.290 10:37:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:02.290 10:37:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:02.290 10:37:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:02.290 10:37:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:02.290 10:37:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:02.290 10:37:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:02.290 10:37:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:02.290 10:37:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:02.290 10:37:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:29:02.290 10:37:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:07.597 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:07.597 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:29:07.597 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:07.597 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:07.597 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:07.597 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:07.597 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:07.597 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:29:07.597 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:07.597 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:29:07.598 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:29:07.598 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:29:07.598 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:29:07.598 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:29:07.598 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:29:07.598 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:07.598 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:07.598 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:07.598 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:07.598 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:07.598 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:07.598 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:07.598 10:37:52 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:07.598 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:07.598 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:07.598 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:07.598 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:07.598 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:07.598 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:07.598 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:07.598 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:07.598 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:07.598 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:07.598 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:07.598 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:07.598 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:07.598 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:07.598 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:07.598 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:07.598 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:07.598 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:07.598 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:07.598 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:07.598 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:07.598 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:07.598 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:07.598 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:07.598 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:07.598 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:07.598 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:07.598 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:07.598 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:07.598 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:07.598 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:07.598 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:07.598 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:07.598 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:29:07.598 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:07.598 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:07.598 Found net devices under 0000:86:00.0: cvl_0_0 00:29:07.598 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:07.598 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:07.598 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:07.598 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:07.598 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:07.598 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:07.598 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:07.598 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:07.598 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:07.598 Found net devices under 0000:86:00.1: cvl_0_1 00:29:07.598 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:07.598 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:07.598 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:29:07.598 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:07.598 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:07.598 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:07.598 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:07.598 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:07.598 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:07.598 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:07.598 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:07.598 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:07.598 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:07.598 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:07.598 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:07.598 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:07.598 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:07.598 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:07.598 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:07.858 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:07.858 10:37:52 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:07.858 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:07.858 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:07.858 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:07.858 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:07.858 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:07.858 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:07.858 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.143 ms 00:29:07.858 00:29:07.858 --- 10.0.0.2 ping statistics --- 00:29:07.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:07.858 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:29:07.858 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:07.858 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:07.858 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:29:07.858 00:29:07.858 --- 10.0.0.1 ping statistics --- 00:29:07.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:07.858 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:29:07.858 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:07.858 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:29:07.858 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:07.858 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:07.858 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:07.858 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:07.858 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:07.858 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:07.858 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:07.858 10:37:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:29:07.858 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:07.858 10:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:07.858 10:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:07.858 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=2526277 00:29:07.858 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 2526277 00:29:07.858 10:37:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:07.858 10:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 2526277 ']' 00:29:07.858 10:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:07.858 10:37:52 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:29:07.858 10:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:07.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:07.858 10:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:07.858 10:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:07.858 [2024-07-14 10:37:52.824557] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:29:07.858 [2024-07-14 10:37:52.824600] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:08.118 EAL: No free 2048 kB hugepages reported on node 1 00:29:08.118 [2024-07-14 10:37:52.882371] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:08.118 [2024-07-14 10:37:52.923317] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:08.118 [2024-07-14 10:37:52.923359] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:08.118 [2024-07-14 10:37:52.923367] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:08.118 [2024-07-14 10:37:52.923374] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:08.118 [2024-07-14 10:37:52.923379] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:08.118 [2024-07-14 10:37:52.923439] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:08.118 [2024-07-14 10:37:52.923545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:08.118 [2024-07-14 10:37:52.923545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:08.118 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:08.118 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:29:08.118 10:37:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:08.118 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:08.118 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:08.118 10:37:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:08.118 10:37:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:08.118 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.118 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:08.118 [2024-07-14 10:37:53.061072] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:08.118 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.118 10:37:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:08.118 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.118 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:08.377 Malloc0 00:29:08.377 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.377 10:37:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:08.377 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.377 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:08.377 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.377 10:37:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:08.377 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.377 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:08.377 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.377 10:37:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:08.377 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.377 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:08.377 [2024-07-14 10:37:53.125299] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:08.377 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.377 
10:37:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:08.377 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.377 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:08.377 [2024-07-14 10:37:53.133237] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:08.377 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.377 10:37:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:08.377 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.377 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:08.377 Malloc1 00:29:08.377 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.377 10:37:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:29:08.377 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.377 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:08.377 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.377 10:37:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:29:08.377 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.377 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:08.377 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.377 10:37:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:08.377 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.377 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:08.377 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.377 10:37:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:29:08.377 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.377 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:08.377 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.377 10:37:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2526434 00:29:08.377 10:37:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:29:08.377 10:37:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:08.377 10:37:53 nvmf_tcp.nvmf_multicontroller 
-- host/multicontroller.sh@47 -- # waitforlisten 2526434 /var/tmp/bdevperf.sock 00:29:08.377 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 2526434 ']' 00:29:08.377 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:08.377 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:08.377 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:08.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:08.377 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:08.377 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:08.636 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:08.636 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:29:08.636 10:37:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:29:08.636 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.636 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:08.636 NVMe0n1 00:29:08.636 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.636 10:37:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:08.636 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.636 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:08.636 10:37:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:29:08.636 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.636 1 00:29:08.636 10:37:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:29:08.636 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:29:08.636 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:29:08.636 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:29:08.636 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:08.636 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:29:08.636 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:08.636 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 
10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:29:08.636 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.636 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:08.636 request: 00:29:08.636 { 00:29:08.636 "name": "NVMe0", 00:29:08.636 "trtype": "tcp", 00:29:08.636 "traddr": "10.0.0.2", 00:29:08.636 "adrfam": "ipv4", 00:29:08.636 "trsvcid": "4420", 00:29:08.636 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:08.636 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:29:08.636 "hostaddr": "10.0.0.2", 00:29:08.636 "hostsvcid": "60000", 00:29:08.636 "prchk_reftag": false, 00:29:08.636 "prchk_guard": false, 00:29:08.636 "hdgst": false, 00:29:08.636 "ddgst": false, 00:29:08.636 "method": "bdev_nvme_attach_controller", 00:29:08.636 "req_id": 1 00:29:08.636 } 00:29:08.636 Got JSON-RPC error response 00:29:08.636 response: 00:29:08.636 { 00:29:08.636 "code": -114, 00:29:08.636 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:29:08.636 } 00:29:08.636 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:29:08.636 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:29:08.636 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:08.636 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:08.636 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:08.636 10:37:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:29:08.636 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:29:08.636 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:29:08.636 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:29:08.636 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:08.636 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:29:08.636 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:08.636 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:29:08.636 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.636 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:08.636 request: 00:29:08.636 { 00:29:08.636 "name": "NVMe0", 00:29:08.636 "trtype": "tcp", 00:29:08.636 "traddr": "10.0.0.2", 00:29:08.636 "adrfam": "ipv4", 00:29:08.636 "trsvcid": "4420", 00:29:08.636 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:08.636 "hostaddr": "10.0.0.2", 00:29:08.636 "hostsvcid": "60000", 00:29:08.636 "prchk_reftag": false, 00:29:08.636 "prchk_guard": false, 00:29:08.636 
"hdgst": false, 00:29:08.636 "ddgst": false, 00:29:08.636 "method": "bdev_nvme_attach_controller", 00:29:08.636 "req_id": 1 00:29:08.636 } 00:29:08.636 Got JSON-RPC error response 00:29:08.636 response: 00:29:08.636 { 00:29:08.636 "code": -114, 00:29:08.636 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:29:08.636 } 00:29:08.636 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:29:08.636 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:29:08.636 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:08.636 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:08.636 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:08.636 10:37:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:29:08.636 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:29:08.636 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:29:08.636 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:29:08.636 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:08.636 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:29:08.636 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:08.636 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:29:08.636 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.636 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:08.636 request: 00:29:08.636 { 00:29:08.636 "name": "NVMe0", 00:29:08.636 "trtype": "tcp", 00:29:08.636 "traddr": "10.0.0.2", 00:29:08.636 "adrfam": "ipv4", 00:29:08.636 "trsvcid": "4420", 00:29:08.636 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:08.636 "hostaddr": "10.0.0.2", 00:29:08.636 "hostsvcid": "60000", 00:29:08.636 "prchk_reftag": false, 00:29:08.636 "prchk_guard": false, 00:29:08.636 "hdgst": false, 00:29:08.636 "ddgst": false, 00:29:08.636 "multipath": "disable", 00:29:08.636 "method": "bdev_nvme_attach_controller", 00:29:08.636 "req_id": 1 00:29:08.636 } 00:29:08.636 Got JSON-RPC error response 00:29:08.636 response: 00:29:08.636 { 00:29:08.636 "code": -114, 00:29:08.636 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:29:08.636 } 00:29:08.636 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:29:08.636 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:29:08.636 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:08.636 10:37:53 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:08.636 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:08.636 10:37:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:29:08.636 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:29:08.636 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:29:08.636 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:29:08.636 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:08.636 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:29:08.636 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:08.636 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:29:08.636 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.636 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:08.894 request: 00:29:08.894 { 00:29:08.894 "name": "NVMe0", 00:29:08.894 "trtype": "tcp", 00:29:08.894 "traddr": "10.0.0.2", 00:29:08.894 "adrfam": "ipv4", 00:29:08.894 "trsvcid": "4420", 00:29:08.894 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:08.894 "hostaddr": "10.0.0.2", 00:29:08.894 "hostsvcid": "60000", 00:29:08.894 "prchk_reftag": false, 00:29:08.894 "prchk_guard": false, 00:29:08.894 "hdgst": false, 00:29:08.894 "ddgst": false, 00:29:08.894 "multipath": "failover", 00:29:08.894 "method": "bdev_nvme_attach_controller", 00:29:08.894 "req_id": 1 00:29:08.894 } 00:29:08.894 Got JSON-RPC error response 00:29:08.894 response: 00:29:08.894 { 00:29:08.894 "code": -114, 00:29:08.894 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:29:08.894 } 00:29:08.894 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:29:08.894 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:29:08.894 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:08.894 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:08.894 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:08.894 10:37:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:08.894 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.894 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:08.894 00:29:08.894 10:37:53 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.894 10:37:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:08.894 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.894 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:08.894 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.894 10:37:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:29:08.894 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.894 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:09.152 00:29:09.152 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:09.152 10:37:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:09.152 10:37:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:29:09.152 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:09.152 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:09.152 10:37:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:09.152 10:37:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:29:09.152 10:37:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:10.087 0 00:29:10.087 10:37:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:29:10.087 10:37:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.087 10:37:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:10.346 10:37:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.346 10:37:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 2526434 00:29:10.346 10:37:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 2526434 ']' 00:29:10.346 10:37:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 2526434 00:29:10.346 10:37:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:29:10.346 10:37:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:10.346 10:37:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2526434 00:29:10.346 10:37:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:10.346 10:37:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:10.346 10:37:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2526434' 00:29:10.346 killing process with pid 2526434 00:29:10.346 10:37:55 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 2526434 00:29:10.346 10:37:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 2526434 00:29:10.346 10:37:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:10.346 10:37:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.346 10:37:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:10.346 10:37:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.346 10:37:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:10.346 10:37:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.346 10:37:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:10.346 10:37:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.346 10:37:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:29:10.346 10:37:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:10.346 10:37:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:29:10.346 10:37:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:29:10.346 10:37:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:29:10.346 10:37:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:29:10.346 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:10.346 [2024-07-14 10:37:53.234498] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:29:10.346 [2024-07-14 10:37:53.234550] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2526434 ] 00:29:10.346 EAL: No free 2048 kB hugepages reported on node 1 00:29:10.346 [2024-07-14 10:37:53.288184] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:10.346 [2024-07-14 10:37:53.328284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:10.346 [2024-07-14 10:37:53.931824] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name d9860cf6-51c8-4ff6-908f-6119f6d7a8af already exists 00:29:10.346 [2024-07-14 10:37:53.931855] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:d9860cf6-51c8-4ff6-908f-6119f6d7a8af alias for bdev NVMe1n1 00:29:10.346 [2024-07-14 10:37:53.931864] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:29:10.346 Running I/O for 1 seconds... 
00:29:10.346 00:29:10.346 Latency(us) 00:29:10.346 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:10.346 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:29:10.346 NVMe0n1 : 1.00 24786.12 96.82 0.00 0.00 5156.98 3376.53 12594.31 00:29:10.346 =================================================================================================================== 00:29:10.346 Total : 24786.12 96.82 0.00 0.00 5156.98 3376.53 12594.31 00:29:10.346 Received shutdown signal, test time was about 1.000000 seconds 00:29:10.346 00:29:10.346 Latency(us) 00:29:10.346 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:10.346 =================================================================================================================== 00:29:10.346 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:10.346 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:10.346 10:37:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:10.605 10:37:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:29:10.605 10:37:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:29:10.605 10:37:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:10.605 10:37:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:29:10.605 10:37:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:10.605 10:37:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:29:10.605 10:37:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:10.605 10:37:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:10.605 rmmod nvme_tcp 00:29:10.605 rmmod nvme_fabrics 00:29:10.605 rmmod nvme_keyring 00:29:10.605 10:37:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:10.605 10:37:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:29:10.605 10:37:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:29:10.605 10:37:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 2526277 ']' 00:29:10.605 10:37:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 2526277 00:29:10.605 10:37:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 2526277 ']' 00:29:10.605 10:37:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 2526277 00:29:10.605 10:37:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:29:10.605 10:37:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:10.605 10:37:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2526277 00:29:10.605 10:37:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:10.605 10:37:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:10.605 10:37:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2526277' 00:29:10.605 killing process with pid 2526277 00:29:10.605 10:37:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 2526277 00:29:10.605 10:37:55 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 2526277 00:29:10.864 10:37:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:10.864 10:37:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:10.864 10:37:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:10.864 10:37:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:10.864 10:37:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:10.864 10:37:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:10.864 10:37:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:10.864 10:37:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:12.802 10:37:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:12.802 00:29:12.802 real 0m10.806s 00:29:12.802 user 0m11.816s 00:29:12.802 sys 0m4.994s 00:29:12.802 10:37:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:12.802 10:37:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:12.802 ************************************ 00:29:12.802 END TEST nvmf_multicontroller 00:29:12.802 ************************************ 00:29:12.802 10:37:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:12.802 10:37:57 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:29:12.802 10:37:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:12.802 10:37:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:12.802 10:37:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:12.802 ************************************ 00:29:12.802 START TEST nvmf_aer 00:29:12.803 ************************************ 00:29:12.803 10:37:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:29:13.062 * Looking for test storage... 
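[Editor's note] The nvmf_multicontroller run that ends above exercises SPDK's bdev_nvme multipath handling through the bdevperf RPC socket: re-attaching an existing controller name with a conflicting host address, or with "-x disable", is expected to fail with JSON-RPC error -114 ("controller already exists"), while adding the target's second listener (port 4421) as an additional path succeeds. The following is a minimal sketch of that flow, not the test script itself; it assumes a bdevperf instance already listening on /var/tmp/bdevperf.sock and a subsystem nqn.2016-06.io.spdk:cnode1 exposed on 10.0.0.2 ports 4420/4421 (addresses and ports as they appear in this log), and it uses scripts/rpc.py directly in place of the test harness's rpc_cmd wrapper.
    #!/usr/bin/env bash
    # Sketch only: paths below mirror this job's workspace layout.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    # First attach creates controller NVMe0 on the primary path (port 4420).
    $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # Repeating the attach with a fixed host address/port, or with multipath
    # disabled, should return -114 as seen in the responses logged above.
    # Attaching the second listener as an extra path for the same controller
    # is the case that succeeds in the log:
    $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # Paths can be torn down again with bdev_nvme_detach_controller, as the
    # test does for NVMe0/NVMe1 before shutting bdevperf down.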
00:29:13.062 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:13.062 10:37:57 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:13.062 10:37:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:29:13.062 10:37:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:13.062 10:37:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:13.062 10:37:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:13.062 10:37:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:13.062 10:37:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:13.062 10:37:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:13.062 10:37:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:13.062 10:37:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:13.062 10:37:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:13.062 10:37:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:13.062 10:37:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:13.062 10:37:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:13.062 10:37:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:13.062 10:37:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:13.062 10:37:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:13.062 10:37:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:13.062 10:37:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:13.062 10:37:57 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:13.062 10:37:57 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:13.062 10:37:57 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:13.062 10:37:57 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:13.062 10:37:57 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:13.062 10:37:57 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:13.062 10:37:57 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:29:13.062 10:37:57 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:13.062 10:37:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:29:13.062 10:37:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:13.062 10:37:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:13.062 10:37:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:13.062 10:37:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:13.062 10:37:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:13.062 10:37:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:13.062 10:37:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:13.062 10:37:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:13.062 10:37:57 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:29:13.062 10:37:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:13.062 10:37:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:13.062 10:37:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:13.062 10:37:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:13.062 10:37:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:13.062 10:37:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:13.062 10:37:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:13.062 10:37:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:13.062 10:37:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:13.062 10:37:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:13.062 10:37:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:29:13.062 10:37:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:19.635 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:19.635 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:29:19.635 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:19.635 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:29:19.635 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:19.635 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:19.636 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 
0x159b)' 00:29:19.636 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:19.636 Found net devices under 0000:86:00.0: cvl_0_0 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:19.636 Found net devices under 0000:86:00.1: cvl_0_1 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:19.636 
10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:19.636 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:19.636 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.163 ms 00:29:19.636 00:29:19.636 --- 10.0.0.2 ping statistics --- 00:29:19.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:19.636 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:19.636 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:19.636 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:29:19.636 00:29:19.636 --- 10.0.0.1 ping statistics --- 00:29:19.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:19.636 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=2530275 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 2530275 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 2530275 ']' 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:19.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:19.636 10:38:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:19.636 [2024-07-14 10:38:03.708393] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:29:19.636 [2024-07-14 10:38:03.708436] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:19.636 EAL: No free 2048 kB hugepages reported on node 1 00:29:19.636 [2024-07-14 10:38:03.776804] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:19.636 [2024-07-14 10:38:03.818572] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:19.636 [2024-07-14 10:38:03.818610] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:29:19.636 [2024-07-14 10:38:03.818617] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:19.636 [2024-07-14 10:38:03.818624] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:19.636 [2024-07-14 10:38:03.818630] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:19.636 [2024-07-14 10:38:03.818677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:19.636 [2024-07-14 10:38:03.818786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:19.637 [2024-07-14 10:38:03.818821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:19.637 [2024-07-14 10:38:03.818822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:19.637 10:38:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:19.637 10:38:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:29:19.637 10:38:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:19.637 10:38:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:19.637 10:38:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:19.637 10:38:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:19.637 10:38:04 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:19.637 10:38:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:19.637 10:38:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:19.637 [2024-07-14 10:38:04.559348] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:19.637 10:38:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:19.637 10:38:04 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:29:19.637 10:38:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:19.637 10:38:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:19.637 Malloc0 00:29:19.637 10:38:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:19.637 10:38:04 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:29:19.637 10:38:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:19.637 10:38:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:19.637 10:38:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:19.637 10:38:04 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:19.637 10:38:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:19.637 10:38:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:19.637 10:38:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:19.637 10:38:04 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:19.637 10:38:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:19.637 10:38:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:19.637 [2024-07-14 10:38:04.610971] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:29:19.637 10:38:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:19.637 10:38:04 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:29:19.896 10:38:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:19.896 10:38:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:19.896 [ 00:29:19.896 { 00:29:19.896 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:19.896 "subtype": "Discovery", 00:29:19.896 "listen_addresses": [], 00:29:19.896 "allow_any_host": true, 00:29:19.896 "hosts": [] 00:29:19.896 }, 00:29:19.896 { 00:29:19.896 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:19.896 "subtype": "NVMe", 00:29:19.896 "listen_addresses": [ 00:29:19.896 { 00:29:19.896 "trtype": "TCP", 00:29:19.896 "adrfam": "IPv4", 00:29:19.896 "traddr": "10.0.0.2", 00:29:19.896 "trsvcid": "4420" 00:29:19.896 } 00:29:19.896 ], 00:29:19.896 "allow_any_host": true, 00:29:19.896 "hosts": [], 00:29:19.896 "serial_number": "SPDK00000000000001", 00:29:19.896 "model_number": "SPDK bdev Controller", 00:29:19.896 "max_namespaces": 2, 00:29:19.896 "min_cntlid": 1, 00:29:19.896 "max_cntlid": 65519, 00:29:19.896 "namespaces": [ 00:29:19.896 { 00:29:19.896 "nsid": 1, 00:29:19.896 "bdev_name": "Malloc0", 00:29:19.897 "name": "Malloc0", 00:29:19.897 "nguid": "519F627A6B1B4AF0BEB3092D5DCBD7A0", 00:29:19.897 "uuid": "519f627a-6b1b-4af0-beb3-092d5dcbd7a0" 00:29:19.897 } 00:29:19.897 ] 00:29:19.897 } 00:29:19.897 ] 00:29:19.897 10:38:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:19.897 10:38:04 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:29:19.897 10:38:04 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:29:19.897 10:38:04 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=2530356 00:29:19.897 10:38:04 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:29:19.897 10:38:04 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:29:19.897 10:38:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:29:19.897 10:38:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:19.897 10:38:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:29:19.897 10:38:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:29:19.897 10:38:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:29:19.897 EAL: No free 2048 kB hugepages reported on node 1 00:29:19.897 10:38:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:19.897 10:38:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:29:19.897 10:38:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:29:19.897 10:38:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:29:19.897 10:38:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:19.897 10:38:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:29:19.897 10:38:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:29:19.897 10:38:04 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:29:19.897 10:38:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:19.897 10:38:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:19.897 Malloc1 00:29:19.897 10:38:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:19.897 10:38:04 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:29:19.897 10:38:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:19.897 10:38:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:20.156 10:38:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:20.156 10:38:04 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:29:20.156 10:38:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:20.156 10:38:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:20.156 Asynchronous Event Request test 00:29:20.156 Attaching to 10.0.0.2 00:29:20.156 Attached to 10.0.0.2 00:29:20.156 Registering asynchronous event callbacks... 00:29:20.156 Starting namespace attribute notice tests for all controllers... 00:29:20.156 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:29:20.156 aer_cb - Changed Namespace 00:29:20.156 Cleaning up... 00:29:20.156 [ 00:29:20.156 { 00:29:20.156 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:20.156 "subtype": "Discovery", 00:29:20.156 "listen_addresses": [], 00:29:20.156 "allow_any_host": true, 00:29:20.156 "hosts": [] 00:29:20.156 }, 00:29:20.156 { 00:29:20.156 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:20.156 "subtype": "NVMe", 00:29:20.156 "listen_addresses": [ 00:29:20.156 { 00:29:20.156 "trtype": "TCP", 00:29:20.156 "adrfam": "IPv4", 00:29:20.156 "traddr": "10.0.0.2", 00:29:20.156 "trsvcid": "4420" 00:29:20.156 } 00:29:20.156 ], 00:29:20.156 "allow_any_host": true, 00:29:20.156 "hosts": [], 00:29:20.156 "serial_number": "SPDK00000000000001", 00:29:20.156 "model_number": "SPDK bdev Controller", 00:29:20.156 "max_namespaces": 2, 00:29:20.156 "min_cntlid": 1, 00:29:20.156 "max_cntlid": 65519, 00:29:20.156 "namespaces": [ 00:29:20.156 { 00:29:20.156 "nsid": 1, 00:29:20.156 "bdev_name": "Malloc0", 00:29:20.156 "name": "Malloc0", 00:29:20.156 "nguid": "519F627A6B1B4AF0BEB3092D5DCBD7A0", 00:29:20.156 "uuid": "519f627a-6b1b-4af0-beb3-092d5dcbd7a0" 00:29:20.156 }, 00:29:20.156 { 00:29:20.156 "nsid": 2, 00:29:20.156 "bdev_name": "Malloc1", 00:29:20.156 "name": "Malloc1", 00:29:20.156 "nguid": "0D5C480DA3E249A083B2BE51CE9A8E31", 00:29:20.156 "uuid": "0d5c480d-a3e2-49a0-83b2-be51ce9a8e31" 00:29:20.156 } 00:29:20.156 ] 00:29:20.156 } 00:29:20.156 ] 00:29:20.156 10:38:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:20.156 10:38:04 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 2530356 00:29:20.156 10:38:04 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:29:20.156 10:38:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:20.156 10:38:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:20.156 10:38:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:20.156 10:38:04 nvmf_tcp.nvmf_aer -- host/aer.sh@46 
-- # rpc_cmd bdev_malloc_delete Malloc1 00:29:20.156 10:38:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:20.156 10:38:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:20.156 10:38:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:20.156 10:38:04 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:20.156 10:38:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:20.156 10:38:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:20.156 10:38:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:20.156 10:38:04 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:29:20.156 10:38:04 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:29:20.156 10:38:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:20.156 10:38:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:29:20.156 10:38:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:20.156 10:38:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:29:20.156 10:38:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:20.156 10:38:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:20.157 rmmod nvme_tcp 00:29:20.157 rmmod nvme_fabrics 00:29:20.157 rmmod nvme_keyring 00:29:20.157 10:38:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:20.157 10:38:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:29:20.157 10:38:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:29:20.157 10:38:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 2530275 ']' 00:29:20.157 10:38:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 2530275 00:29:20.157 10:38:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 2530275 ']' 00:29:20.157 10:38:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 2530275 00:29:20.157 10:38:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:29:20.157 10:38:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:20.157 10:38:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2530275 00:29:20.157 10:38:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:20.157 10:38:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:20.157 10:38:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2530275' 00:29:20.157 killing process with pid 2530275 00:29:20.157 10:38:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 2530275 00:29:20.157 10:38:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 2530275 00:29:20.416 10:38:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:20.416 10:38:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:20.416 10:38:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:20.416 10:38:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:20.416 10:38:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:20.416 10:38:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:20.416 10:38:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:29:20.416 10:38:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:22.953 10:38:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:22.953 00:29:22.953 real 0m9.540s 00:29:22.953 user 0m7.324s 00:29:22.953 sys 0m4.771s 00:29:22.953 10:38:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:22.953 10:38:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:22.953 ************************************ 00:29:22.953 END TEST nvmf_aer 00:29:22.953 ************************************ 00:29:22.953 10:38:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:22.953 10:38:07 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:22.953 10:38:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:22.953 10:38:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:22.953 10:38:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:22.953 ************************************ 00:29:22.953 START TEST nvmf_async_init 00:29:22.953 ************************************ 00:29:22.953 10:38:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:22.953 * Looking for test storage... 00:29:22.953 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:22.953 10:38:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:22.953 10:38:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:29:22.953 10:38:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:22.953 10:38:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:22.953 10:38:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:22.953 10:38:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:22.953 10:38:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:22.953 10:38:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:22.953 10:38:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:22.953 10:38:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:22.953 10:38:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:22.953 10:38:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:22.953 10:38:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:22.953 10:38:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:22.953 10:38:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:22.953 10:38:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:22.953 10:38:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:22.953 10:38:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:22.953 10:38:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:22.953 10:38:07 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:22.953 10:38:07 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:22.953 10:38:07 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:22.953 10:38:07 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:22.953 10:38:07 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:22.953 10:38:07 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:22.953 10:38:07 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:29:22.953 10:38:07 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:22.953 10:38:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:29:22.953 10:38:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:22.953 10:38:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:22.953 10:38:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:22.953 10:38:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:22.953 10:38:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:22.953 10:38:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:29:22.953 10:38:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:22.953 10:38:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:22.953 10:38:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:29:22.953 10:38:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:29:22.953 10:38:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:29:22.953 10:38:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:29:22.953 10:38:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:29:22.953 10:38:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:29:22.953 10:38:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=fe9558fb51d54a34920c80a2b491df17 00:29:22.953 10:38:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:29:22.953 10:38:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:22.953 10:38:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:22.953 10:38:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:22.953 10:38:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:22.953 10:38:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:22.953 10:38:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:22.953 10:38:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:22.953 10:38:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:22.953 10:38:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:22.953 10:38:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:22.953 10:38:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:29:22.953 10:38:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:28.228 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:28.228 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # 
[[ tcp == rdma ]] 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:28.228 Found net devices under 0000:86:00.0: cvl_0_0 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:28.228 Found net devices under 0000:86:00.1: cvl_0_1 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:28.228 
10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:28.228 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:28.487 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:28.487 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:28.487 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:28.487 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:28.487 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.165 ms 00:29:28.487 00:29:28.487 --- 10.0.0.2 ping statistics --- 00:29:28.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:28.487 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:29:28.487 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:28.487 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:28.487 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.057 ms 00:29:28.487 00:29:28.487 --- 10.0.0.1 ping statistics --- 00:29:28.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:28.488 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:29:28.488 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:28.488 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:29:28.488 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:28.488 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:28.488 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:28.488 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:28.488 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:28.488 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:28.488 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:28.488 10:38:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:29:28.488 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:28.488 10:38:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:28.488 10:38:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:28.488 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=2533861 00:29:28.488 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 2533861 00:29:28.488 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x1 00:29:28.488 10:38:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 2533861 ']' 00:29:28.488 10:38:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:28.488 10:38:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:28.488 10:38:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:28.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:28.488 10:38:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:28.488 10:38:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:28.488 [2024-07-14 10:38:13.359414] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:29:28.488 [2024-07-14 10:38:13.359456] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:28.488 EAL: No free 2048 kB hugepages reported on node 1 00:29:28.488 [2024-07-14 10:38:13.430627] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:28.746 [2024-07-14 10:38:13.470418] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:28.746 [2024-07-14 10:38:13.470458] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:28.746 [2024-07-14 10:38:13.470465] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:28.746 [2024-07-14 10:38:13.470470] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:28.746 [2024-07-14 10:38:13.470475] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:28.746 [2024-07-14 10:38:13.470498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:28.746 10:38:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:28.746 10:38:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:29:28.746 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:28.746 10:38:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:28.746 10:38:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:28.746 10:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:28.746 10:38:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:28.746 10:38:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:28.746 10:38:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:28.746 [2024-07-14 10:38:13.598430] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:28.746 10:38:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:28.746 10:38:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:29:28.746 10:38:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:28.746 10:38:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:28.746 null0 00:29:28.746 10:38:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:28.746 10:38:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:29:28.746 10:38:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:28.746 10:38:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:28.746 10:38:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:28.746 10:38:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:29:28.746 10:38:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:28.746 10:38:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:28.746 10:38:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:28.746 10:38:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g fe9558fb51d54a34920c80a2b491df17 00:29:28.746 10:38:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:28.746 10:38:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:28.746 10:38:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:28.746 10:38:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:28.746 10:38:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:28.746 10:38:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:28.746 [2024-07-14 10:38:13.638636] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:28.746 10:38:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:29:28.746 10:38:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:29:28.746 10:38:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:28.746 10:38:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:29.005 nvme0n1 00:29:29.005 10:38:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:29.005 10:38:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:29.005 10:38:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:29.005 10:38:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:29.005 [ 00:29:29.005 { 00:29:29.005 "name": "nvme0n1", 00:29:29.005 "aliases": [ 00:29:29.005 "fe9558fb-51d5-4a34-920c-80a2b491df17" 00:29:29.005 ], 00:29:29.005 "product_name": "NVMe disk", 00:29:29.005 "block_size": 512, 00:29:29.005 "num_blocks": 2097152, 00:29:29.005 "uuid": "fe9558fb-51d5-4a34-920c-80a2b491df17", 00:29:29.005 "assigned_rate_limits": { 00:29:29.005 "rw_ios_per_sec": 0, 00:29:29.005 "rw_mbytes_per_sec": 0, 00:29:29.005 "r_mbytes_per_sec": 0, 00:29:29.005 "w_mbytes_per_sec": 0 00:29:29.005 }, 00:29:29.005 "claimed": false, 00:29:29.005 "zoned": false, 00:29:29.005 "supported_io_types": { 00:29:29.005 "read": true, 00:29:29.005 "write": true, 00:29:29.005 "unmap": false, 00:29:29.005 "flush": true, 00:29:29.005 "reset": true, 00:29:29.005 "nvme_admin": true, 00:29:29.005 "nvme_io": true, 00:29:29.005 "nvme_io_md": false, 00:29:29.005 "write_zeroes": true, 00:29:29.005 "zcopy": false, 00:29:29.005 "get_zone_info": false, 00:29:29.005 "zone_management": false, 00:29:29.005 "zone_append": false, 00:29:29.005 "compare": true, 00:29:29.005 "compare_and_write": true, 00:29:29.005 "abort": true, 00:29:29.005 "seek_hole": false, 00:29:29.005 "seek_data": false, 00:29:29.005 "copy": true, 00:29:29.005 "nvme_iov_md": false 00:29:29.005 }, 00:29:29.005 "memory_domains": [ 00:29:29.005 { 00:29:29.005 "dma_device_id": "system", 00:29:29.005 "dma_device_type": 1 00:29:29.005 } 00:29:29.005 ], 00:29:29.005 "driver_specific": { 00:29:29.005 "nvme": [ 00:29:29.005 { 00:29:29.005 "trid": { 00:29:29.005 "trtype": "TCP", 00:29:29.005 "adrfam": "IPv4", 00:29:29.005 "traddr": "10.0.0.2", 00:29:29.005 "trsvcid": "4420", 00:29:29.005 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:29.005 }, 00:29:29.005 "ctrlr_data": { 00:29:29.005 "cntlid": 1, 00:29:29.005 "vendor_id": "0x8086", 00:29:29.005 "model_number": "SPDK bdev Controller", 00:29:29.005 "serial_number": "00000000000000000000", 00:29:29.005 "firmware_revision": "24.09", 00:29:29.005 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:29.005 "oacs": { 00:29:29.005 "security": 0, 00:29:29.005 "format": 0, 00:29:29.005 "firmware": 0, 00:29:29.005 "ns_manage": 0 00:29:29.005 }, 00:29:29.005 "multi_ctrlr": true, 00:29:29.005 "ana_reporting": false 00:29:29.005 }, 00:29:29.005 "vs": { 00:29:29.005 "nvme_version": "1.3" 00:29:29.005 }, 00:29:29.005 "ns_data": { 00:29:29.005 "id": 1, 00:29:29.005 "can_share": true 00:29:29.005 } 00:29:29.005 } 00:29:29.005 ], 00:29:29.005 "mp_policy": "active_passive" 00:29:29.005 } 00:29:29.005 } 00:29:29.005 ] 00:29:29.005 10:38:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:29.005 10:38:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 
00:29:29.005 10:38:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:29.005 10:38:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:29.005 [2024-07-14 10:38:13.903977] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:29.005 [2024-07-14 10:38:13.904032] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe14ce0 (9): Bad file descriptor 00:29:29.264 [2024-07-14 10:38:14.036297] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:29:29.264 10:38:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:29.265 10:38:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:29.265 10:38:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:29.265 10:38:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:29.265 [ 00:29:29.265 { 00:29:29.265 "name": "nvme0n1", 00:29:29.265 "aliases": [ 00:29:29.265 "fe9558fb-51d5-4a34-920c-80a2b491df17" 00:29:29.265 ], 00:29:29.265 "product_name": "NVMe disk", 00:29:29.265 "block_size": 512, 00:29:29.265 "num_blocks": 2097152, 00:29:29.265 "uuid": "fe9558fb-51d5-4a34-920c-80a2b491df17", 00:29:29.265 "assigned_rate_limits": { 00:29:29.265 "rw_ios_per_sec": 0, 00:29:29.265 "rw_mbytes_per_sec": 0, 00:29:29.265 "r_mbytes_per_sec": 0, 00:29:29.265 "w_mbytes_per_sec": 0 00:29:29.265 }, 00:29:29.265 "claimed": false, 00:29:29.265 "zoned": false, 00:29:29.265 "supported_io_types": { 00:29:29.265 "read": true, 00:29:29.265 "write": true, 00:29:29.265 "unmap": false, 00:29:29.265 "flush": true, 00:29:29.265 "reset": true, 00:29:29.265 "nvme_admin": true, 00:29:29.265 "nvme_io": true, 00:29:29.265 "nvme_io_md": false, 00:29:29.265 "write_zeroes": true, 00:29:29.265 "zcopy": false, 00:29:29.265 "get_zone_info": false, 00:29:29.265 "zone_management": false, 00:29:29.265 "zone_append": false, 00:29:29.265 "compare": true, 00:29:29.265 "compare_and_write": true, 00:29:29.265 "abort": true, 00:29:29.265 "seek_hole": false, 00:29:29.265 "seek_data": false, 00:29:29.265 "copy": true, 00:29:29.265 "nvme_iov_md": false 00:29:29.265 }, 00:29:29.265 "memory_domains": [ 00:29:29.265 { 00:29:29.265 "dma_device_id": "system", 00:29:29.265 "dma_device_type": 1 00:29:29.265 } 00:29:29.265 ], 00:29:29.265 "driver_specific": { 00:29:29.265 "nvme": [ 00:29:29.265 { 00:29:29.265 "trid": { 00:29:29.265 "trtype": "TCP", 00:29:29.265 "adrfam": "IPv4", 00:29:29.265 "traddr": "10.0.0.2", 00:29:29.265 "trsvcid": "4420", 00:29:29.265 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:29.265 }, 00:29:29.265 "ctrlr_data": { 00:29:29.265 "cntlid": 2, 00:29:29.265 "vendor_id": "0x8086", 00:29:29.265 "model_number": "SPDK bdev Controller", 00:29:29.265 "serial_number": "00000000000000000000", 00:29:29.265 "firmware_revision": "24.09", 00:29:29.265 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:29.265 "oacs": { 00:29:29.265 "security": 0, 00:29:29.265 "format": 0, 00:29:29.265 "firmware": 0, 00:29:29.265 "ns_manage": 0 00:29:29.265 }, 00:29:29.265 "multi_ctrlr": true, 00:29:29.265 "ana_reporting": false 00:29:29.265 }, 00:29:29.265 "vs": { 00:29:29.265 "nvme_version": "1.3" 00:29:29.265 }, 00:29:29.265 "ns_data": { 00:29:29.265 "id": 1, 00:29:29.265 "can_share": true 00:29:29.265 } 00:29:29.265 } 00:29:29.265 ], 00:29:29.265 "mp_policy": "active_passive" 00:29:29.265 } 00:29:29.265 } 
00:29:29.265 ] 00:29:29.265 10:38:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:29.265 10:38:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:29.265 10:38:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:29.265 10:38:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:29.265 10:38:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:29.265 10:38:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:29:29.265 10:38:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.iI4VEbF42b 00:29:29.265 10:38:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:29:29.265 10:38:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.iI4VEbF42b 00:29:29.265 10:38:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:29:29.265 10:38:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:29.265 10:38:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:29.265 10:38:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:29.265 10:38:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:29:29.265 10:38:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:29.265 10:38:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:29.265 [2024-07-14 10:38:14.096568] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:29.265 [2024-07-14 10:38:14.096669] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:29.265 10:38:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:29.265 10:38:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.iI4VEbF42b 00:29:29.265 10:38:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:29.265 10:38:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:29.265 [2024-07-14 10:38:14.104586] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:29:29.265 10:38:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:29.265 10:38:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.iI4VEbF42b 00:29:29.265 10:38:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:29.265 10:38:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:29.265 [2024-07-14 10:38:14.116635] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:29.265 [2024-07-14 10:38:14.116667] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 
00:29:29.265 nvme0n1 00:29:29.265 10:38:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:29.265 10:38:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:29.265 10:38:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:29.265 10:38:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:29.265 [ 00:29:29.265 { 00:29:29.265 "name": "nvme0n1", 00:29:29.265 "aliases": [ 00:29:29.265 "fe9558fb-51d5-4a34-920c-80a2b491df17" 00:29:29.265 ], 00:29:29.265 "product_name": "NVMe disk", 00:29:29.265 "block_size": 512, 00:29:29.265 "num_blocks": 2097152, 00:29:29.265 "uuid": "fe9558fb-51d5-4a34-920c-80a2b491df17", 00:29:29.265 "assigned_rate_limits": { 00:29:29.265 "rw_ios_per_sec": 0, 00:29:29.265 "rw_mbytes_per_sec": 0, 00:29:29.265 "r_mbytes_per_sec": 0, 00:29:29.265 "w_mbytes_per_sec": 0 00:29:29.265 }, 00:29:29.265 "claimed": false, 00:29:29.265 "zoned": false, 00:29:29.265 "supported_io_types": { 00:29:29.265 "read": true, 00:29:29.265 "write": true, 00:29:29.265 "unmap": false, 00:29:29.265 "flush": true, 00:29:29.265 "reset": true, 00:29:29.265 "nvme_admin": true, 00:29:29.265 "nvme_io": true, 00:29:29.265 "nvme_io_md": false, 00:29:29.265 "write_zeroes": true, 00:29:29.265 "zcopy": false, 00:29:29.265 "get_zone_info": false, 00:29:29.265 "zone_management": false, 00:29:29.265 "zone_append": false, 00:29:29.265 "compare": true, 00:29:29.265 "compare_and_write": true, 00:29:29.265 "abort": true, 00:29:29.265 "seek_hole": false, 00:29:29.265 "seek_data": false, 00:29:29.265 "copy": true, 00:29:29.265 "nvme_iov_md": false 00:29:29.265 }, 00:29:29.265 "memory_domains": [ 00:29:29.265 { 00:29:29.265 "dma_device_id": "system", 00:29:29.265 "dma_device_type": 1 00:29:29.265 } 00:29:29.265 ], 00:29:29.265 "driver_specific": { 00:29:29.265 "nvme": [ 00:29:29.265 { 00:29:29.265 "trid": { 00:29:29.265 "trtype": "TCP", 00:29:29.265 "adrfam": "IPv4", 00:29:29.265 "traddr": "10.0.0.2", 00:29:29.265 "trsvcid": "4421", 00:29:29.265 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:29.265 }, 00:29:29.265 "ctrlr_data": { 00:29:29.265 "cntlid": 3, 00:29:29.265 "vendor_id": "0x8086", 00:29:29.265 "model_number": "SPDK bdev Controller", 00:29:29.265 "serial_number": "00000000000000000000", 00:29:29.265 "firmware_revision": "24.09", 00:29:29.265 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:29.265 "oacs": { 00:29:29.265 "security": 0, 00:29:29.265 "format": 0, 00:29:29.265 "firmware": 0, 00:29:29.265 "ns_manage": 0 00:29:29.265 }, 00:29:29.265 "multi_ctrlr": true, 00:29:29.265 "ana_reporting": false 00:29:29.265 }, 00:29:29.265 "vs": { 00:29:29.265 "nvme_version": "1.3" 00:29:29.265 }, 00:29:29.265 "ns_data": { 00:29:29.265 "id": 1, 00:29:29.265 "can_share": true 00:29:29.265 } 00:29:29.265 } 00:29:29.265 ], 00:29:29.265 "mp_policy": "active_passive" 00:29:29.265 } 00:29:29.265 } 00:29:29.265 ] 00:29:29.265 10:38:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:29.265 10:38:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:29.265 10:38:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:29.265 10:38:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:29.265 10:38:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:29.265 10:38:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f 
/tmp/tmp.iI4VEbF42b 00:29:29.265 10:38:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:29:29.265 10:38:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:29:29.265 10:38:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:29.265 10:38:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:29:29.265 10:38:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:29.265 10:38:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:29:29.265 10:38:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:29.265 10:38:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:29.265 rmmod nvme_tcp 00:29:29.525 rmmod nvme_fabrics 00:29:29.525 rmmod nvme_keyring 00:29:29.525 10:38:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:29.525 10:38:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:29:29.525 10:38:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:29:29.525 10:38:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 2533861 ']' 00:29:29.525 10:38:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 2533861 00:29:29.525 10:38:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 2533861 ']' 00:29:29.525 10:38:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 2533861 00:29:29.525 10:38:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:29:29.525 10:38:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:29.525 10:38:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2533861 00:29:29.525 10:38:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:29.525 10:38:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:29.525 10:38:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2533861' 00:29:29.525 killing process with pid 2533861 00:29:29.525 10:38:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 2533861 00:29:29.525 [2024-07-14 10:38:14.346914] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:29:29.525 [2024-07-14 10:38:14.346938] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:29:29.525 10:38:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 2533861 00:29:29.785 10:38:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:29.785 10:38:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:29.785 10:38:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:29.785 10:38:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:29.785 10:38:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:29.785 10:38:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:29.785 10:38:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:29.785 10:38:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:29:31.690 10:38:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:31.690 00:29:31.690 real 0m9.186s 00:29:31.690 user 0m2.896s 00:29:31.690 sys 0m4.677s 00:29:31.690 10:38:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:31.690 10:38:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:31.690 ************************************ 00:29:31.690 END TEST nvmf_async_init 00:29:31.690 ************************************ 00:29:31.690 10:38:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:31.690 10:38:16 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:31.690 10:38:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:31.690 10:38:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:31.690 10:38:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:31.690 ************************************ 00:29:31.690 START TEST dma 00:29:31.690 ************************************ 00:29:31.690 10:38:16 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:31.949 * Looking for test storage... 00:29:31.949 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:31.949 10:38:16 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:31.949 10:38:16 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:29:31.949 10:38:16 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:31.949 10:38:16 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:31.949 10:38:16 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:31.949 10:38:16 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:31.949 10:38:16 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:31.949 10:38:16 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:31.950 10:38:16 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:31.950 10:38:16 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:31.950 10:38:16 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:31.950 10:38:16 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:31.950 10:38:16 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:31.950 10:38:16 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:31.950 10:38:16 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:31.950 10:38:16 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:31.950 10:38:16 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:31.950 10:38:16 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:31.950 10:38:16 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:31.950 10:38:16 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:31.950 10:38:16 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:31.950 10:38:16 nvmf_tcp.dma -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:29:31.950 10:38:16 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:31.950 10:38:16 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:31.950 10:38:16 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:31.950 10:38:16 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:29:31.950 10:38:16 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:31.950 10:38:16 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:29:31.950 10:38:16 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:31.950 10:38:16 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:31.950 10:38:16 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:31.950 10:38:16 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:31.950 10:38:16 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:31.950 10:38:16 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:31.950 10:38:16 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:31.950 10:38:16 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:31.950 10:38:16 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:29:31.950 10:38:16 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:29:31.950 00:29:31.950 real 0m0.121s 00:29:31.950 user 0m0.054s 00:29:31.950 sys 0m0.076s 00:29:31.950 10:38:16 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:31.950 10:38:16 nvmf_tcp.dma 
-- common/autotest_common.sh@10 -- # set +x 00:29:31.950 ************************************ 00:29:31.950 END TEST dma 00:29:31.950 ************************************ 00:29:31.950 10:38:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:31.950 10:38:16 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:31.950 10:38:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:31.950 10:38:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:31.950 10:38:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:31.950 ************************************ 00:29:31.950 START TEST nvmf_identify 00:29:31.950 ************************************ 00:29:31.950 10:38:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:31.950 * Looking for test storage... 00:29:31.950 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:31.950 10:38:16 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:32.209 10:38:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:29:32.209 10:38:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:32.209 10:38:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:32.209 10:38:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:32.209 10:38:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:32.209 10:38:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:32.209 10:38:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:32.209 10:38:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:32.209 10:38:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:32.209 10:38:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:32.209 10:38:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:32.209 10:38:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:32.209 10:38:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:32.209 10:38:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:32.209 10:38:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:32.209 10:38:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:32.209 10:38:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:32.209 10:38:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:32.209 10:38:16 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:32.209 10:38:16 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:32.209 10:38:16 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:32.209 10:38:16 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.209 10:38:16 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.209 10:38:16 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.209 10:38:16 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:29:32.209 10:38:16 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.209 10:38:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:29:32.209 10:38:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:32.209 10:38:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:32.210 10:38:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:32.210 10:38:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:32.210 10:38:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:32.210 10:38:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:32.210 10:38:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:32.210 10:38:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:32.210 10:38:16 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:32.210 10:38:16 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:32.210 10:38:16 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:29:32.210 10:38:16 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:32.210 10:38:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:32.210 10:38:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:32.210 10:38:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:32.210 10:38:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:32.210 10:38:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:32.210 10:38:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:32.210 10:38:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:32.210 10:38:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:32.210 10:38:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:32.210 10:38:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:29:32.210 10:38:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:37.483 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:37.483 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:37.483 Found net devices under 0000:86:00.0: cvl_0_0 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:37.483 Found net devices under 0000:86:00.1: cvl_0_1 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:37.483 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:37.742 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:37.742 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:37.742 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:37.742 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:37.742 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:37.742 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:37.742 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:37.742 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:37.742 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:29:37.742 00:29:37.742 --- 10.0.0.2 ping statistics --- 00:29:37.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:37.742 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:29:37.742 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:37.742 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:37.742 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:29:37.742 00:29:37.742 --- 10.0.0.1 ping statistics --- 00:29:37.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:37.742 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:29:37.742 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:37.742 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:29:37.742 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:37.743 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:37.743 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:37.743 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:37.743 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:37.743 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:37.743 10:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:37.743 10:38:22 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:29:37.743 10:38:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:37.743 10:38:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:37.743 10:38:22 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2537630 00:29:37.743 10:38:22 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:37.743 10:38:22 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:37.743 10:38:22 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2537630 00:29:37.743 10:38:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 2537630 ']' 00:29:37.743 10:38:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:37.743 10:38:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:37.743 10:38:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:37.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:37.743 10:38:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:37.743 10:38:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:37.743 [2024-07-14 10:38:22.716095] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:29:37.743 [2024-07-14 10:38:22.716144] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:38.001 EAL: No free 2048 kB hugepages reported on node 1 00:29:38.001 [2024-07-14 10:38:22.785535] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:38.001 [2024-07-14 10:38:22.828321] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:38.001 [2024-07-14 10:38:22.828359] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:38.001 [2024-07-14 10:38:22.828367] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:38.001 [2024-07-14 10:38:22.828373] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:38.001 [2024-07-14 10:38:22.828379] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:38.001 [2024-07-14 10:38:22.828425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:38.001 [2024-07-14 10:38:22.828536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:38.001 [2024-07-14 10:38:22.828641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:38.001 [2024-07-14 10:38:22.828643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:38.001 10:38:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:38.001 10:38:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:29:38.001 10:38:22 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:38.001 10:38:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:38.001 10:38:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:38.001 [2024-07-14 10:38:22.935198] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:38.001 10:38:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:38.001 10:38:22 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:29:38.001 10:38:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:38.001 10:38:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:38.001 10:38:22 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:38.001 10:38:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:38.001 10:38:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:38.262 Malloc0 00:29:38.262 10:38:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:38.262 10:38:22 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:38.262 10:38:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:38.262 10:38:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:38.262 10:38:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:38.262 10:38:23 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid 
ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:29:38.262 10:38:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:38.262 10:38:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:38.262 10:38:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:38.262 10:38:23 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:38.262 10:38:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:38.262 10:38:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:38.262 [2024-07-14 10:38:23.023070] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:38.262 10:38:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:38.262 10:38:23 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:38.262 10:38:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:38.262 10:38:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:38.262 10:38:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:38.262 10:38:23 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:29:38.262 10:38:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:38.262 10:38:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:38.262 [ 00:29:38.262 { 00:29:38.262 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:38.262 "subtype": "Discovery", 00:29:38.262 "listen_addresses": [ 00:29:38.262 { 00:29:38.262 "trtype": "TCP", 00:29:38.262 "adrfam": "IPv4", 00:29:38.262 "traddr": "10.0.0.2", 00:29:38.262 "trsvcid": "4420" 00:29:38.262 } 00:29:38.262 ], 00:29:38.262 "allow_any_host": true, 00:29:38.262 "hosts": [] 00:29:38.262 }, 00:29:38.262 { 00:29:38.262 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:38.262 "subtype": "NVMe", 00:29:38.262 "listen_addresses": [ 00:29:38.262 { 00:29:38.262 "trtype": "TCP", 00:29:38.262 "adrfam": "IPv4", 00:29:38.262 "traddr": "10.0.0.2", 00:29:38.262 "trsvcid": "4420" 00:29:38.262 } 00:29:38.262 ], 00:29:38.262 "allow_any_host": true, 00:29:38.262 "hosts": [], 00:29:38.262 "serial_number": "SPDK00000000000001", 00:29:38.262 "model_number": "SPDK bdev Controller", 00:29:38.262 "max_namespaces": 32, 00:29:38.262 "min_cntlid": 1, 00:29:38.262 "max_cntlid": 65519, 00:29:38.262 "namespaces": [ 00:29:38.262 { 00:29:38.262 "nsid": 1, 00:29:38.262 "bdev_name": "Malloc0", 00:29:38.262 "name": "Malloc0", 00:29:38.262 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:29:38.262 "eui64": "ABCDEF0123456789", 00:29:38.262 "uuid": "9d95d0b9-b432-41ac-9cfe-7822c6ec2259" 00:29:38.262 } 00:29:38.262 ] 00:29:38.262 } 00:29:38.262 ] 00:29:38.263 10:38:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:38.263 10:38:23 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:29:38.263 [2024-07-14 10:38:23.074748] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
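[Editor's note] The rpc_cmd invocations above come from host/identify.sh and are the test framework's wrapper around SPDK's scripts/rpc.py talking to the target's default /var/tmp/spdk.sock. Outside the harness, roughly the same configuration and the same identify query look like the sketch below (a recap, not part of the log; serial number, NGUID/EUI64 and addresses are the values from this run, and the transport options simply mirror the trace):

  # Configure the running nvmf_tgt over its RPC socket.
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MB malloc bdev, 512-byte blocks
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_get_subsystems
  # Query the discovery controller from the initiator side, as the test does (-L all enables all debug flags):
  ./build/bin/spdk_nvme_identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
      -L all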
00:29:38.263 [2024-07-14 10:38:23.074795] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2537652 ] 00:29:38.263 EAL: No free 2048 kB hugepages reported on node 1 00:29:38.263 [2024-07-14 10:38:23.105760] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:29:38.263 [2024-07-14 10:38:23.105811] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:38.263 [2024-07-14 10:38:23.105816] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:38.263 [2024-07-14 10:38:23.105826] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:38.263 [2024-07-14 10:38:23.105832] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:38.263 [2024-07-14 10:38:23.106137] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:29:38.263 [2024-07-14 10:38:23.106165] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2158af0 0 00:29:38.263 [2024-07-14 10:38:23.113233] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:38.263 [2024-07-14 10:38:23.113244] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:38.263 [2024-07-14 10:38:23.113249] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:38.263 [2024-07-14 10:38:23.113252] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:38.263 [2024-07-14 10:38:23.113288] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.263 [2024-07-14 10:38:23.113293] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.263 [2024-07-14 10:38:23.113297] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2158af0) 00:29:38.263 [2024-07-14 10:38:23.113309] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:38.263 [2024-07-14 10:38:23.113325] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c5340, cid 0, qid 0 00:29:38.263 [2024-07-14 10:38:23.120235] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.263 [2024-07-14 10:38:23.120242] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.263 [2024-07-14 10:38:23.120246] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.263 [2024-07-14 10:38:23.120250] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c5340) on tqpair=0x2158af0 00:29:38.263 [2024-07-14 10:38:23.120262] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:38.263 [2024-07-14 10:38:23.120269] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:29:38.263 [2024-07-14 10:38:23.120274] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:29:38.263 [2024-07-14 10:38:23.120286] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.263 [2024-07-14 10:38:23.120290] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.263 [2024-07-14 10:38:23.120293] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2158af0) 00:29:38.263 [2024-07-14 10:38:23.120300] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.263 [2024-07-14 10:38:23.120312] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c5340, cid 0, qid 0 00:29:38.263 [2024-07-14 10:38:23.120426] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.263 [2024-07-14 10:38:23.120432] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.263 [2024-07-14 10:38:23.120435] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.263 [2024-07-14 10:38:23.120439] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c5340) on tqpair=0x2158af0 00:29:38.263 [2024-07-14 10:38:23.120443] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:29:38.263 [2024-07-14 10:38:23.120449] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:29:38.263 [2024-07-14 10:38:23.120456] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.263 [2024-07-14 10:38:23.120459] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.263 [2024-07-14 10:38:23.120464] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2158af0) 00:29:38.263 [2024-07-14 10:38:23.120470] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.263 [2024-07-14 10:38:23.120480] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c5340, cid 0, qid 0 00:29:38.263 [2024-07-14 10:38:23.120573] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.263 [2024-07-14 10:38:23.120579] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.263 [2024-07-14 10:38:23.120581] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.263 [2024-07-14 10:38:23.120585] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c5340) on tqpair=0x2158af0 00:29:38.263 [2024-07-14 10:38:23.120589] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:29:38.263 [2024-07-14 10:38:23.120595] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:29:38.263 [2024-07-14 10:38:23.120601] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.263 [2024-07-14 10:38:23.120605] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.263 [2024-07-14 10:38:23.120608] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2158af0) 00:29:38.263 [2024-07-14 10:38:23.120613] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.263 [2024-07-14 10:38:23.120622] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c5340, cid 0, qid 0 00:29:38.263 [2024-07-14 10:38:23.120723] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.263 
[2024-07-14 10:38:23.120729] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.263 [2024-07-14 10:38:23.120732] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.263 [2024-07-14 10:38:23.120735] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c5340) on tqpair=0x2158af0 00:29:38.263 [2024-07-14 10:38:23.120740] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:38.263 [2024-07-14 10:38:23.120747] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.263 [2024-07-14 10:38:23.120751] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.263 [2024-07-14 10:38:23.120754] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2158af0) 00:29:38.263 [2024-07-14 10:38:23.120760] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.263 [2024-07-14 10:38:23.120768] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c5340, cid 0, qid 0 00:29:38.263 [2024-07-14 10:38:23.120831] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.263 [2024-07-14 10:38:23.120837] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.263 [2024-07-14 10:38:23.120840] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.263 [2024-07-14 10:38:23.120843] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c5340) on tqpair=0x2158af0 00:29:38.263 [2024-07-14 10:38:23.120848] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:29:38.263 [2024-07-14 10:38:23.120852] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:29:38.263 [2024-07-14 10:38:23.120858] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:38.263 [2024-07-14 10:38:23.120963] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:29:38.263 [2024-07-14 10:38:23.120967] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:29:38.263 [2024-07-14 10:38:23.120977] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.263 [2024-07-14 10:38:23.120980] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.263 [2024-07-14 10:38:23.120983] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2158af0) 00:29:38.263 [2024-07-14 10:38:23.120989] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.263 [2024-07-14 10:38:23.120998] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c5340, cid 0, qid 0 00:29:38.263 [2024-07-14 10:38:23.121066] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.263 [2024-07-14 10:38:23.121072] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.263 [2024-07-14 10:38:23.121075] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:29:38.263 [2024-07-14 10:38:23.121078] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c5340) on tqpair=0x2158af0 00:29:38.263 [2024-07-14 10:38:23.121083] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:38.263 [2024-07-14 10:38:23.121090] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.263 [2024-07-14 10:38:23.121094] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.263 [2024-07-14 10:38:23.121097] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2158af0) 00:29:38.263 [2024-07-14 10:38:23.121102] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.263 [2024-07-14 10:38:23.121112] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c5340, cid 0, qid 0 00:29:38.263 [2024-07-14 10:38:23.121216] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.263 [2024-07-14 10:38:23.121222] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.263 [2024-07-14 10:38:23.121231] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.263 [2024-07-14 10:38:23.121234] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c5340) on tqpair=0x2158af0 00:29:38.263 [2024-07-14 10:38:23.121238] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:38.263 [2024-07-14 10:38:23.121242] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:29:38.263 [2024-07-14 10:38:23.121249] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:29:38.263 [2024-07-14 10:38:23.121256] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:29:38.263 [2024-07-14 10:38:23.121264] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.263 [2024-07-14 10:38:23.121268] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2158af0) 00:29:38.263 [2024-07-14 10:38:23.121273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.263 [2024-07-14 10:38:23.121283] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c5340, cid 0, qid 0 00:29:38.263 [2024-07-14 10:38:23.121381] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:38.263 [2024-07-14 10:38:23.121387] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:38.263 [2024-07-14 10:38:23.121390] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:38.264 [2024-07-14 10:38:23.121394] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2158af0): datao=0, datal=4096, cccid=0 00:29:38.264 [2024-07-14 10:38:23.121398] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21c5340) on tqpair(0x2158af0): expected_datao=0, payload_size=4096 00:29:38.264 [2024-07-14 10:38:23.121404] nvme_tcp.c: 790:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:29:38.264 [2024-07-14 10:38:23.121420] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:38.264 [2024-07-14 10:38:23.121424] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:38.264 [2024-07-14 10:38:23.162370] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.264 [2024-07-14 10:38:23.162381] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.264 [2024-07-14 10:38:23.162384] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.264 [2024-07-14 10:38:23.162387] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c5340) on tqpair=0x2158af0 00:29:38.264 [2024-07-14 10:38:23.162395] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:29:38.264 [2024-07-14 10:38:23.162402] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:29:38.264 [2024-07-14 10:38:23.162407] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:29:38.264 [2024-07-14 10:38:23.162411] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:29:38.264 [2024-07-14 10:38:23.162415] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:29:38.264 [2024-07-14 10:38:23.162419] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:29:38.264 [2024-07-14 10:38:23.162428] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:29:38.264 [2024-07-14 10:38:23.162435] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.264 [2024-07-14 10:38:23.162438] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.264 [2024-07-14 10:38:23.162442] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2158af0) 00:29:38.264 [2024-07-14 10:38:23.162450] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:38.264 [2024-07-14 10:38:23.162461] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c5340, cid 0, qid 0 00:29:38.264 [2024-07-14 10:38:23.162536] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.264 [2024-07-14 10:38:23.162542] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.264 [2024-07-14 10:38:23.162545] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.264 [2024-07-14 10:38:23.162549] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c5340) on tqpair=0x2158af0 00:29:38.264 [2024-07-14 10:38:23.162555] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.264 [2024-07-14 10:38:23.162559] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.264 [2024-07-14 10:38:23.162562] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2158af0) 00:29:38.264 [2024-07-14 10:38:23.162567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:38.264 [2024-07-14 10:38:23.162572] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.264 [2024-07-14 10:38:23.162575] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.264 [2024-07-14 10:38:23.162578] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2158af0) 00:29:38.264 [2024-07-14 10:38:23.162583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:38.264 [2024-07-14 10:38:23.162588] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.264 [2024-07-14 10:38:23.162591] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.264 [2024-07-14 10:38:23.162594] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2158af0) 00:29:38.264 [2024-07-14 10:38:23.162598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:38.264 [2024-07-14 10:38:23.162605] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.264 [2024-07-14 10:38:23.162609] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.264 [2024-07-14 10:38:23.162612] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2158af0) 00:29:38.264 [2024-07-14 10:38:23.162617] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:38.264 [2024-07-14 10:38:23.162621] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:29:38.264 [2024-07-14 10:38:23.162631] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:38.264 [2024-07-14 10:38:23.162636] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.264 [2024-07-14 10:38:23.162640] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2158af0) 00:29:38.264 [2024-07-14 10:38:23.162645] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.264 [2024-07-14 10:38:23.162656] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c5340, cid 0, qid 0 00:29:38.264 [2024-07-14 10:38:23.162660] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c54c0, cid 1, qid 0 00:29:38.264 [2024-07-14 10:38:23.162664] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c5640, cid 2, qid 0 00:29:38.264 [2024-07-14 10:38:23.162668] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c57c0, cid 3, qid 0 00:29:38.264 [2024-07-14 10:38:23.162672] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c5940, cid 4, qid 0 00:29:38.264 [2024-07-14 10:38:23.162774] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.264 [2024-07-14 10:38:23.162780] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.264 [2024-07-14 10:38:23.162783] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.264 [2024-07-14 10:38:23.162786] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c5940) on tqpair=0x2158af0 00:29:38.264 [2024-07-14 10:38:23.162790] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:29:38.264 [2024-07-14 10:38:23.162795] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:29:38.264 [2024-07-14 10:38:23.162804] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.264 [2024-07-14 10:38:23.162808] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2158af0) 00:29:38.264 [2024-07-14 10:38:23.162813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.264 [2024-07-14 10:38:23.162823] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c5940, cid 4, qid 0 00:29:38.264 [2024-07-14 10:38:23.162896] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:38.264 [2024-07-14 10:38:23.162902] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:38.264 [2024-07-14 10:38:23.162905] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:38.264 [2024-07-14 10:38:23.162908] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2158af0): datao=0, datal=4096, cccid=4 00:29:38.264 [2024-07-14 10:38:23.162912] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21c5940) on tqpair(0x2158af0): expected_datao=0, payload_size=4096 00:29:38.264 [2024-07-14 10:38:23.162916] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.264 [2024-07-14 10:38:23.162921] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:38.264 [2024-07-14 10:38:23.162925] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:38.264 [2024-07-14 10:38:23.162958] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.264 [2024-07-14 10:38:23.162963] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.264 [2024-07-14 10:38:23.162966] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.264 [2024-07-14 10:38:23.162969] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c5940) on tqpair=0x2158af0 00:29:38.264 [2024-07-14 10:38:23.162982] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:29:38.264 [2024-07-14 10:38:23.163005] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.264 [2024-07-14 10:38:23.163009] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2158af0) 00:29:38.264 [2024-07-14 10:38:23.163014] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.264 [2024-07-14 10:38:23.163020] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.264 [2024-07-14 10:38:23.163023] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.264 [2024-07-14 10:38:23.163026] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2158af0) 00:29:38.264 [2024-07-14 10:38:23.163031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:38.264 [2024-07-14 10:38:23.163044] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x21c5940, cid 4, qid 0 00:29:38.264 [2024-07-14 10:38:23.163049] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c5ac0, cid 5, qid 0 00:29:38.264 [2024-07-14 10:38:23.163145] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:38.264 [2024-07-14 10:38:23.163151] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:38.264 [2024-07-14 10:38:23.163154] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:38.264 [2024-07-14 10:38:23.163157] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2158af0): datao=0, datal=1024, cccid=4 00:29:38.264 [2024-07-14 10:38:23.163161] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21c5940) on tqpair(0x2158af0): expected_datao=0, payload_size=1024 00:29:38.264 [2024-07-14 10:38:23.163164] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.264 [2024-07-14 10:38:23.163170] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:38.264 [2024-07-14 10:38:23.163173] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:38.264 [2024-07-14 10:38:23.163178] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.264 [2024-07-14 10:38:23.163182] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.264 [2024-07-14 10:38:23.163185] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.264 [2024-07-14 10:38:23.163188] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c5ac0) on tqpair=0x2158af0 00:29:38.264 [2024-07-14 10:38:23.207232] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.264 [2024-07-14 10:38:23.207243] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.264 [2024-07-14 10:38:23.207246] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.264 [2024-07-14 10:38:23.207250] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c5940) on tqpair=0x2158af0 00:29:38.264 [2024-07-14 10:38:23.207262] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.264 [2024-07-14 10:38:23.207265] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2158af0) 00:29:38.264 [2024-07-14 10:38:23.207273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.264 [2024-07-14 10:38:23.207289] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c5940, cid 4, qid 0 00:29:38.264 [2024-07-14 10:38:23.207479] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:38.264 [2024-07-14 10:38:23.207484] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:38.264 [2024-07-14 10:38:23.207490] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:38.264 [2024-07-14 10:38:23.207493] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2158af0): datao=0, datal=3072, cccid=4 00:29:38.264 [2024-07-14 10:38:23.207497] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21c5940) on tqpair(0x2158af0): expected_datao=0, payload_size=3072 00:29:38.265 [2024-07-14 10:38:23.207501] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.265 [2024-07-14 10:38:23.207506] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:38.265 [2024-07-14 10:38:23.207510] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:38.265 [2024-07-14 10:38:23.207534] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.265 [2024-07-14 10:38:23.207539] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.265 [2024-07-14 10:38:23.207542] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.265 [2024-07-14 10:38:23.207546] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c5940) on tqpair=0x2158af0 00:29:38.265 [2024-07-14 10:38:23.207553] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.265 [2024-07-14 10:38:23.207556] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2158af0) 00:29:38.265 [2024-07-14 10:38:23.207561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.265 [2024-07-14 10:38:23.207574] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c5940, cid 4, qid 0 00:29:38.265 [2024-07-14 10:38:23.207649] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:38.265 [2024-07-14 10:38:23.207654] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:38.265 [2024-07-14 10:38:23.207657] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:38.265 [2024-07-14 10:38:23.207660] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2158af0): datao=0, datal=8, cccid=4 00:29:38.265 [2024-07-14 10:38:23.207664] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21c5940) on tqpair(0x2158af0): expected_datao=0, payload_size=8 00:29:38.265 [2024-07-14 10:38:23.207667] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.265 [2024-07-14 10:38:23.207673] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:38.265 [2024-07-14 10:38:23.207676] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:38.530 [2024-07-14 10:38:23.248389] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.530 [2024-07-14 10:38:23.248404] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.530 [2024-07-14 10:38:23.248407] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.530 [2024-07-14 10:38:23.248411] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c5940) on tqpair=0x2158af0 00:29:38.530 ===================================================== 00:29:38.530 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:29:38.530 ===================================================== 00:29:38.530 Controller Capabilities/Features 00:29:38.530 ================================ 00:29:38.530 Vendor ID: 0000 00:29:38.530 Subsystem Vendor ID: 0000 00:29:38.530 Serial Number: .................... 00:29:38.530 Model Number: ........................................ 
00:29:38.530 Firmware Version: 24.09 00:29:38.530 Recommended Arb Burst: 0 00:29:38.530 IEEE OUI Identifier: 00 00 00 00:29:38.530 Multi-path I/O 00:29:38.530 May have multiple subsystem ports: No 00:29:38.530 May have multiple controllers: No 00:29:38.530 Associated with SR-IOV VF: No 00:29:38.530 Max Data Transfer Size: 131072 00:29:38.530 Max Number of Namespaces: 0 00:29:38.530 Max Number of I/O Queues: 1024 00:29:38.530 NVMe Specification Version (VS): 1.3 00:29:38.530 NVMe Specification Version (Identify): 1.3 00:29:38.530 Maximum Queue Entries: 128 00:29:38.530 Contiguous Queues Required: Yes 00:29:38.530 Arbitration Mechanisms Supported 00:29:38.530 Weighted Round Robin: Not Supported 00:29:38.530 Vendor Specific: Not Supported 00:29:38.530 Reset Timeout: 15000 ms 00:29:38.530 Doorbell Stride: 4 bytes 00:29:38.530 NVM Subsystem Reset: Not Supported 00:29:38.530 Command Sets Supported 00:29:38.530 NVM Command Set: Supported 00:29:38.530 Boot Partition: Not Supported 00:29:38.530 Memory Page Size Minimum: 4096 bytes 00:29:38.530 Memory Page Size Maximum: 4096 bytes 00:29:38.530 Persistent Memory Region: Not Supported 00:29:38.530 Optional Asynchronous Events Supported 00:29:38.530 Namespace Attribute Notices: Not Supported 00:29:38.530 Firmware Activation Notices: Not Supported 00:29:38.530 ANA Change Notices: Not Supported 00:29:38.530 PLE Aggregate Log Change Notices: Not Supported 00:29:38.530 LBA Status Info Alert Notices: Not Supported 00:29:38.530 EGE Aggregate Log Change Notices: Not Supported 00:29:38.530 Normal NVM Subsystem Shutdown event: Not Supported 00:29:38.530 Zone Descriptor Change Notices: Not Supported 00:29:38.530 Discovery Log Change Notices: Supported 00:29:38.530 Controller Attributes 00:29:38.530 128-bit Host Identifier: Not Supported 00:29:38.530 Non-Operational Permissive Mode: Not Supported 00:29:38.530 NVM Sets: Not Supported 00:29:38.530 Read Recovery Levels: Not Supported 00:29:38.530 Endurance Groups: Not Supported 00:29:38.530 Predictable Latency Mode: Not Supported 00:29:38.530 Traffic Based Keep ALive: Not Supported 00:29:38.530 Namespace Granularity: Not Supported 00:29:38.530 SQ Associations: Not Supported 00:29:38.530 UUID List: Not Supported 00:29:38.530 Multi-Domain Subsystem: Not Supported 00:29:38.530 Fixed Capacity Management: Not Supported 00:29:38.530 Variable Capacity Management: Not Supported 00:29:38.530 Delete Endurance Group: Not Supported 00:29:38.530 Delete NVM Set: Not Supported 00:29:38.530 Extended LBA Formats Supported: Not Supported 00:29:38.530 Flexible Data Placement Supported: Not Supported 00:29:38.530 00:29:38.530 Controller Memory Buffer Support 00:29:38.530 ================================ 00:29:38.530 Supported: No 00:29:38.530 00:29:38.530 Persistent Memory Region Support 00:29:38.530 ================================ 00:29:38.530 Supported: No 00:29:38.530 00:29:38.530 Admin Command Set Attributes 00:29:38.530 ============================ 00:29:38.530 Security Send/Receive: Not Supported 00:29:38.530 Format NVM: Not Supported 00:29:38.530 Firmware Activate/Download: Not Supported 00:29:38.530 Namespace Management: Not Supported 00:29:38.530 Device Self-Test: Not Supported 00:29:38.530 Directives: Not Supported 00:29:38.530 NVMe-MI: Not Supported 00:29:38.530 Virtualization Management: Not Supported 00:29:38.530 Doorbell Buffer Config: Not Supported 00:29:38.530 Get LBA Status Capability: Not Supported 00:29:38.530 Command & Feature Lockdown Capability: Not Supported 00:29:38.530 Abort Command Limit: 1 00:29:38.530 Async 
Event Request Limit: 4 00:29:38.530 Number of Firmware Slots: N/A 00:29:38.530 Firmware Slot 1 Read-Only: N/A 00:29:38.530 Firmware Activation Without Reset: N/A 00:29:38.530 Multiple Update Detection Support: N/A 00:29:38.530 Firmware Update Granularity: No Information Provided 00:29:38.530 Per-Namespace SMART Log: No 00:29:38.530 Asymmetric Namespace Access Log Page: Not Supported 00:29:38.530 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:29:38.530 Command Effects Log Page: Not Supported 00:29:38.530 Get Log Page Extended Data: Supported 00:29:38.530 Telemetry Log Pages: Not Supported 00:29:38.530 Persistent Event Log Pages: Not Supported 00:29:38.530 Supported Log Pages Log Page: May Support 00:29:38.530 Commands Supported & Effects Log Page: Not Supported 00:29:38.530 Feature Identifiers & Effects Log Page:May Support 00:29:38.530 NVMe-MI Commands & Effects Log Page: May Support 00:29:38.530 Data Area 4 for Telemetry Log: Not Supported 00:29:38.530 Error Log Page Entries Supported: 128 00:29:38.530 Keep Alive: Not Supported 00:29:38.530 00:29:38.530 NVM Command Set Attributes 00:29:38.530 ========================== 00:29:38.530 Submission Queue Entry Size 00:29:38.530 Max: 1 00:29:38.530 Min: 1 00:29:38.530 Completion Queue Entry Size 00:29:38.530 Max: 1 00:29:38.530 Min: 1 00:29:38.530 Number of Namespaces: 0 00:29:38.530 Compare Command: Not Supported 00:29:38.530 Write Uncorrectable Command: Not Supported 00:29:38.530 Dataset Management Command: Not Supported 00:29:38.530 Write Zeroes Command: Not Supported 00:29:38.530 Set Features Save Field: Not Supported 00:29:38.530 Reservations: Not Supported 00:29:38.530 Timestamp: Not Supported 00:29:38.530 Copy: Not Supported 00:29:38.530 Volatile Write Cache: Not Present 00:29:38.530 Atomic Write Unit (Normal): 1 00:29:38.530 Atomic Write Unit (PFail): 1 00:29:38.530 Atomic Compare & Write Unit: 1 00:29:38.530 Fused Compare & Write: Supported 00:29:38.530 Scatter-Gather List 00:29:38.530 SGL Command Set: Supported 00:29:38.530 SGL Keyed: Supported 00:29:38.530 SGL Bit Bucket Descriptor: Not Supported 00:29:38.530 SGL Metadata Pointer: Not Supported 00:29:38.530 Oversized SGL: Not Supported 00:29:38.530 SGL Metadata Address: Not Supported 00:29:38.530 SGL Offset: Supported 00:29:38.530 Transport SGL Data Block: Not Supported 00:29:38.530 Replay Protected Memory Block: Not Supported 00:29:38.530 00:29:38.530 Firmware Slot Information 00:29:38.530 ========================= 00:29:38.530 Active slot: 0 00:29:38.530 00:29:38.530 00:29:38.530 Error Log 00:29:38.530 ========= 00:29:38.530 00:29:38.530 Active Namespaces 00:29:38.530 ================= 00:29:38.530 Discovery Log Page 00:29:38.530 ================== 00:29:38.530 Generation Counter: 2 00:29:38.530 Number of Records: 2 00:29:38.530 Record Format: 0 00:29:38.530 00:29:38.530 Discovery Log Entry 0 00:29:38.530 ---------------------- 00:29:38.530 Transport Type: 3 (TCP) 00:29:38.530 Address Family: 1 (IPv4) 00:29:38.530 Subsystem Type: 3 (Current Discovery Subsystem) 00:29:38.530 Entry Flags: 00:29:38.530 Duplicate Returned Information: 1 00:29:38.530 Explicit Persistent Connection Support for Discovery: 1 00:29:38.530 Transport Requirements: 00:29:38.530 Secure Channel: Not Required 00:29:38.530 Port ID: 0 (0x0000) 00:29:38.530 Controller ID: 65535 (0xffff) 00:29:38.530 Admin Max SQ Size: 128 00:29:38.530 Transport Service Identifier: 4420 00:29:38.531 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:29:38.531 Transport Address: 10.0.0.2 00:29:38.531 
Discovery Log Entry 1 00:29:38.531 ---------------------- 00:29:38.531 Transport Type: 3 (TCP) 00:29:38.531 Address Family: 1 (IPv4) 00:29:38.531 Subsystem Type: 2 (NVM Subsystem) 00:29:38.531 Entry Flags: 00:29:38.531 Duplicate Returned Information: 0 00:29:38.531 Explicit Persistent Connection Support for Discovery: 0 00:29:38.531 Transport Requirements: 00:29:38.531 Secure Channel: Not Required 00:29:38.531 Port ID: 0 (0x0000) 00:29:38.531 Controller ID: 65535 (0xffff) 00:29:38.531 Admin Max SQ Size: 128 00:29:38.531 Transport Service Identifier: 4420 00:29:38.531 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:29:38.531 Transport Address: 10.0.0.2 [2024-07-14 10:38:23.248496] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:29:38.531 [2024-07-14 10:38:23.248508] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c5340) on tqpair=0x2158af0 00:29:38.531 [2024-07-14 10:38:23.248515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.531 [2024-07-14 10:38:23.248520] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c54c0) on tqpair=0x2158af0 00:29:38.531 [2024-07-14 10:38:23.248524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.531 [2024-07-14 10:38:23.248528] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c5640) on tqpair=0x2158af0 00:29:38.531 [2024-07-14 10:38:23.248532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.531 [2024-07-14 10:38:23.248536] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c57c0) on tqpair=0x2158af0 00:29:38.531 [2024-07-14 10:38:23.248540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.531 [2024-07-14 10:38:23.248552] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.531 [2024-07-14 10:38:23.248555] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.531 [2024-07-14 10:38:23.248559] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2158af0) 00:29:38.531 [2024-07-14 10:38:23.248567] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.531 [2024-07-14 10:38:23.248581] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c57c0, cid 3, qid 0 00:29:38.531 [2024-07-14 10:38:23.248644] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.531 [2024-07-14 10:38:23.248650] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.531 [2024-07-14 10:38:23.248653] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.531 [2024-07-14 10:38:23.248656] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c57c0) on tqpair=0x2158af0 00:29:38.531 [2024-07-14 10:38:23.248663] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.531 [2024-07-14 10:38:23.248666] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.531 [2024-07-14 10:38:23.248669] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2158af0) 00:29:38.531 [2024-07-14 
10:38:23.248675] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.531 [2024-07-14 10:38:23.248687] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c57c0, cid 3, qid 0 00:29:38.531 [2024-07-14 10:38:23.248763] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.531 [2024-07-14 10:38:23.248769] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.531 [2024-07-14 10:38:23.248772] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.531 [2024-07-14 10:38:23.248775] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c57c0) on tqpair=0x2158af0 00:29:38.531 [2024-07-14 10:38:23.248780] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:29:38.531 [2024-07-14 10:38:23.248783] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:29:38.531 [2024-07-14 10:38:23.248791] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.531 [2024-07-14 10:38:23.248795] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.531 [2024-07-14 10:38:23.248797] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2158af0) 00:29:38.531 [2024-07-14 10:38:23.248803] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.531 [2024-07-14 10:38:23.248812] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c57c0, cid 3, qid 0 00:29:38.531 [2024-07-14 10:38:23.248875] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.531 [2024-07-14 10:38:23.248881] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.531 [2024-07-14 10:38:23.248884] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.531 [2024-07-14 10:38:23.248887] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c57c0) on tqpair=0x2158af0 00:29:38.531 [2024-07-14 10:38:23.248895] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.531 [2024-07-14 10:38:23.248899] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.531 [2024-07-14 10:38:23.248902] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2158af0) 00:29:38.531 [2024-07-14 10:38:23.248907] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.531 [2024-07-14 10:38:23.248916] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c57c0, cid 3, qid 0 00:29:38.531 [2024-07-14 10:38:23.248980] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.531 [2024-07-14 10:38:23.248985] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.531 [2024-07-14 10:38:23.248990] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.531 [2024-07-14 10:38:23.248994] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c57c0) on tqpair=0x2158af0 00:29:38.531 [2024-07-14 10:38:23.249001] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.531 [2024-07-14 10:38:23.249005] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.531 [2024-07-14 10:38:23.249008] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2158af0) 00:29:38.531 [2024-07-14 10:38:23.249013] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.531 [2024-07-14 10:38:23.249023] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c57c0, cid 3, qid 0 00:29:38.531 [2024-07-14 10:38:23.249093] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.531 [2024-07-14 10:38:23.249099] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.531 [2024-07-14 10:38:23.249102] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.531 [2024-07-14 10:38:23.249105] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c57c0) on tqpair=0x2158af0 00:29:38.531 [2024-07-14 10:38:23.249113] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.531 [2024-07-14 10:38:23.249116] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.531 [2024-07-14 10:38:23.249119] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2158af0) 00:29:38.531 [2024-07-14 10:38:23.249125] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.531 [2024-07-14 10:38:23.249134] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c57c0, cid 3, qid 0 00:29:38.531 [2024-07-14 10:38:23.249203] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.531 [2024-07-14 10:38:23.249209] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.531 [2024-07-14 10:38:23.249212] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.531 [2024-07-14 10:38:23.249215] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c57c0) on tqpair=0x2158af0 00:29:38.531 [2024-07-14 10:38:23.249223] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.531 [2024-07-14 10:38:23.249234] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.531 [2024-07-14 10:38:23.249238] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2158af0) 00:29:38.531 [2024-07-14 10:38:23.249243] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.531 [2024-07-14 10:38:23.249253] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c57c0, cid 3, qid 0 00:29:38.531 [2024-07-14 10:38:23.249323] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.531 [2024-07-14 10:38:23.249328] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.531 [2024-07-14 10:38:23.249331] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.531 [2024-07-14 10:38:23.249334] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c57c0) on tqpair=0x2158af0 00:29:38.531 [2024-07-14 10:38:23.249342] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.531 [2024-07-14 10:38:23.249346] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.531 [2024-07-14 10:38:23.249349] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2158af0) 00:29:38.531 [2024-07-14 10:38:23.249354] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.531 [2024-07-14 10:38:23.249363] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c57c0, cid 3, qid 0
00:29:38.531 [2024-07-14 10:38:23.249426] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:38.531 [2024-07-14 10:38:23.249432] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:38.531 [2024-07-14 10:38:23.249434] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:38.531 [2024-07-14 10:38:23.249441] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c57c0) on tqpair=0x2158af0
00:29:38.531 [2024-07-14 10:38:23.249449] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:38.531 [2024-07-14 10:38:23.249453] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:38.531 [2024-07-14 10:38:23.249456] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2158af0)
00:29:38.531 [2024-07-14 10:38:23.249461] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.531 [2024-07-14 10:38:23.249470] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c57c0, cid 3, qid 0
00:29:38.533 [2024-07-14 10:38:23.251082] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:38.533 [2024-07-14 10:38:23.251088] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:38.533 [2024-07-14 10:38:23.251090] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:38.533 [2024-07-14 10:38:23.251094] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c57c0) on tqpair=0x2158af0
00:29:38.533 [2024-07-14 10:38:23.251101] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:38.533 [2024-07-14 10:38:23.251105] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:38.533 [2024-07-14 10:38:23.251108] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2158af0)
00:29:38.533 [2024-07-14 10:38:23.251113] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.533 [2024-07-14 10:38:23.251122] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c57c0, cid 3, qid 0
00:29:38.533 [2024-07-14 10:38:23.251183] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:38.533 [2024-07-14 10:38:23.251189] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:38.533 [2024-07-14 10:38:23.251191] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:38.533 [2024-07-14 10:38:23.251195] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c57c0) on tqpair=0x2158af0
00:29:38.533 [2024-07-14 10:38:23.251203] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:38.533 [2024-07-14 10:38:23.251207] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:38.533 [2024-07-14 10:38:23.251210] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2158af0)
00:29:38.533 [2024-07-14 10:38:23.251218] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.533 [2024-07-14 10:38:23.255233] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21c57c0, cid 3, qid 0
00:29:38.533 [2024-07-14 10:38:23.255383] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:38.533 [2024-07-14 10:38:23.255389] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:38.533 [2024-07-14 10:38:23.255392] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:38.533 [2024-07-14 10:38:23.255395] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21c57c0) on tqpair=0x2158af0
00:29:38.533 [2024-07-14 10:38:23.255402] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds
00:29:38.533
00:29:38.533 10:38:23 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:29:38.533 [2024-07-14 10:38:23.290853] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
00:29:38.533 [2024-07-14 10:38:23.290891] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2537654 ] 00:29:38.533 EAL: No free 2048 kB hugepages reported on node 1 00:29:38.533 [2024-07-14 10:38:23.318462] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:29:38.533 [2024-07-14 10:38:23.318506] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:38.533 [2024-07-14 10:38:23.318510] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:38.533 [2024-07-14 10:38:23.318520] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:38.533 [2024-07-14 10:38:23.318526] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:38.533 [2024-07-14 10:38:23.318725] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:29:38.533 [2024-07-14 10:38:23.318750] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2067af0 0 00:29:38.533 [2024-07-14 10:38:23.325237] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:38.533 [2024-07-14 10:38:23.325247] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:38.533 [2024-07-14 10:38:23.325251] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:38.533 [2024-07-14 10:38:23.325253] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:38.533 [2024-07-14 10:38:23.325280] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.533 [2024-07-14 10:38:23.325285] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.533 [2024-07-14 10:38:23.325288] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2067af0) 00:29:38.533 [2024-07-14 10:38:23.325298] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:38.533 [2024-07-14 10:38:23.325313] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d4340, cid 0, qid 0 00:29:38.533 [2024-07-14 10:38:23.336235] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.533 [2024-07-14 10:38:23.336245] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.533 [2024-07-14 10:38:23.336248] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.533 [2024-07-14 10:38:23.336251] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d4340) on tqpair=0x2067af0 00:29:38.533 [2024-07-14 10:38:23.336263] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:38.533 [2024-07-14 10:38:23.336269] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:29:38.533 [2024-07-14 10:38:23.336273] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:29:38.533 [2024-07-14 10:38:23.336284] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.533 [2024-07-14 10:38:23.336288] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:29:38.533 [2024-07-14 10:38:23.336291] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2067af0) 00:29:38.533 [2024-07-14 10:38:23.336298] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.533 [2024-07-14 10:38:23.336311] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d4340, cid 0, qid 0 00:29:38.533 [2024-07-14 10:38:23.336443] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.533 [2024-07-14 10:38:23.336450] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.533 [2024-07-14 10:38:23.336452] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.533 [2024-07-14 10:38:23.336456] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d4340) on tqpair=0x2067af0 00:29:38.533 [2024-07-14 10:38:23.336460] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:29:38.533 [2024-07-14 10:38:23.336466] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:29:38.533 [2024-07-14 10:38:23.336473] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.533 [2024-07-14 10:38:23.336476] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.533 [2024-07-14 10:38:23.336479] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2067af0) 00:29:38.533 [2024-07-14 10:38:23.336485] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.533 [2024-07-14 10:38:23.336495] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d4340, cid 0, qid 0 00:29:38.533 [2024-07-14 10:38:23.336563] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.533 [2024-07-14 10:38:23.336569] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.533 [2024-07-14 10:38:23.336572] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.533 [2024-07-14 10:38:23.336575] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d4340) on tqpair=0x2067af0 00:29:38.533 [2024-07-14 10:38:23.336579] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:29:38.533 [2024-07-14 10:38:23.336586] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:29:38.533 [2024-07-14 10:38:23.336592] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.533 [2024-07-14 10:38:23.336595] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.533 [2024-07-14 10:38:23.336598] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2067af0) 00:29:38.533 [2024-07-14 10:38:23.336604] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.533 [2024-07-14 10:38:23.336615] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d4340, cid 0, qid 0 00:29:38.533 [2024-07-14 10:38:23.336680] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.533 [2024-07-14 10:38:23.336686] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:29:38.533 [2024-07-14 10:38:23.336689] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.533 [2024-07-14 10:38:23.336692] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d4340) on tqpair=0x2067af0 00:29:38.533 [2024-07-14 10:38:23.336696] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:38.533 [2024-07-14 10:38:23.336706] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.533 [2024-07-14 10:38:23.336710] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.533 [2024-07-14 10:38:23.336713] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2067af0) 00:29:38.533 [2024-07-14 10:38:23.336719] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.533 [2024-07-14 10:38:23.336728] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d4340, cid 0, qid 0 00:29:38.533 [2024-07-14 10:38:23.336793] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.533 [2024-07-14 10:38:23.336799] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.533 [2024-07-14 10:38:23.336802] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.533 [2024-07-14 10:38:23.336805] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d4340) on tqpair=0x2067af0 00:29:38.533 [2024-07-14 10:38:23.336809] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:29:38.533 [2024-07-14 10:38:23.336813] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:29:38.533 [2024-07-14 10:38:23.336819] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:38.533 [2024-07-14 10:38:23.336924] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:29:38.533 [2024-07-14 10:38:23.336928] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:29:38.533 [2024-07-14 10:38:23.336934] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.533 [2024-07-14 10:38:23.336937] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.533 [2024-07-14 10:38:23.336940] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2067af0) 00:29:38.533 [2024-07-14 10:38:23.336946] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.533 [2024-07-14 10:38:23.336956] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d4340, cid 0, qid 0 00:29:38.533 [2024-07-14 10:38:23.337021] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.533 [2024-07-14 10:38:23.337026] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.534 [2024-07-14 10:38:23.337029] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.534 [2024-07-14 10:38:23.337032] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d4340) on 
tqpair=0x2067af0 00:29:38.534 [2024-07-14 10:38:23.337036] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:38.534 [2024-07-14 10:38:23.337044] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.534 [2024-07-14 10:38:23.337048] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.534 [2024-07-14 10:38:23.337051] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2067af0) 00:29:38.534 [2024-07-14 10:38:23.337056] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.534 [2024-07-14 10:38:23.337065] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d4340, cid 0, qid 0 00:29:38.534 [2024-07-14 10:38:23.337151] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.534 [2024-07-14 10:38:23.337156] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.534 [2024-07-14 10:38:23.337159] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.534 [2024-07-14 10:38:23.337162] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d4340) on tqpair=0x2067af0 00:29:38.534 [2024-07-14 10:38:23.337167] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:38.534 [2024-07-14 10:38:23.337172] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:29:38.534 [2024-07-14 10:38:23.337179] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:29:38.534 [2024-07-14 10:38:23.337186] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:29:38.534 [2024-07-14 10:38:23.337193] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.534 [2024-07-14 10:38:23.337196] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2067af0) 00:29:38.534 [2024-07-14 10:38:23.337202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.534 [2024-07-14 10:38:23.337213] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d4340, cid 0, qid 0 00:29:38.534 [2024-07-14 10:38:23.337328] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:38.534 [2024-07-14 10:38:23.337333] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:38.534 [2024-07-14 10:38:23.337337] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:38.534 [2024-07-14 10:38:23.337340] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2067af0): datao=0, datal=4096, cccid=0 00:29:38.534 [2024-07-14 10:38:23.337344] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20d4340) on tqpair(0x2067af0): expected_datao=0, payload_size=4096 00:29:38.534 [2024-07-14 10:38:23.337348] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.534 [2024-07-14 10:38:23.337359] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:38.534 [2024-07-14 10:38:23.337362] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:38.534 [2024-07-14 10:38:23.378367] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.534 [2024-07-14 10:38:23.378380] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.534 [2024-07-14 10:38:23.378383] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.534 [2024-07-14 10:38:23.378387] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d4340) on tqpair=0x2067af0 00:29:38.534 [2024-07-14 10:38:23.378394] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:29:38.534 [2024-07-14 10:38:23.378401] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:29:38.534 [2024-07-14 10:38:23.378405] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:29:38.534 [2024-07-14 10:38:23.378409] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:29:38.534 [2024-07-14 10:38:23.378413] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:29:38.534 [2024-07-14 10:38:23.378417] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:29:38.534 [2024-07-14 10:38:23.378425] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:29:38.534 [2024-07-14 10:38:23.378432] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.534 [2024-07-14 10:38:23.378435] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.534 [2024-07-14 10:38:23.378439] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2067af0) 00:29:38.534 [2024-07-14 10:38:23.378445] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:38.534 [2024-07-14 10:38:23.378457] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d4340, cid 0, qid 0 00:29:38.534 [2024-07-14 10:38:23.378520] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.534 [2024-07-14 10:38:23.378527] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.534 [2024-07-14 10:38:23.378530] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.534 [2024-07-14 10:38:23.378534] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d4340) on tqpair=0x2067af0 00:29:38.534 [2024-07-14 10:38:23.378539] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.534 [2024-07-14 10:38:23.378542] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.534 [2024-07-14 10:38:23.378546] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2067af0) 00:29:38.534 [2024-07-14 10:38:23.378551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:38.534 [2024-07-14 10:38:23.378556] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.534 [2024-07-14 10:38:23.378560] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.534 [2024-07-14 10:38:23.378563] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2067af0) 00:29:38.534 [2024-07-14 10:38:23.378568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:38.534 [2024-07-14 10:38:23.378572] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.534 [2024-07-14 10:38:23.378576] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.534 [2024-07-14 10:38:23.378579] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2067af0) 00:29:38.534 [2024-07-14 10:38:23.378584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:38.534 [2024-07-14 10:38:23.378588] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.534 [2024-07-14 10:38:23.378592] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.534 [2024-07-14 10:38:23.378595] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2067af0) 00:29:38.534 [2024-07-14 10:38:23.378600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:38.534 [2024-07-14 10:38:23.378604] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:29:38.534 [2024-07-14 10:38:23.378614] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:38.534 [2024-07-14 10:38:23.378620] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.534 [2024-07-14 10:38:23.378623] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2067af0) 00:29:38.534 [2024-07-14 10:38:23.378629] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.534 [2024-07-14 10:38:23.378640] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d4340, cid 0, qid 0 00:29:38.534 [2024-07-14 10:38:23.378645] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d44c0, cid 1, qid 0 00:29:38.534 [2024-07-14 10:38:23.378649] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d4640, cid 2, qid 0 00:29:38.534 [2024-07-14 10:38:23.378653] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d47c0, cid 3, qid 0 00:29:38.534 [2024-07-14 10:38:23.378658] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d4940, cid 4, qid 0 00:29:38.534 [2024-07-14 10:38:23.378758] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.534 [2024-07-14 10:38:23.378764] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.534 [2024-07-14 10:38:23.378768] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.534 [2024-07-14 10:38:23.378771] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d4940) on tqpair=0x2067af0 00:29:38.534 [2024-07-14 10:38:23.378775] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:29:38.534 [2024-07-14 10:38:23.378781] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to 
identify controller iocs specific (timeout 30000 ms) 00:29:38.534 [2024-07-14 10:38:23.378789] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:29:38.534 [2024-07-14 10:38:23.378794] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:29:38.534 [2024-07-14 10:38:23.378800] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.534 [2024-07-14 10:38:23.378803] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.534 [2024-07-14 10:38:23.378806] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2067af0) 00:29:38.534 [2024-07-14 10:38:23.378812] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:38.534 [2024-07-14 10:38:23.378821] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d4940, cid 4, qid 0 00:29:38.534 [2024-07-14 10:38:23.378887] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.535 [2024-07-14 10:38:23.378893] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.535 [2024-07-14 10:38:23.378896] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.535 [2024-07-14 10:38:23.378900] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d4940) on tqpair=0x2067af0 00:29:38.535 [2024-07-14 10:38:23.378951] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:29:38.535 [2024-07-14 10:38:23.378959] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:29:38.535 [2024-07-14 10:38:23.378966] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.535 [2024-07-14 10:38:23.378970] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2067af0) 00:29:38.535 [2024-07-14 10:38:23.378975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.535 [2024-07-14 10:38:23.378985] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d4940, cid 4, qid 0 00:29:38.535 [2024-07-14 10:38:23.379097] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:38.535 [2024-07-14 10:38:23.379102] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:38.535 [2024-07-14 10:38:23.379105] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:38.535 [2024-07-14 10:38:23.379109] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2067af0): datao=0, datal=4096, cccid=4 00:29:38.535 [2024-07-14 10:38:23.379113] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20d4940) on tqpair(0x2067af0): expected_datao=0, payload_size=4096 00:29:38.535 [2024-07-14 10:38:23.379116] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.535 [2024-07-14 10:38:23.379122] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:38.535 [2024-07-14 10:38:23.379126] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:38.535 [2024-07-14 10:38:23.379140] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu 
type = 5 00:29:38.535 [2024-07-14 10:38:23.379146] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.535 [2024-07-14 10:38:23.379149] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.535 [2024-07-14 10:38:23.379152] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d4940) on tqpair=0x2067af0 00:29:38.535 [2024-07-14 10:38:23.379159] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:29:38.535 [2024-07-14 10:38:23.379172] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:29:38.535 [2024-07-14 10:38:23.379181] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:29:38.535 [2024-07-14 10:38:23.379190] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.535 [2024-07-14 10:38:23.379194] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2067af0) 00:29:38.535 [2024-07-14 10:38:23.379200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.535 [2024-07-14 10:38:23.379210] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d4940, cid 4, qid 0 00:29:38.535 [2024-07-14 10:38:23.379299] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:38.535 [2024-07-14 10:38:23.379305] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:38.535 [2024-07-14 10:38:23.379309] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:38.535 [2024-07-14 10:38:23.379312] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2067af0): datao=0, datal=4096, cccid=4 00:29:38.535 [2024-07-14 10:38:23.379316] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20d4940) on tqpair(0x2067af0): expected_datao=0, payload_size=4096 00:29:38.535 [2024-07-14 10:38:23.379319] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.535 [2024-07-14 10:38:23.379325] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:38.535 [2024-07-14 10:38:23.379328] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:38.535 [2024-07-14 10:38:23.379340] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.535 [2024-07-14 10:38:23.379346] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.535 [2024-07-14 10:38:23.379349] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.535 [2024-07-14 10:38:23.379352] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d4940) on tqpair=0x2067af0 00:29:38.535 [2024-07-14 10:38:23.379364] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:29:38.535 [2024-07-14 10:38:23.379373] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:29:38.535 [2024-07-14 10:38:23.379380] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.535 [2024-07-14 10:38:23.379383] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2067af0) 00:29:38.535 [2024-07-14 10:38:23.379389] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.535 [2024-07-14 10:38:23.379398] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d4940, cid 4, qid 0 00:29:38.535 [2024-07-14 10:38:23.379472] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:38.535 [2024-07-14 10:38:23.379478] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:38.535 [2024-07-14 10:38:23.379481] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:38.535 [2024-07-14 10:38:23.379484] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2067af0): datao=0, datal=4096, cccid=4 00:29:38.535 [2024-07-14 10:38:23.379488] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20d4940) on tqpair(0x2067af0): expected_datao=0, payload_size=4096 00:29:38.535 [2024-07-14 10:38:23.379492] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.535 [2024-07-14 10:38:23.379498] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:38.535 [2024-07-14 10:38:23.379501] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:38.535 [2024-07-14 10:38:23.379512] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.535 [2024-07-14 10:38:23.379517] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.535 [2024-07-14 10:38:23.379521] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.535 [2024-07-14 10:38:23.379524] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d4940) on tqpair=0x2067af0 00:29:38.535 [2024-07-14 10:38:23.379532] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:29:38.535 [2024-07-14 10:38:23.379539] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:29:38.535 [2024-07-14 10:38:23.379546] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:29:38.535 [2024-07-14 10:38:23.379552] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:29:38.535 [2024-07-14 10:38:23.379556] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:29:38.535 [2024-07-14 10:38:23.379561] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:29:38.535 [2024-07-14 10:38:23.379565] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:29:38.535 [2024-07-14 10:38:23.379569] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:29:38.535 [2024-07-14 10:38:23.379573] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:29:38.535 [2024-07-14 10:38:23.379585] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.535 [2024-07-14 10:38:23.379589] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x2067af0) 00:29:38.535 [2024-07-14 10:38:23.379594] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.535 [2024-07-14 10:38:23.379600] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.535 [2024-07-14 10:38:23.379603] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.535 [2024-07-14 10:38:23.379606] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2067af0) 00:29:38.535 [2024-07-14 10:38:23.379611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:38.535 [2024-07-14 10:38:23.379622] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d4940, cid 4, qid 0 00:29:38.535 [2024-07-14 10:38:23.379627] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d4ac0, cid 5, qid 0 00:29:38.535 [2024-07-14 10:38:23.379705] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.535 [2024-07-14 10:38:23.379712] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.535 [2024-07-14 10:38:23.379715] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.535 [2024-07-14 10:38:23.379718] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d4940) on tqpair=0x2067af0 00:29:38.535 [2024-07-14 10:38:23.379724] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.535 [2024-07-14 10:38:23.379729] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.535 [2024-07-14 10:38:23.379732] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.535 [2024-07-14 10:38:23.379735] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d4ac0) on tqpair=0x2067af0 00:29:38.535 [2024-07-14 10:38:23.379743] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.535 [2024-07-14 10:38:23.379746] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2067af0) 00:29:38.535 [2024-07-14 10:38:23.379752] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.535 [2024-07-14 10:38:23.379760] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d4ac0, cid 5, qid 0 00:29:38.535 [2024-07-14 10:38:23.379824] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.535 [2024-07-14 10:38:23.379830] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.535 [2024-07-14 10:38:23.379834] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.535 [2024-07-14 10:38:23.379837] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d4ac0) on tqpair=0x2067af0 00:29:38.535 [2024-07-14 10:38:23.379845] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.535 [2024-07-14 10:38:23.379848] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2067af0) 00:29:38.535 [2024-07-14 10:38:23.379854] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.535 [2024-07-14 10:38:23.379862] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d4ac0, cid 5, qid 0 00:29:38.535 [2024-07-14 10:38:23.379934] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.535 [2024-07-14 10:38:23.379940] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.535 [2024-07-14 10:38:23.379943] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.535 [2024-07-14 10:38:23.379946] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d4ac0) on tqpair=0x2067af0 00:29:38.535 [2024-07-14 10:38:23.379954] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.535 [2024-07-14 10:38:23.379958] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2067af0) 00:29:38.535 [2024-07-14 10:38:23.379964] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.535 [2024-07-14 10:38:23.379973] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d4ac0, cid 5, qid 0 00:29:38.535 [2024-07-14 10:38:23.380038] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.535 [2024-07-14 10:38:23.380044] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.535 [2024-07-14 10:38:23.380048] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.535 [2024-07-14 10:38:23.380051] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d4ac0) on tqpair=0x2067af0 00:29:38.536 [2024-07-14 10:38:23.380064] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.536 [2024-07-14 10:38:23.380068] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2067af0) 00:29:38.536 [2024-07-14 10:38:23.380074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.536 [2024-07-14 10:38:23.380079] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.536 [2024-07-14 10:38:23.380083] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2067af0) 00:29:38.536 [2024-07-14 10:38:23.380088] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.536 [2024-07-14 10:38:23.380094] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.536 [2024-07-14 10:38:23.380098] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x2067af0) 00:29:38.536 [2024-07-14 10:38:23.380103] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.536 [2024-07-14 10:38:23.380109] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.536 [2024-07-14 10:38:23.380112] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2067af0) 00:29:38.536 [2024-07-14 10:38:23.380117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.536 [2024-07-14 10:38:23.380128] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d4ac0, cid 5, qid 0 00:29:38.536 [2024-07-14 10:38:23.380133] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d4940, cid 4, qid 0 
00:29:38.536 [2024-07-14 10:38:23.380137] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d4c40, cid 6, qid 0 00:29:38.536 [2024-07-14 10:38:23.380142] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d4dc0, cid 7, qid 0 00:29:38.536 [2024-07-14 10:38:23.384241] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:38.536 [2024-07-14 10:38:23.384249] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:38.536 [2024-07-14 10:38:23.384252] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:38.536 [2024-07-14 10:38:23.384256] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2067af0): datao=0, datal=8192, cccid=5 00:29:38.536 [2024-07-14 10:38:23.384259] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20d4ac0) on tqpair(0x2067af0): expected_datao=0, payload_size=8192 00:29:38.536 [2024-07-14 10:38:23.384263] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.536 [2024-07-14 10:38:23.384269] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:38.536 [2024-07-14 10:38:23.384272] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:38.536 [2024-07-14 10:38:23.384277] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:38.536 [2024-07-14 10:38:23.384282] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:38.536 [2024-07-14 10:38:23.384285] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:38.536 [2024-07-14 10:38:23.384288] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2067af0): datao=0, datal=512, cccid=4 00:29:38.536 [2024-07-14 10:38:23.384292] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20d4940) on tqpair(0x2067af0): expected_datao=0, payload_size=512 00:29:38.536 [2024-07-14 10:38:23.384296] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.536 [2024-07-14 10:38:23.384301] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:38.536 [2024-07-14 10:38:23.384304] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:38.536 [2024-07-14 10:38:23.384308] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:38.536 [2024-07-14 10:38:23.384313] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:38.536 [2024-07-14 10:38:23.384316] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:38.536 [2024-07-14 10:38:23.384319] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2067af0): datao=0, datal=512, cccid=6 00:29:38.536 [2024-07-14 10:38:23.384323] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20d4c40) on tqpair(0x2067af0): expected_datao=0, payload_size=512 00:29:38.536 [2024-07-14 10:38:23.384327] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.536 [2024-07-14 10:38:23.384332] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:38.536 [2024-07-14 10:38:23.384341] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:38.536 [2024-07-14 10:38:23.384346] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:38.536 [2024-07-14 10:38:23.384351] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:38.536 [2024-07-14 10:38:23.384354] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:38.536 [2024-07-14 10:38:23.384357] 
nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2067af0): datao=0, datal=4096, cccid=7 00:29:38.536 [2024-07-14 10:38:23.384361] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20d4dc0) on tqpair(0x2067af0): expected_datao=0, payload_size=4096 00:29:38.536 [2024-07-14 10:38:23.384364] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.536 [2024-07-14 10:38:23.384369] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:38.536 [2024-07-14 10:38:23.384373] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:38.536 [2024-07-14 10:38:23.384377] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.536 [2024-07-14 10:38:23.384382] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.536 [2024-07-14 10:38:23.384385] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.536 [2024-07-14 10:38:23.384388] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d4ac0) on tqpair=0x2067af0 00:29:38.536 [2024-07-14 10:38:23.384399] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.536 [2024-07-14 10:38:23.384406] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.536 [2024-07-14 10:38:23.384409] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.536 [2024-07-14 10:38:23.384412] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d4940) on tqpair=0x2067af0 00:29:38.536 [2024-07-14 10:38:23.384421] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.536 [2024-07-14 10:38:23.384426] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.536 [2024-07-14 10:38:23.384429] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.536 [2024-07-14 10:38:23.384432] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d4c40) on tqpair=0x2067af0 00:29:38.536 [2024-07-14 10:38:23.384438] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.536 [2024-07-14 10:38:23.384443] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.536 [2024-07-14 10:38:23.384445] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.536 [2024-07-14 10:38:23.384449] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d4dc0) on tqpair=0x2067af0 00:29:38.536 ===================================================== 00:29:38.536 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:38.536 ===================================================== 00:29:38.536 Controller Capabilities/Features 00:29:38.536 ================================ 00:29:38.536 Vendor ID: 8086 00:29:38.536 Subsystem Vendor ID: 8086 00:29:38.536 Serial Number: SPDK00000000000001 00:29:38.536 Model Number: SPDK bdev Controller 00:29:38.536 Firmware Version: 24.09 00:29:38.536 Recommended Arb Burst: 6 00:29:38.536 IEEE OUI Identifier: e4 d2 5c 00:29:38.536 Multi-path I/O 00:29:38.536 May have multiple subsystem ports: Yes 00:29:38.536 May have multiple controllers: Yes 00:29:38.536 Associated with SR-IOV VF: No 00:29:38.536 Max Data Transfer Size: 131072 00:29:38.536 Max Number of Namespaces: 32 00:29:38.536 Max Number of I/O Queues: 127 00:29:38.536 NVMe Specification Version (VS): 1.3 00:29:38.536 NVMe Specification Version (Identify): 1.3 00:29:38.536 Maximum Queue Entries: 128 00:29:38.536 Contiguous Queues Required: Yes 00:29:38.536 
Arbitration Mechanisms Supported 00:29:38.536 Weighted Round Robin: Not Supported 00:29:38.536 Vendor Specific: Not Supported 00:29:38.536 Reset Timeout: 15000 ms 00:29:38.536 Doorbell Stride: 4 bytes 00:29:38.536 NVM Subsystem Reset: Not Supported 00:29:38.536 Command Sets Supported 00:29:38.536 NVM Command Set: Supported 00:29:38.536 Boot Partition: Not Supported 00:29:38.536 Memory Page Size Minimum: 4096 bytes 00:29:38.536 Memory Page Size Maximum: 4096 bytes 00:29:38.536 Persistent Memory Region: Not Supported 00:29:38.536 Optional Asynchronous Events Supported 00:29:38.536 Namespace Attribute Notices: Supported 00:29:38.536 Firmware Activation Notices: Not Supported 00:29:38.536 ANA Change Notices: Not Supported 00:29:38.536 PLE Aggregate Log Change Notices: Not Supported 00:29:38.536 LBA Status Info Alert Notices: Not Supported 00:29:38.536 EGE Aggregate Log Change Notices: Not Supported 00:29:38.536 Normal NVM Subsystem Shutdown event: Not Supported 00:29:38.536 Zone Descriptor Change Notices: Not Supported 00:29:38.536 Discovery Log Change Notices: Not Supported 00:29:38.536 Controller Attributes 00:29:38.536 128-bit Host Identifier: Supported 00:29:38.536 Non-Operational Permissive Mode: Not Supported 00:29:38.536 NVM Sets: Not Supported 00:29:38.536 Read Recovery Levels: Not Supported 00:29:38.536 Endurance Groups: Not Supported 00:29:38.536 Predictable Latency Mode: Not Supported 00:29:38.536 Traffic Based Keep ALive: Not Supported 00:29:38.536 Namespace Granularity: Not Supported 00:29:38.536 SQ Associations: Not Supported 00:29:38.536 UUID List: Not Supported 00:29:38.536 Multi-Domain Subsystem: Not Supported 00:29:38.536 Fixed Capacity Management: Not Supported 00:29:38.536 Variable Capacity Management: Not Supported 00:29:38.536 Delete Endurance Group: Not Supported 00:29:38.536 Delete NVM Set: Not Supported 00:29:38.536 Extended LBA Formats Supported: Not Supported 00:29:38.536 Flexible Data Placement Supported: Not Supported 00:29:38.536 00:29:38.536 Controller Memory Buffer Support 00:29:38.536 ================================ 00:29:38.536 Supported: No 00:29:38.536 00:29:38.536 Persistent Memory Region Support 00:29:38.536 ================================ 00:29:38.536 Supported: No 00:29:38.536 00:29:38.536 Admin Command Set Attributes 00:29:38.536 ============================ 00:29:38.536 Security Send/Receive: Not Supported 00:29:38.536 Format NVM: Not Supported 00:29:38.536 Firmware Activate/Download: Not Supported 00:29:38.536 Namespace Management: Not Supported 00:29:38.536 Device Self-Test: Not Supported 00:29:38.536 Directives: Not Supported 00:29:38.536 NVMe-MI: Not Supported 00:29:38.536 Virtualization Management: Not Supported 00:29:38.536 Doorbell Buffer Config: Not Supported 00:29:38.537 Get LBA Status Capability: Not Supported 00:29:38.537 Command & Feature Lockdown Capability: Not Supported 00:29:38.537 Abort Command Limit: 4 00:29:38.537 Async Event Request Limit: 4 00:29:38.537 Number of Firmware Slots: N/A 00:29:38.537 Firmware Slot 1 Read-Only: N/A 00:29:38.537 Firmware Activation Without Reset: N/A 00:29:38.537 Multiple Update Detection Support: N/A 00:29:38.537 Firmware Update Granularity: No Information Provided 00:29:38.537 Per-Namespace SMART Log: No 00:29:38.537 Asymmetric Namespace Access Log Page: Not Supported 00:29:38.537 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:29:38.537 Command Effects Log Page: Supported 00:29:38.537 Get Log Page Extended Data: Supported 00:29:38.537 Telemetry Log Pages: Not Supported 00:29:38.537 Persistent Event Log 
Pages: Not Supported 00:29:38.537 Supported Log Pages Log Page: May Support 00:29:38.537 Commands Supported & Effects Log Page: Not Supported 00:29:38.537 Feature Identifiers & Effects Log Page:May Support 00:29:38.537 NVMe-MI Commands & Effects Log Page: May Support 00:29:38.537 Data Area 4 for Telemetry Log: Not Supported 00:29:38.537 Error Log Page Entries Supported: 128 00:29:38.537 Keep Alive: Supported 00:29:38.537 Keep Alive Granularity: 10000 ms 00:29:38.537 00:29:38.537 NVM Command Set Attributes 00:29:38.537 ========================== 00:29:38.537 Submission Queue Entry Size 00:29:38.537 Max: 64 00:29:38.537 Min: 64 00:29:38.537 Completion Queue Entry Size 00:29:38.537 Max: 16 00:29:38.537 Min: 16 00:29:38.537 Number of Namespaces: 32 00:29:38.537 Compare Command: Supported 00:29:38.537 Write Uncorrectable Command: Not Supported 00:29:38.537 Dataset Management Command: Supported 00:29:38.537 Write Zeroes Command: Supported 00:29:38.537 Set Features Save Field: Not Supported 00:29:38.537 Reservations: Supported 00:29:38.537 Timestamp: Not Supported 00:29:38.537 Copy: Supported 00:29:38.537 Volatile Write Cache: Present 00:29:38.537 Atomic Write Unit (Normal): 1 00:29:38.537 Atomic Write Unit (PFail): 1 00:29:38.537 Atomic Compare & Write Unit: 1 00:29:38.537 Fused Compare & Write: Supported 00:29:38.537 Scatter-Gather List 00:29:38.537 SGL Command Set: Supported 00:29:38.537 SGL Keyed: Supported 00:29:38.537 SGL Bit Bucket Descriptor: Not Supported 00:29:38.537 SGL Metadata Pointer: Not Supported 00:29:38.537 Oversized SGL: Not Supported 00:29:38.537 SGL Metadata Address: Not Supported 00:29:38.537 SGL Offset: Supported 00:29:38.537 Transport SGL Data Block: Not Supported 00:29:38.537 Replay Protected Memory Block: Not Supported 00:29:38.537 00:29:38.537 Firmware Slot Information 00:29:38.537 ========================= 00:29:38.537 Active slot: 1 00:29:38.537 Slot 1 Firmware Revision: 24.09 00:29:38.537 00:29:38.537 00:29:38.537 Commands Supported and Effects 00:29:38.537 ============================== 00:29:38.537 Admin Commands 00:29:38.537 -------------- 00:29:38.537 Get Log Page (02h): Supported 00:29:38.537 Identify (06h): Supported 00:29:38.537 Abort (08h): Supported 00:29:38.537 Set Features (09h): Supported 00:29:38.537 Get Features (0Ah): Supported 00:29:38.537 Asynchronous Event Request (0Ch): Supported 00:29:38.537 Keep Alive (18h): Supported 00:29:38.537 I/O Commands 00:29:38.537 ------------ 00:29:38.537 Flush (00h): Supported LBA-Change 00:29:38.537 Write (01h): Supported LBA-Change 00:29:38.537 Read (02h): Supported 00:29:38.537 Compare (05h): Supported 00:29:38.537 Write Zeroes (08h): Supported LBA-Change 00:29:38.537 Dataset Management (09h): Supported LBA-Change 00:29:38.537 Copy (19h): Supported LBA-Change 00:29:38.537 00:29:38.537 Error Log 00:29:38.537 ========= 00:29:38.537 00:29:38.537 Arbitration 00:29:38.537 =========== 00:29:38.537 Arbitration Burst: 1 00:29:38.537 00:29:38.537 Power Management 00:29:38.537 ================ 00:29:38.537 Number of Power States: 1 00:29:38.537 Current Power State: Power State #0 00:29:38.537 Power State #0: 00:29:38.537 Max Power: 0.00 W 00:29:38.537 Non-Operational State: Operational 00:29:38.537 Entry Latency: Not Reported 00:29:38.537 Exit Latency: Not Reported 00:29:38.537 Relative Read Throughput: 0 00:29:38.537 Relative Read Latency: 0 00:29:38.537 Relative Write Throughput: 0 00:29:38.537 Relative Write Latency: 0 00:29:38.537 Idle Power: Not Reported 00:29:38.537 Active Power: Not Reported 00:29:38.537 
Non-Operational Permissive Mode: Not Supported 00:29:38.537 00:29:38.537 Health Information 00:29:38.537 ================== 00:29:38.537 Critical Warnings: 00:29:38.537 Available Spare Space: OK 00:29:38.537 Temperature: OK 00:29:38.537 Device Reliability: OK 00:29:38.537 Read Only: No 00:29:38.537 Volatile Memory Backup: OK 00:29:38.537 Current Temperature: 0 Kelvin (-273 Celsius) 00:29:38.537 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:29:38.537 Available Spare: 0% 00:29:38.537 Available Spare Threshold: 0% 00:29:38.537 Life Percentage Used:[2024-07-14 10:38:23.384534] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.537 [2024-07-14 10:38:23.384539] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2067af0) 00:29:38.537 [2024-07-14 10:38:23.384545] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.537 [2024-07-14 10:38:23.384558] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d4dc0, cid 7, qid 0 00:29:38.537 [2024-07-14 10:38:23.384713] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.537 [2024-07-14 10:38:23.384719] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.537 [2024-07-14 10:38:23.384722] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.537 [2024-07-14 10:38:23.384725] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d4dc0) on tqpair=0x2067af0 00:29:38.537 [2024-07-14 10:38:23.384754] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:29:38.537 [2024-07-14 10:38:23.384763] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d4340) on tqpair=0x2067af0 00:29:38.537 [2024-07-14 10:38:23.384769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.537 [2024-07-14 10:38:23.384773] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d44c0) on tqpair=0x2067af0 00:29:38.537 [2024-07-14 10:38:23.384777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.537 [2024-07-14 10:38:23.384782] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d4640) on tqpair=0x2067af0 00:29:38.537 [2024-07-14 10:38:23.384785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.537 [2024-07-14 10:38:23.384790] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d47c0) on tqpair=0x2067af0 00:29:38.537 [2024-07-14 10:38:23.384794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.537 [2024-07-14 10:38:23.384800] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.537 [2024-07-14 10:38:23.384804] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.537 [2024-07-14 10:38:23.384807] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2067af0) 00:29:38.537 [2024-07-14 10:38:23.384813] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.537 [2024-07-14 10:38:23.384824] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d47c0, cid 3, qid 0 00:29:38.537 [2024-07-14 10:38:23.384897] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.537 [2024-07-14 10:38:23.384902] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.537 [2024-07-14 10:38:23.384907] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.537 [2024-07-14 10:38:23.384910] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d47c0) on tqpair=0x2067af0 00:29:38.537 [2024-07-14 10:38:23.384916] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.537 [2024-07-14 10:38:23.384919] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.537 [2024-07-14 10:38:23.384921] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2067af0) 00:29:38.537 [2024-07-14 10:38:23.384927] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.537 [2024-07-14 10:38:23.384940] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d47c0, cid 3, qid 0 00:29:38.537 [2024-07-14 10:38:23.385014] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.537 [2024-07-14 10:38:23.385019] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.537 [2024-07-14 10:38:23.385022] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.537 [2024-07-14 10:38:23.385025] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d47c0) on tqpair=0x2067af0 00:29:38.537 [2024-07-14 10:38:23.385030] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:29:38.537 [2024-07-14 10:38:23.385035] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:29:38.537 [2024-07-14 10:38:23.385043] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.537 [2024-07-14 10:38:23.385046] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.537 [2024-07-14 10:38:23.385049] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2067af0) 00:29:38.537 [2024-07-14 10:38:23.385055] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.537 [2024-07-14 10:38:23.385064] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d47c0, cid 3, qid 0 00:29:38.537 [2024-07-14 10:38:23.385135] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.537 [2024-07-14 10:38:23.385141] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.537 [2024-07-14 10:38:23.385144] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.537 [2024-07-14 10:38:23.385147] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d47c0) on tqpair=0x2067af0 00:29:38.537 [2024-07-14 10:38:23.385157] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.538 [2024-07-14 10:38:23.385160] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.538 [2024-07-14 10:38:23.385163] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2067af0) 00:29:38.538 [2024-07-14 10:38:23.385169] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.538 [2024-07-14 10:38:23.385178] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d47c0, cid 3, qid 0 00:29:38.538 [2024-07-14 10:38:23.385251] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.538 [2024-07-14 10:38:23.385257] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.538 [2024-07-14 10:38:23.385260] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.538 [2024-07-14 10:38:23.385263] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d47c0) on tqpair=0x2067af0 00:29:38.538 [2024-07-14 10:38:23.385271] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.538 [2024-07-14 10:38:23.385275] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.538 [2024-07-14 10:38:23.385278] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2067af0) 00:29:38.538 [2024-07-14 10:38:23.385283] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.538 [2024-07-14 10:38:23.385292] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d47c0, cid 3, qid 0 00:29:38.538 [2024-07-14 10:38:23.385354] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.538 [2024-07-14 10:38:23.385360] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.538 [2024-07-14 10:38:23.385363] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.538 [2024-07-14 10:38:23.385366] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d47c0) on tqpair=0x2067af0 00:29:38.538 [2024-07-14 10:38:23.385373] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.538 [2024-07-14 10:38:23.385377] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.538 [2024-07-14 10:38:23.385380] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2067af0) 00:29:38.538 [2024-07-14 10:38:23.385386] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.538 [2024-07-14 10:38:23.385395] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d47c0, cid 3, qid 0 00:29:38.538 [2024-07-14 10:38:23.385459] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.538 [2024-07-14 10:38:23.385464] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.538 [2024-07-14 10:38:23.385467] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.538 [2024-07-14 10:38:23.385471] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d47c0) on tqpair=0x2067af0 00:29:38.538 [2024-07-14 10:38:23.385479] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.538 [2024-07-14 10:38:23.385483] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.538 [2024-07-14 10:38:23.385486] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2067af0) 00:29:38.538 [2024-07-14 10:38:23.385491] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.538 [2024-07-14 10:38:23.385500] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d47c0, cid 3, qid 0 00:29:38.538 [2024-07-14 
10:38:23.385572] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.538 [2024-07-14 10:38:23.385578] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.538 [2024-07-14 10:38:23.385580] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.538 [2024-07-14 10:38:23.385584] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d47c0) on tqpair=0x2067af0 00:29:38.538 [2024-07-14 10:38:23.385592] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.538 [2024-07-14 10:38:23.385596] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.538 [2024-07-14 10:38:23.385599] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2067af0) 00:29:38.538 [2024-07-14 10:38:23.385604] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.538 [2024-07-14 10:38:23.385614] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d47c0, cid 3, qid 0 00:29:38.538 [2024-07-14 10:38:23.385674] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.538 [2024-07-14 10:38:23.385679] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.538 [2024-07-14 10:38:23.385682] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.538 [2024-07-14 10:38:23.385685] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d47c0) on tqpair=0x2067af0 00:29:38.538 [2024-07-14 10:38:23.385693] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.538 [2024-07-14 10:38:23.385696] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.538 [2024-07-14 10:38:23.385699] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2067af0) 00:29:38.538 [2024-07-14 10:38:23.385705] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.538 [2024-07-14 10:38:23.385713] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d47c0, cid 3, qid 0 00:29:38.538 [2024-07-14 10:38:23.385778] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.538 [2024-07-14 10:38:23.385784] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.538 [2024-07-14 10:38:23.385789] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.538 [2024-07-14 10:38:23.385795] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d47c0) on tqpair=0x2067af0 00:29:38.538 [2024-07-14 10:38:23.385806] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.538 [2024-07-14 10:38:23.385810] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.538 [2024-07-14 10:38:23.385814] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2067af0) 00:29:38.538 [2024-07-14 10:38:23.385821] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.538 [2024-07-14 10:38:23.385832] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d47c0, cid 3, qid 0 00:29:38.538 [2024-07-14 10:38:23.385896] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.538 [2024-07-14 10:38:23.385902] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.538 
[2024-07-14 10:38:23.385905] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.538 [2024-07-14 10:38:23.385909] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d47c0) on tqpair=0x2067af0 00:29:38.538 [2024-07-14 10:38:23.385917] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.538 [2024-07-14 10:38:23.385920] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.538 [2024-07-14 10:38:23.385923] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2067af0) 00:29:38.538 [2024-07-14 10:38:23.385929] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.538 [2024-07-14 10:38:23.385938] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d47c0, cid 3, qid 0 00:29:38.538 [2024-07-14 10:38:23.386002] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.538 [2024-07-14 10:38:23.386008] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.538 [2024-07-14 10:38:23.386011] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.538 [2024-07-14 10:38:23.386014] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d47c0) on tqpair=0x2067af0 00:29:38.538 [2024-07-14 10:38:23.386022] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.538 [2024-07-14 10:38:23.386025] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.538 [2024-07-14 10:38:23.386028] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2067af0) 00:29:38.538 [2024-07-14 10:38:23.386034] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.538 [2024-07-14 10:38:23.386042] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d47c0, cid 3, qid 0 00:29:38.538 [2024-07-14 10:38:23.386101] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.538 [2024-07-14 10:38:23.386107] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.538 [2024-07-14 10:38:23.386110] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.538 [2024-07-14 10:38:23.386113] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d47c0) on tqpair=0x2067af0 00:29:38.538 [2024-07-14 10:38:23.386120] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.538 [2024-07-14 10:38:23.386124] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.538 [2024-07-14 10:38:23.386127] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2067af0) 00:29:38.538 [2024-07-14 10:38:23.386133] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.538 [2024-07-14 10:38:23.386143] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d47c0, cid 3, qid 0 00:29:38.538 [2024-07-14 10:38:23.386207] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.538 [2024-07-14 10:38:23.386212] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.538 [2024-07-14 10:38:23.386215] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.538 [2024-07-14 10:38:23.386219] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x20d47c0) on tqpair=0x2067af0 00:29:38.538 [2024-07-14 10:38:23.386233] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.538 [2024-07-14 10:38:23.386237] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.538 [2024-07-14 10:38:23.386240] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2067af0) 00:29:38.538 [2024-07-14 10:38:23.386245] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.538 [2024-07-14 10:38:23.386254] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d47c0, cid 3, qid 0 00:29:38.538 [2024-07-14 10:38:23.386323] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.538 [2024-07-14 10:38:23.386330] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.538 [2024-07-14 10:38:23.386333] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.538 [2024-07-14 10:38:23.386337] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d47c0) on tqpair=0x2067af0 00:29:38.538 [2024-07-14 10:38:23.386345] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.539 [2024-07-14 10:38:23.386349] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.539 [2024-07-14 10:38:23.386352] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2067af0) 00:29:38.539 [2024-07-14 10:38:23.386358] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.539 [2024-07-14 10:38:23.386367] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d47c0, cid 3, qid 0 00:29:38.539 [2024-07-14 10:38:23.386434] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.539 [2024-07-14 10:38:23.386441] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.539 [2024-07-14 10:38:23.386445] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.539 [2024-07-14 10:38:23.386448] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d47c0) on tqpair=0x2067af0 00:29:38.539 [2024-07-14 10:38:23.386456] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.539 [2024-07-14 10:38:23.386459] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.539 [2024-07-14 10:38:23.386462] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2067af0) 00:29:38.539 [2024-07-14 10:38:23.386467] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.539 [2024-07-14 10:38:23.386479] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d47c0, cid 3, qid 0 00:29:38.539 [2024-07-14 10:38:23.386541] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.539 [2024-07-14 10:38:23.386547] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.539 [2024-07-14 10:38:23.386549] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.539 [2024-07-14 10:38:23.386553] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d47c0) on tqpair=0x2067af0 00:29:38.539 [2024-07-14 10:38:23.386562] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.539 [2024-07-14 10:38:23.386566] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.539 [2024-07-14 10:38:23.386569] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2067af0) 00:29:38.539 [2024-07-14 10:38:23.386576] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.539 [2024-07-14 10:38:23.386585] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d47c0, cid 3, qid 0 00:29:38.539 [2024-07-14 10:38:23.386647] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.539 [2024-07-14 10:38:23.386653] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.539 [2024-07-14 10:38:23.386656] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.539 [2024-07-14 10:38:23.386659] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d47c0) on tqpair=0x2067af0 00:29:38.539 [2024-07-14 10:38:23.386668] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.539 [2024-07-14 10:38:23.386672] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.539 [2024-07-14 10:38:23.386676] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2067af0) 00:29:38.539 [2024-07-14 10:38:23.386682] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.539 [2024-07-14 10:38:23.386692] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d47c0, cid 3, qid 0 00:29:38.539 [2024-07-14 10:38:23.386753] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.539 [2024-07-14 10:38:23.386759] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.539 [2024-07-14 10:38:23.386762] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.539 [2024-07-14 10:38:23.386766] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d47c0) on tqpair=0x2067af0 00:29:38.539 [2024-07-14 10:38:23.386773] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.539 [2024-07-14 10:38:23.386777] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.539 [2024-07-14 10:38:23.386780] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2067af0) 00:29:38.539 [2024-07-14 10:38:23.386785] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.539 [2024-07-14 10:38:23.386794] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d47c0, cid 3, qid 0 00:29:38.539 [2024-07-14 10:38:23.386857] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.539 [2024-07-14 10:38:23.386866] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.539 [2024-07-14 10:38:23.386869] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.539 [2024-07-14 10:38:23.386872] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d47c0) on tqpair=0x2067af0 00:29:38.539 [2024-07-14 10:38:23.386879] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.539 [2024-07-14 10:38:23.386883] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.539 [2024-07-14 10:38:23.386886] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2067af0) 00:29:38.539 
[2024-07-14 10:38:23.386892] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.539 [2024-07-14 10:38:23.386901] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d47c0, cid 3, qid 0 00:29:38.539 [2024-07-14 10:38:23.386966] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.539 [2024-07-14 10:38:23.386972] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.539 [2024-07-14 10:38:23.386975] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.539 [2024-07-14 10:38:23.386978] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d47c0) on tqpair=0x2067af0 00:29:38.539 [2024-07-14 10:38:23.386987] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.539 [2024-07-14 10:38:23.386990] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.539 [2024-07-14 10:38:23.386993] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2067af0) 00:29:38.539 [2024-07-14 10:38:23.386999] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.539 [2024-07-14 10:38:23.387007] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d47c0, cid 3, qid 0 00:29:38.539 [2024-07-14 10:38:23.387070] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.539 [2024-07-14 10:38:23.387076] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.539 [2024-07-14 10:38:23.387079] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.539 [2024-07-14 10:38:23.387082] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d47c0) on tqpair=0x2067af0 00:29:38.539 [2024-07-14 10:38:23.387090] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.539 [2024-07-14 10:38:23.387095] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.539 [2024-07-14 10:38:23.387098] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2067af0) 00:29:38.539 [2024-07-14 10:38:23.387104] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.539 [2024-07-14 10:38:23.387112] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d47c0, cid 3, qid 0 00:29:38.539 [2024-07-14 10:38:23.387178] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.539 [2024-07-14 10:38:23.387184] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.539 [2024-07-14 10:38:23.387187] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.539 [2024-07-14 10:38:23.387190] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d47c0) on tqpair=0x2067af0 00:29:38.539 [2024-07-14 10:38:23.387197] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.539 [2024-07-14 10:38:23.387201] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.539 [2024-07-14 10:38:23.387204] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2067af0) 00:29:38.539 [2024-07-14 10:38:23.387209] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.539 [2024-07-14 10:38:23.387218] 
nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d47c0, cid 3, qid 0 00:29:38.539 [2024-07-14 10:38:23.387290] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.539 [2024-07-14 10:38:23.387296] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.539 [2024-07-14 10:38:23.387299] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.539 [2024-07-14 10:38:23.387302] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d47c0) on tqpair=0x2067af0 00:29:38.539 [2024-07-14 10:38:23.387310] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.539 [2024-07-14 10:38:23.387314] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.539 [2024-07-14 10:38:23.387317] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2067af0) 00:29:38.539 [2024-07-14 10:38:23.387322] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.539 [2024-07-14 10:38:23.387332] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d47c0, cid 3, qid 0 00:29:38.539 [2024-07-14 10:38:23.387400] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.539 [2024-07-14 10:38:23.387406] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.539 [2024-07-14 10:38:23.387408] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.539 [2024-07-14 10:38:23.387411] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d47c0) on tqpair=0x2067af0 00:29:38.539 [2024-07-14 10:38:23.387419] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.539 [2024-07-14 10:38:23.387423] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.539 [2024-07-14 10:38:23.387426] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2067af0) 00:29:38.539 [2024-07-14 10:38:23.387431] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.539 [2024-07-14 10:38:23.387441] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d47c0, cid 3, qid 0 00:29:38.539 [2024-07-14 10:38:23.387506] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.539 [2024-07-14 10:38:23.387512] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.539 [2024-07-14 10:38:23.387514] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.539 [2024-07-14 10:38:23.387518] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d47c0) on tqpair=0x2067af0 00:29:38.539 [2024-07-14 10:38:23.387526] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.539 [2024-07-14 10:38:23.387529] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.539 [2024-07-14 10:38:23.387533] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2067af0) 00:29:38.539 [2024-07-14 10:38:23.387539] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.539 [2024-07-14 10:38:23.387549] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d47c0, cid 3, qid 0 00:29:38.539 [2024-07-14 10:38:23.387611] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.539 
[2024-07-14 10:38:23.387617] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.539 [2024-07-14 10:38:23.387620] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.539 [2024-07-14 10:38:23.387623] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d47c0) on tqpair=0x2067af0 00:29:38.539 [2024-07-14 10:38:23.387631] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.539 [2024-07-14 10:38:23.387634] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.539 [2024-07-14 10:38:23.387638] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2067af0) 00:29:38.539 [2024-07-14 10:38:23.387643] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.539 [2024-07-14 10:38:23.387652] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d47c0, cid 3, qid 0 00:29:38.539 [2024-07-14 10:38:23.387714] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.539 [2024-07-14 10:38:23.387720] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.540 [2024-07-14 10:38:23.387723] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.540 [2024-07-14 10:38:23.387726] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d47c0) on tqpair=0x2067af0 00:29:38.540 [2024-07-14 10:38:23.387734] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.540 [2024-07-14 10:38:23.387737] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.540 [2024-07-14 10:38:23.387740] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2067af0) 00:29:38.540 [2024-07-14 10:38:23.387746] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.540 [2024-07-14 10:38:23.387754] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d47c0, cid 3, qid 0 00:29:38.540 [2024-07-14 10:38:23.387813] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.540 [2024-07-14 10:38:23.387818] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.540 [2024-07-14 10:38:23.387821] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.540 [2024-07-14 10:38:23.387825] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d47c0) on tqpair=0x2067af0 00:29:38.540 [2024-07-14 10:38:23.387832] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.540 [2024-07-14 10:38:23.387836] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.540 [2024-07-14 10:38:23.387839] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2067af0) 00:29:38.540 [2024-07-14 10:38:23.387844] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.540 [2024-07-14 10:38:23.387853] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d47c0, cid 3, qid 0 00:29:38.540 [2024-07-14 10:38:23.387918] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.540 [2024-07-14 10:38:23.387924] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.540 [2024-07-14 10:38:23.387927] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:29:38.540 [2024-07-14 10:38:23.387930] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d47c0) on tqpair=0x2067af0 00:29:38.540 [2024-07-14 10:38:23.387938] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.540 [2024-07-14 10:38:23.387941] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.540 [2024-07-14 10:38:23.387944] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2067af0) 00:29:38.540 [2024-07-14 10:38:23.387953] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.540 [2024-07-14 10:38:23.387963] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d47c0, cid 3, qid 0 00:29:38.540 [2024-07-14 10:38:23.388030] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.540 [2024-07-14 10:38:23.388035] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.540 [2024-07-14 10:38:23.388038] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.540 [2024-07-14 10:38:23.388041] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d47c0) on tqpair=0x2067af0 00:29:38.540 [2024-07-14 10:38:23.388050] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.540 [2024-07-14 10:38:23.388053] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.540 [2024-07-14 10:38:23.388056] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2067af0) 00:29:38.540 [2024-07-14 10:38:23.388062] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.540 [2024-07-14 10:38:23.388071] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d47c0, cid 3, qid 0 00:29:38.540 [2024-07-14 10:38:23.388135] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.540 [2024-07-14 10:38:23.388141] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.540 [2024-07-14 10:38:23.388144] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.540 [2024-07-14 10:38:23.388147] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d47c0) on tqpair=0x2067af0 00:29:38.540 [2024-07-14 10:38:23.388155] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.540 [2024-07-14 10:38:23.388158] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.540 [2024-07-14 10:38:23.388161] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2067af0) 00:29:38.540 [2024-07-14 10:38:23.388167] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.540 [2024-07-14 10:38:23.388175] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d47c0, cid 3, qid 0 00:29:38.540 [2024-07-14 10:38:23.392257] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.540 [2024-07-14 10:38:23.392267] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.540 [2024-07-14 10:38:23.392270] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.540 [2024-07-14 10:38:23.392273] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d47c0) on tqpair=0x2067af0 00:29:38.540 [2024-07-14 10:38:23.392284] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.540 [2024-07-14 10:38:23.392287] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.540 [2024-07-14 10:38:23.392290] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2067af0) 00:29:38.540 [2024-07-14 10:38:23.392297] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.540 [2024-07-14 10:38:23.392308] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d47c0, cid 3, qid 0 00:29:38.540 [2024-07-14 10:38:23.392374] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.540 [2024-07-14 10:38:23.392380] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.540 [2024-07-14 10:38:23.392383] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.540 [2024-07-14 10:38:23.392386] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d47c0) on tqpair=0x2067af0 00:29:38.540 [2024-07-14 10:38:23.392393] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:29:38.540 0% 00:29:38.540 Data Units Read: 0 00:29:38.540 Data Units Written: 0 00:29:38.540 Host Read Commands: 0 00:29:38.540 Host Write Commands: 0 00:29:38.540 Controller Busy Time: 0 minutes 00:29:38.540 Power Cycles: 0 00:29:38.540 Power On Hours: 0 hours 00:29:38.540 Unsafe Shutdowns: 0 00:29:38.540 Unrecoverable Media Errors: 0 00:29:38.540 Lifetime Error Log Entries: 0 00:29:38.540 Warning Temperature Time: 0 minutes 00:29:38.540 Critical Temperature Time: 0 minutes 00:29:38.540 00:29:38.540 Number of Queues 00:29:38.540 ================ 00:29:38.540 Number of I/O Submission Queues: 127 00:29:38.540 Number of I/O Completion Queues: 127 00:29:38.540 00:29:38.540 Active Namespaces 00:29:38.540 ================= 00:29:38.540 Namespace ID:1 00:29:38.540 Error Recovery Timeout: Unlimited 00:29:38.540 Command Set Identifier: NVM (00h) 00:29:38.540 Deallocate: Supported 00:29:38.540 Deallocated/Unwritten Error: Not Supported 00:29:38.540 Deallocated Read Value: Unknown 00:29:38.540 Deallocate in Write Zeroes: Not Supported 00:29:38.540 Deallocated Guard Field: 0xFFFF 00:29:38.540 Flush: Supported 00:29:38.540 Reservation: Supported 00:29:38.540 Namespace Sharing Capabilities: Multiple Controllers 00:29:38.540 Size (in LBAs): 131072 (0GiB) 00:29:38.540 Capacity (in LBAs): 131072 (0GiB) 00:29:38.540 Utilization (in LBAs): 131072 (0GiB) 00:29:38.540 NGUID: ABCDEF0123456789ABCDEF0123456789 00:29:38.540 EUI64: ABCDEF0123456789 00:29:38.540 UUID: 9d95d0b9-b432-41ac-9cfe-7822c6ec2259 00:29:38.540 Thin Provisioning: Not Supported 00:29:38.540 Per-NS Atomic Units: Yes 00:29:38.540 Atomic Boundary Size (Normal): 0 00:29:38.540 Atomic Boundary Size (PFail): 0 00:29:38.540 Atomic Boundary Offset: 0 00:29:38.540 Maximum Single Source Range Length: 65535 00:29:38.540 Maximum Copy Length: 65535 00:29:38.540 Maximum Source Range Count: 1 00:29:38.540 NGUID/EUI64 Never Reused: No 00:29:38.540 Namespace Write Protected: No 00:29:38.540 Number of LBA Formats: 1 00:29:38.540 Current LBA Format: LBA Format #00 00:29:38.540 LBA Format #00: Data Size: 512 Metadata Size: 0 00:29:38.540 00:29:38.540 10:38:23 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:29:38.540 10:38:23 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:38.540 
10:38:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:38.540 10:38:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:38.540 10:38:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:38.540 10:38:23 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:29:38.540 10:38:23 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:29:38.540 10:38:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:38.540 10:38:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:29:38.540 10:38:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:38.540 10:38:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:29:38.540 10:38:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:38.540 10:38:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:38.540 rmmod nvme_tcp 00:29:38.540 rmmod nvme_fabrics 00:29:38.540 rmmod nvme_keyring 00:29:38.540 10:38:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:38.540 10:38:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:29:38.540 10:38:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:29:38.540 10:38:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 2537630 ']' 00:29:38.540 10:38:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 2537630 00:29:38.540 10:38:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 2537630 ']' 00:29:38.540 10:38:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 2537630 00:29:38.540 10:38:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:29:38.540 10:38:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:38.540 10:38:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2537630 00:29:38.799 10:38:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:38.799 10:38:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:38.799 10:38:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2537630' 00:29:38.799 killing process with pid 2537630 00:29:38.799 10:38:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 2537630 00:29:38.799 10:38:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 2537630 00:29:38.799 10:38:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:38.799 10:38:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:38.799 10:38:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:38.799 10:38:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:38.799 10:38:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:38.799 10:38:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:38.799 10:38:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:38.799 10:38:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:41.361 10:38:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:41.361 00:29:41.361 real 0m8.934s 00:29:41.361 user 
0m4.982s 00:29:41.361 sys 0m4.688s 00:29:41.361 10:38:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:41.361 10:38:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:41.361 ************************************ 00:29:41.361 END TEST nvmf_identify 00:29:41.361 ************************************ 00:29:41.361 10:38:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:41.361 10:38:25 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:41.361 10:38:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:41.361 10:38:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:41.361 10:38:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:41.361 ************************************ 00:29:41.361 START TEST nvmf_perf 00:29:41.361 ************************************ 00:29:41.361 10:38:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:41.361 * Looking for test storage... 00:29:41.361 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:41.361 10:38:25 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:41.361 10:38:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:29:41.361 10:38:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:41.361 10:38:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:41.361 10:38:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:41.361 10:38:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:41.361 10:38:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:41.361 10:38:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:41.361 10:38:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:41.361 10:38:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:41.361 10:38:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:41.361 10:38:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:41.361 10:38:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:41.361 10:38:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:41.361 10:38:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:41.361 10:38:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:41.361 10:38:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:41.362 10:38:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:41.362 10:38:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:41.362 10:38:25 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:41.362 10:38:25 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:41.362 10:38:25 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:29:41.362 10:38:25 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.362 10:38:25 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.362 10:38:25 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.362 10:38:25 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:29:41.362 10:38:25 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.362 10:38:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:29:41.362 10:38:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:41.362 10:38:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:41.362 10:38:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:41.362 10:38:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:41.362 10:38:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:41.362 10:38:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:41.362 10:38:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:41.362 10:38:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:41.362 10:38:25 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:29:41.362 10:38:25 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:41.362 10:38:25 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:41.362 10:38:25 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:29:41.362 10:38:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:41.362 10:38:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:41.362 10:38:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:41.362 10:38:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:41.362 10:38:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:41.362 10:38:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:41.362 10:38:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:41.362 10:38:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:41.362 10:38:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:41.362 10:38:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:41.362 10:38:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:29:41.362 10:38:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:46.633 10:38:31 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:46.633 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:46.633 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:46.633 Found net devices under 0000:86:00.0: cvl_0_0 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:46.633 Found net devices under 0000:86:00.1: cvl_0_1 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:46.633 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:46.893 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:46.893 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:46.893 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:29:46.893 00:29:46.893 --- 10.0.0.2 ping statistics --- 00:29:46.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:46.893 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:29:46.893 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:46.893 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:46.893 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:29:46.893 00:29:46.893 --- 10.0.0.1 ping statistics --- 00:29:46.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:46.893 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:29:46.893 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:46.893 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:29:46.893 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:46.893 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:46.893 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:46.893 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:46.893 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:46.893 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:46.893 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:46.893 10:38:31 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:29:46.893 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:46.893 10:38:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:46.893 10:38:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:46.893 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=2541170 00:29:46.893 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 2541170 00:29:46.893 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:46.893 10:38:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 2541170 ']' 00:29:46.893 10:38:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:46.893 10:38:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:46.893 10:38:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:46.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:46.893 10:38:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:46.893 10:38:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:46.893 [2024-07-14 10:38:31.718669] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
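For readers reconstructing the setup from the nvmf_tcp_init trace above: nvmftestinit splits the two detected E810 ports into a small point-to-point test network, with the target-side port isolated in a network namespace. A condensed sketch of the commands visible in the trace follows; the interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are exactly what the log prints, but the grouping and comments are an editorial simplification, not the verbatim nvmf/common.sh source.

  # target-side port moves into its own namespace and gets the target IP
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # initiator-side port stays in the root namespace with the initiator IP
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip link set cvl_0_1 up

  # open the NVMe/TCP port and verify reachability in both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The nvmfappstart trace just above then launches the target inside that namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF), which is why target-side commands later in the log appear wrapped in ip netns exec.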
00:29:46.893 [2024-07-14 10:38:31.718715] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:46.893 EAL: No free 2048 kB hugepages reported on node 1 00:29:46.893 [2024-07-14 10:38:31.790705] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:46.893 [2024-07-14 10:38:31.832420] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:46.893 [2024-07-14 10:38:31.832458] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:46.893 [2024-07-14 10:38:31.832466] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:46.893 [2024-07-14 10:38:31.832473] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:46.893 [2024-07-14 10:38:31.832478] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:46.893 [2024-07-14 10:38:31.832543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:46.893 [2024-07-14 10:38:31.832648] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:46.893 [2024-07-14 10:38:31.832756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:46.893 [2024-07-14 10:38:31.832757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:47.153 10:38:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:47.153 10:38:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:29:47.153 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:47.153 10:38:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:47.153 10:38:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:47.153 10:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:47.153 10:38:31 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:47.153 10:38:31 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:29:50.440 10:38:34 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:29:50.440 10:38:34 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:29:50.440 10:38:35 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:29:50.440 10:38:35 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:50.440 10:38:35 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:29:50.440 10:38:35 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:29:50.440 10:38:35 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:29:50.440 10:38:35 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:29:50.440 10:38:35 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:29:50.699 [2024-07-14 10:38:35.551838] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP 
Transport Init *** 00:29:50.699 10:38:35 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:50.957 10:38:35 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:29:50.957 10:38:35 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:51.216 10:38:35 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:29:51.216 10:38:35 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:29:51.216 10:38:36 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:51.474 [2024-07-14 10:38:36.278580] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:51.474 10:38:36 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:51.731 10:38:36 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:29:51.731 10:38:36 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:29:51.731 10:38:36 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:29:51.731 10:38:36 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:29:53.109 Initializing NVMe Controllers 00:29:53.109 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:29:53.110 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:29:53.110 Initialization complete. Launching workers. 00:29:53.110 ======================================================== 00:29:53.110 Latency(us) 00:29:53.110 Device Information : IOPS MiB/s Average min max 00:29:53.110 PCIE (0000:5e:00.0) NSID 1 from core 0: 97742.10 381.81 327.00 39.12 7206.49 00:29:53.110 ======================================================== 00:29:53.110 Total : 97742.10 381.81 327.00 39.12 7206.49 00:29:53.110 00:29:53.110 10:38:37 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:53.110 EAL: No free 2048 kB hugepages reported on node 1 00:29:54.067 Initializing NVMe Controllers 00:29:54.067 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:54.067 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:54.067 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:54.067 Initialization complete. Launching workers. 
00:29:54.067 ======================================================== 00:29:54.067 Latency(us) 00:29:54.067 Device Information : IOPS MiB/s Average min max 00:29:54.067 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 105.73 0.41 9704.26 115.72 45873.35 00:29:54.067 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 81.79 0.32 12809.53 5781.89 47883.82 00:29:54.067 ======================================================== 00:29:54.067 Total : 187.52 0.73 11058.68 115.72 47883.82 00:29:54.067 00:29:54.067 10:38:38 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:54.067 EAL: No free 2048 kB hugepages reported on node 1 00:29:55.443 Initializing NVMe Controllers 00:29:55.443 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:55.443 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:55.443 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:55.443 Initialization complete. Launching workers. 00:29:55.443 ======================================================== 00:29:55.443 Latency(us) 00:29:55.443 Device Information : IOPS MiB/s Average min max 00:29:55.443 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11057.94 43.20 2896.42 462.84 8571.86 00:29:55.443 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3815.94 14.91 8404.81 6541.60 16056.07 00:29:55.443 ======================================================== 00:29:55.443 Total : 14873.88 58.10 4309.61 462.84 16056.07 00:29:55.443 00:29:55.443 10:38:40 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:29:55.443 10:38:40 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:29:55.443 10:38:40 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:55.443 EAL: No free 2048 kB hugepages reported on node 1 00:29:57.977 Initializing NVMe Controllers 00:29:57.977 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:57.977 Controller IO queue size 128, less than required. 00:29:57.977 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:57.977 Controller IO queue size 128, less than required. 00:29:57.977 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:57.977 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:57.977 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:57.977 Initialization complete. Launching workers. 
00:29:57.977 ======================================================== 00:29:57.977 Latency(us) 00:29:57.977 Device Information : IOPS MiB/s Average min max 00:29:57.977 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1895.92 473.98 68331.59 43917.98 119132.02 00:29:57.977 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 603.52 150.88 218513.56 69887.67 361968.05 00:29:57.977 ======================================================== 00:29:57.977 Total : 2499.45 624.86 104594.82 43917.98 361968.05 00:29:57.977 00:29:57.977 10:38:42 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:29:57.977 EAL: No free 2048 kB hugepages reported on node 1 00:29:57.977 No valid NVMe controllers or AIO or URING devices found 00:29:57.977 Initializing NVMe Controllers 00:29:57.977 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:57.977 Controller IO queue size 128, less than required. 00:29:57.977 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:57.977 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:29:57.977 Controller IO queue size 128, less than required. 00:29:57.977 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:57.977 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:29:57.977 WARNING: Some requested NVMe devices were skipped 00:29:57.977 10:38:42 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:29:57.977 EAL: No free 2048 kB hugepages reported on node 1 00:30:01.264 Initializing NVMe Controllers 00:30:01.264 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:01.264 Controller IO queue size 128, less than required. 00:30:01.264 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:01.264 Controller IO queue size 128, less than required. 00:30:01.264 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:01.264 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:01.264 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:01.264 Initialization complete. Launching workers. 
00:30:01.264 00:30:01.264 ==================== 00:30:01.264 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:30:01.264 TCP transport: 00:30:01.264 polls: 18578 00:30:01.264 idle_polls: 13942 00:30:01.264 sock_completions: 4636 00:30:01.264 nvme_completions: 6503 00:30:01.264 submitted_requests: 9802 00:30:01.264 queued_requests: 1 00:30:01.264 00:30:01.264 ==================== 00:30:01.264 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:30:01.264 TCP transport: 00:30:01.264 polls: 18420 00:30:01.264 idle_polls: 12632 00:30:01.264 sock_completions: 5788 00:30:01.264 nvme_completions: 7315 00:30:01.264 submitted_requests: 11008 00:30:01.264 queued_requests: 1 00:30:01.264 ======================================================== 00:30:01.264 Latency(us) 00:30:01.264 Device Information : IOPS MiB/s Average min max 00:30:01.264 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1625.37 406.34 80691.92 45514.63 141782.81 00:30:01.264 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1828.36 457.09 70421.59 31835.56 106498.89 00:30:01.264 ======================================================== 00:30:01.264 Total : 3453.73 863.43 75254.95 31835.56 141782.81 00:30:01.264 00:30:01.264 10:38:45 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:30:01.264 10:38:45 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:01.264 10:38:45 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:30:01.264 10:38:45 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:5e:00.0 ']' 00:30:01.264 10:38:45 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:30:04.554 10:38:48 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # ls_guid=25f5925b-9b22-4bf6-b31d-0808f661b2d1 00:30:04.555 10:38:48 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 25f5925b-9b22-4bf6-b31d-0808f661b2d1 00:30:04.555 10:38:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=25f5925b-9b22-4bf6-b31d-0808f661b2d1 00:30:04.555 10:38:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:30:04.555 10:38:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:30:04.555 10:38:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:30:04.555 10:38:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:04.555 10:38:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:30:04.555 { 00:30:04.555 "uuid": "25f5925b-9b22-4bf6-b31d-0808f661b2d1", 00:30:04.555 "name": "lvs_0", 00:30:04.555 "base_bdev": "Nvme0n1", 00:30:04.555 "total_data_clusters": 238234, 00:30:04.555 "free_clusters": 238234, 00:30:04.555 "block_size": 512, 00:30:04.555 "cluster_size": 4194304 00:30:04.555 } 00:30:04.555 ]' 00:30:04.555 10:38:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="25f5925b-9b22-4bf6-b31d-0808f661b2d1") .free_clusters' 00:30:04.555 10:38:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=238234 00:30:04.555 10:38:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="25f5925b-9b22-4bf6-b31d-0808f661b2d1") .cluster_size' 00:30:04.555 10:38:49 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:30:04.555 10:38:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=952936 00:30:04.555 10:38:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 952936 00:30:04.555 952936 00:30:04.555 10:38:49 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:30:04.555 10:38:49 nvmf_tcp.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:30:04.555 10:38:49 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 25f5925b-9b22-4bf6-b31d-0808f661b2d1 lbd_0 20480 00:30:04.815 10:38:49 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=c16bf022-0221-4635-aaa8-a2112372b18f 00:30:04.815 10:38:49 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore c16bf022-0221-4635-aaa8-a2112372b18f lvs_n_0 00:30:05.384 10:38:50 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=62c2c273-8883-4ced-8bd5-d15d11a07ff9 00:30:05.384 10:38:50 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 62c2c273-8883-4ced-8bd5-d15d11a07ff9 00:30:05.384 10:38:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=62c2c273-8883-4ced-8bd5-d15d11a07ff9 00:30:05.384 10:38:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:30:05.384 10:38:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:30:05.384 10:38:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:30:05.384 10:38:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:05.643 10:38:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:30:05.643 { 00:30:05.643 "uuid": "25f5925b-9b22-4bf6-b31d-0808f661b2d1", 00:30:05.643 "name": "lvs_0", 00:30:05.643 "base_bdev": "Nvme0n1", 00:30:05.643 "total_data_clusters": 238234, 00:30:05.643 "free_clusters": 233114, 00:30:05.643 "block_size": 512, 00:30:05.643 "cluster_size": 4194304 00:30:05.643 }, 00:30:05.643 { 00:30:05.643 "uuid": "62c2c273-8883-4ced-8bd5-d15d11a07ff9", 00:30:05.643 "name": "lvs_n_0", 00:30:05.643 "base_bdev": "c16bf022-0221-4635-aaa8-a2112372b18f", 00:30:05.643 "total_data_clusters": 5114, 00:30:05.643 "free_clusters": 5114, 00:30:05.643 "block_size": 512, 00:30:05.643 "cluster_size": 4194304 00:30:05.643 } 00:30:05.643 ]' 00:30:05.643 10:38:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="62c2c273-8883-4ced-8bd5-d15d11a07ff9") .free_clusters' 00:30:05.643 10:38:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=5114 00:30:05.643 10:38:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="62c2c273-8883-4ced-8bd5-d15d11a07ff9") .cluster_size' 00:30:05.643 10:38:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:30:05.643 10:38:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=20456 00:30:05.644 10:38:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 20456 00:30:05.644 20456 00:30:05.644 10:38:50 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:30:05.644 10:38:50 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 62c2c273-8883-4ced-8bd5-d15d11a07ff9 lbd_nest_0 20456 00:30:05.903 10:38:50 nvmf_tcp.nvmf_perf -- 
host/perf.sh@88 -- # lb_nested_guid=7fbb3365-a464-4576-8d98-d095478b89e2 00:30:05.903 10:38:50 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:06.163 10:38:50 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:30:06.164 10:38:50 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 7fbb3365-a464-4576-8d98-d095478b89e2 00:30:06.424 10:38:51 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:06.424 10:38:51 nvmf_tcp.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:30:06.424 10:38:51 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:30:06.424 10:38:51 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:06.424 10:38:51 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:06.424 10:38:51 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:06.424 EAL: No free 2048 kB hugepages reported on node 1 00:30:18.686 Initializing NVMe Controllers 00:30:18.686 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:18.686 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:18.686 Initialization complete. Launching workers. 00:30:18.686 ======================================================== 00:30:18.686 Latency(us) 00:30:18.686 Device Information : IOPS MiB/s Average min max 00:30:18.686 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 46.40 0.02 21621.33 136.03 45706.99 00:30:18.686 ======================================================== 00:30:18.686 Total : 46.40 0.02 21621.33 136.03 45706.99 00:30:18.686 00:30:18.686 10:39:01 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:18.686 10:39:01 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:18.686 EAL: No free 2048 kB hugepages reported on node 1 00:30:28.659 Initializing NVMe Controllers 00:30:28.659 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:28.659 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:28.659 Initialization complete. Launching workers. 
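The run that just finished (-q 1 -o 512) is the first cell of a small queue-depth x I/O-size sweep against the freshly exported nested lvol namespace; the qd_depth=("1" "32" "128") and io_size=("512" "131072") arrays and the nested for-loops are visible in the host/perf.sh trace above. Flattened into a standalone sketch (flags exactly as printed in the trace, loop form simplified; spdk_nvme_perf stands for the full build/bin path used in the log):

  for qd in 1 32 128; do
    for o in 512 131072; do
      spdk_nvme_perf -q "$qd" -o "$o" -w randrw -M 50 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
    done
  done

The five result tables that follow in the log are the remaining (qd, io_size) combinations of that sweep.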
00:30:28.659 ======================================================== 00:30:28.659 Latency(us) 00:30:28.659 Device Information : IOPS MiB/s Average min max 00:30:28.659 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 79.90 9.99 12515.93 3990.42 47884.53 00:30:28.659 ======================================================== 00:30:28.659 Total : 79.90 9.99 12515.93 3990.42 47884.53 00:30:28.659 00:30:28.659 10:39:11 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:28.659 10:39:11 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:28.659 10:39:11 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:28.659 EAL: No free 2048 kB hugepages reported on node 1 00:30:38.635 Initializing NVMe Controllers 00:30:38.635 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:38.635 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:38.635 Initialization complete. Launching workers. 00:30:38.635 ======================================================== 00:30:38.635 Latency(us) 00:30:38.635 Device Information : IOPS MiB/s Average min max 00:30:38.635 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8518.59 4.16 3756.54 224.42 10545.06 00:30:38.635 ======================================================== 00:30:38.635 Total : 8518.59 4.16 3756.54 224.42 10545.06 00:30:38.635 00:30:38.635 10:39:22 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:38.635 10:39:22 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:38.635 EAL: No free 2048 kB hugepages reported on node 1 00:30:48.613 Initializing NVMe Controllers 00:30:48.613 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:48.613 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:48.613 Initialization complete. Launching workers. 00:30:48.613 ======================================================== 00:30:48.613 Latency(us) 00:30:48.613 Device Information : IOPS MiB/s Average min max 00:30:48.613 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3980.49 497.56 8039.54 778.16 20681.00 00:30:48.613 ======================================================== 00:30:48.613 Total : 3980.49 497.56 8039.54 778.16 20681.00 00:30:48.613 00:30:48.613 10:39:32 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:48.613 10:39:32 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:48.613 10:39:32 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:48.613 EAL: No free 2048 kB hugepages reported on node 1 00:30:58.588 Initializing NVMe Controllers 00:30:58.588 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:58.588 Controller IO queue size 128, less than required. 00:30:58.588 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:30:58.588 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:58.588 Initialization complete. Launching workers. 00:30:58.588 ======================================================== 00:30:58.588 Latency(us) 00:30:58.588 Device Information : IOPS MiB/s Average min max 00:30:58.588 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15850.97 7.74 8078.02 1420.79 22598.22 00:30:58.588 ======================================================== 00:30:58.588 Total : 15850.97 7.74 8078.02 1420.79 22598.22 00:30:58.588 00:30:58.588 10:39:42 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:58.588 10:39:42 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:58.588 EAL: No free 2048 kB hugepages reported on node 1 00:31:08.604 Initializing NVMe Controllers 00:31:08.604 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:08.604 Controller IO queue size 128, less than required. 00:31:08.604 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:08.604 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:08.604 Initialization complete. Launching workers. 00:31:08.604 ======================================================== 00:31:08.604 Latency(us) 00:31:08.604 Device Information : IOPS MiB/s Average min max 00:31:08.604 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1202.40 150.30 107022.54 23421.07 207308.21 00:31:08.604 ======================================================== 00:31:08.604 Total : 1202.40 150.30 107022.54 23421.07 207308.21 00:31:08.604 00:31:08.604 10:39:53 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:08.604 10:39:53 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7fbb3365-a464-4576-8d98-d095478b89e2 00:31:09.171 10:39:54 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:31:09.429 10:39:54 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c16bf022-0221-4635-aaa8-a2112372b18f 00:31:09.686 10:39:54 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:31:09.945 10:39:54 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:31:09.945 10:39:54 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:31:09.945 10:39:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:09.945 10:39:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:31:09.945 10:39:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:09.945 10:39:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:31:09.945 10:39:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:09.945 10:39:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:09.945 rmmod nvme_tcp 00:31:09.945 rmmod nvme_fabrics 00:31:09.945 rmmod nvme_keyring 00:31:09.945 10:39:54 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:09.945 10:39:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:31:09.945 10:39:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:31:09.945 10:39:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 2541170 ']' 00:31:09.945 10:39:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 2541170 00:31:09.945 10:39:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 2541170 ']' 00:31:09.945 10:39:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 2541170 00:31:09.945 10:39:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:31:09.945 10:39:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:09.945 10:39:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2541170 00:31:09.945 10:39:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:09.945 10:39:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:09.945 10:39:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2541170' 00:31:09.945 killing process with pid 2541170 00:31:09.945 10:39:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 2541170 00:31:09.945 10:39:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 2541170 00:31:11.324 10:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:11.324 10:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:11.324 10:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:11.324 10:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:11.324 10:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:11.324 10:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:11.324 10:39:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:11.324 10:39:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:13.858 10:39:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:13.858 00:31:13.858 real 1m32.510s 00:31:13.858 user 5m31.090s 00:31:13.858 sys 0m15.834s 00:31:13.858 10:39:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:13.858 10:39:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:13.858 ************************************ 00:31:13.858 END TEST nvmf_perf 00:31:13.858 ************************************ 00:31:13.858 10:39:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:31:13.858 10:39:58 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:13.858 10:39:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:13.858 10:39:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:13.858 10:39:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:13.858 ************************************ 00:31:13.858 START TEST nvmf_fio_host 00:31:13.858 ************************************ 00:31:13.858 10:39:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:13.858 * Looking for test 
storage... 00:31:13.858 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:13.858 10:39:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:13.858 10:39:58 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:13.858 10:39:58 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:13.858 10:39:58 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:13.858 10:39:58 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:13.858 10:39:58 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:13.858 10:39:58 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:13.858 10:39:58 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:13.858 10:39:58 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:13.858 10:39:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:13.859 10:39:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:31:13.859 10:39:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:13.859 10:39:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:13.859 10:39:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:31:13.859 10:39:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:13.859 10:39:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:13.859 10:39:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:13.859 10:39:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:13.859 10:39:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:13.859 10:39:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:13.859 10:39:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:13.859 10:39:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:13.859 10:39:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:13.859 10:39:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:13.859 10:39:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:13.859 10:39:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:13.859 10:39:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:13.859 10:39:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:13.859 10:39:58 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:13.859 10:39:58 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:13.859 10:39:58 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:13.859 10:39:58 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:13.859 10:39:58 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:13.859 10:39:58 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:13.859 10:39:58 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:13.859 10:39:58 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:13.859 10:39:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:31:13.859 10:39:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:13.859 10:39:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:13.859 10:39:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:13.859 10:39:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:13.859 10:39:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:13.859 10:39:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:13.859 10:39:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:13.859 10:39:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:13.859 10:39:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:13.859 10:39:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:31:13.859 10:39:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:13.859 10:39:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:13.859 10:39:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:13.859 10:39:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:13.859 10:39:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:13.859 10:39:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:13.859 10:39:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:13.859 10:39:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:13.859 10:39:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:13.859 10:39:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:13.859 10:39:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:31:13.859 10:39:58 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:19.136 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
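(Explanatory aside, not part of the captured trace.) The gather_supported_nvmf_pci_devs entries above build lists of Intel E810/X722 and Mellanox device IDs and then walk the PCI bus looking for NICs the test can use, echoing the "Found <bdf> (<vendor> - <device>)" lines seen here. The real nvmf/common.sh does this through a cached vendor:device lookup table; a much-simplified sketch of the same idea, with the vendor and one E810 device ID hard-coded purely for illustration, might look like:

    intel=0x8086
    for pci in /sys/bus/pci/devices/*; do
      ven=$(cat "$pci/vendor" 2>/dev/null)
      dev=$(cat "$pci/device" 2>/dev/null)
      # 0x159b is one of the E810 device IDs matched in the trace above
      if [[ "$ven" == "$intel" && "$dev" == "0x159b" ]]; then
        echo "Found ${pci##*/} ($ven - $dev)"
      fi
    done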
00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:19.136 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:19.136 Found net devices under 0000:86:00.0: cvl_0_0 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:19.136 Found net devices under 0000:86:00.1: cvl_0_1 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
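(Explanatory aside, not part of the captured trace.) With is_hw=yes on a phy run, nvmf_tcp_init next builds a point-to-point TCP test bed: one port of the dual-port E810 NIC (cvl_0_0) is moved into a private network namespace to act as the target side, the other port (cvl_0_1) stays in the root namespace as the initiator, an iptables rule opens the NVMe/TCP port, and a ping in each direction verifies connectivity before nvmf_tgt is started inside the namespace. Condensed from the entries that follow (interface, namespace names and addresses as they appear in this run):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target-side port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                      # initiator -> target reachability check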
00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:19.136 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:19.395 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:19.395 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:19.395 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:19.395 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:19.395 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:19.395 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:19.395 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:19.395 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:19.395 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:31:19.395 00:31:19.395 --- 10.0.0.2 ping statistics --- 00:31:19.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:19.395 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:31:19.395 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:19.395 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:19.395 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:31:19.395 00:31:19.395 --- 10.0.0.1 ping statistics --- 00:31:19.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:19.395 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:31:19.395 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:19.395 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:31:19.395 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:19.395 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:19.395 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:19.395 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:19.395 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:19.395 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:19.395 10:40:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:19.395 10:40:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:31:19.395 10:40:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:31:19.395 10:40:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:19.395 10:40:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.395 10:40:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2558654 00:31:19.395 10:40:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:19.395 10:40:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:19.395 10:40:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2558654 00:31:19.395 10:40:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 2558654 ']' 00:31:19.395 10:40:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:19.395 10:40:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:19.395 10:40:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:19.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:19.395 10:40:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:19.395 10:40:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.395 [2024-07-14 10:40:04.361995] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:31:19.395 [2024-07-14 10:40:04.362040] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:19.654 EAL: No free 2048 kB hugepages reported on node 1 00:31:19.654 [2024-07-14 10:40:04.431047] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:19.654 [2024-07-14 10:40:04.472596] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:31:19.654 [2024-07-14 10:40:04.472633] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:19.654 [2024-07-14 10:40:04.472640] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:19.654 [2024-07-14 10:40:04.472647] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:19.654 [2024-07-14 10:40:04.472652] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:19.654 [2024-07-14 10:40:04.472695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:19.654 [2024-07-14 10:40:04.472804] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:19.654 [2024-07-14 10:40:04.472909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:19.654 [2024-07-14 10:40:04.472909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:31:20.220 10:40:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:20.220 10:40:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:31:20.220 10:40:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:20.478 [2024-07-14 10:40:05.324459] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:20.478 10:40:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:31:20.478 10:40:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:20.478 10:40:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.478 10:40:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:31:20.737 Malloc1 00:31:20.737 10:40:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:20.997 10:40:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:20.997 10:40:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:21.256 [2024-07-14 10:40:06.106738] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:21.256 10:40:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:21.516 10:40:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:31:21.516 10:40:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:21.516 10:40:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:31:21.516 10:40:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:21.516 10:40:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:21.516 10:40:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:21.516 10:40:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:21.516 10:40:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:31:21.516 10:40:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:21.516 10:40:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:21.516 10:40:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:31:21.516 10:40:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:21.516 10:40:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:21.516 10:40:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:21.516 10:40:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:21.516 10:40:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:21.516 10:40:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:21.516 10:40:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:21.516 10:40:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:21.516 10:40:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:21.516 10:40:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:21.516 10:40:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:21.516 10:40:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:21.775 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:21.775 fio-3.35 00:31:21.775 Starting 1 thread 00:31:21.775 EAL: No free 2048 kB hugepages reported on node 1 00:31:24.307 00:31:24.307 test: (groupid=0, jobs=1): err= 0: pid=2559142: Sun Jul 14 10:40:08 2024 00:31:24.307 read: IOPS=11.5k, BW=45.0MiB/s (47.2MB/s)(92.2MiB/2046msec) 00:31:24.307 slat (nsec): min=1608, max=249635, avg=1763.17, stdev=2242.91 00:31:24.307 clat (usec): min=2829, max=51027, avg=6095.09, stdev=2018.24 00:31:24.307 lat (usec): min=2861, max=51029, avg=6096.85, stdev=2018.23 00:31:24.307 clat percentiles (usec): 00:31:24.307 | 1.00th=[ 4883], 5.00th=[ 5276], 10.00th=[ 5407], 20.00th=[ 5669], 00:31:24.307 | 30.00th=[ 5800], 40.00th=[ 5932], 50.00th=[ 5997], 60.00th=[ 6128], 00:31:24.307 | 70.00th=[ 6259], 80.00th=[ 6390], 90.00th=[ 6521], 95.00th=[ 6718], 00:31:24.307 | 99.00th=[ 6980], 99.50th=[ 7177], 99.90th=[47973], 99.95th=[49546], 00:31:24.307 | 99.99th=[50594] 00:31:24.307 bw ( KiB/s): 
min=46016, max=47824, per=100.00%, avg=47086.00, stdev=766.95, samples=4 00:31:24.307 iops : min=11504, max=11956, avg=11771.50, stdev=191.74, samples=4 00:31:24.307 write: IOPS=11.5k, BW=44.8MiB/s (47.0MB/s)(91.7MiB/2046msec); 0 zone resets 00:31:24.307 slat (nsec): min=1662, max=227320, avg=1840.14, stdev=1655.10 00:31:24.307 clat (usec): min=2450, max=50455, avg=4984.85, stdev=2447.57 00:31:24.307 lat (usec): min=2465, max=50457, avg=4986.69, stdev=2447.57 00:31:24.307 clat percentiles (usec): 00:31:24.307 | 1.00th=[ 3949], 5.00th=[ 4293], 10.00th=[ 4424], 20.00th=[ 4555], 00:31:24.307 | 30.00th=[ 4686], 40.00th=[ 4752], 50.00th=[ 4817], 60.00th=[ 4948], 00:31:24.307 | 70.00th=[ 5014], 80.00th=[ 5145], 90.00th=[ 5276], 95.00th=[ 5407], 00:31:24.307 | 99.00th=[ 5735], 99.50th=[ 5932], 99.90th=[48497], 99.95th=[49546], 00:31:24.307 | 99.99th=[50594] 00:31:24.307 bw ( KiB/s): min=46536, max=47360, per=100.00%, avg=46796.00, stdev=384.47, samples=4 00:31:24.307 iops : min=11634, max=11840, avg=11699.00, stdev=96.12, samples=4 00:31:24.307 lat (msec) : 4=0.58%, 10=99.15%, 50=0.24%, 100=0.03% 00:31:24.307 cpu : usr=72.08%, sys=26.45%, ctx=93, majf=0, minf=6 00:31:24.307 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:31:24.307 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:24.307 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:24.307 issued rwts: total=23595,23473,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:24.307 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:24.307 00:31:24.307 Run status group 0 (all jobs): 00:31:24.307 READ: bw=45.0MiB/s (47.2MB/s), 45.0MiB/s-45.0MiB/s (47.2MB/s-47.2MB/s), io=92.2MiB (96.6MB), run=2046-2046msec 00:31:24.307 WRITE: bw=44.8MiB/s (47.0MB/s), 44.8MiB/s-44.8MiB/s (47.0MB/s-47.0MB/s), io=91.7MiB (96.1MB), run=2046-2046msec 00:31:24.307 10:40:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:24.307 10:40:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:24.307 10:40:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:24.307 10:40:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:24.307 10:40:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:24.307 10:40:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:24.307 10:40:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:31:24.307 10:40:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:24.307 10:40:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:24.307 10:40:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:24.307 10:40:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:31:24.307 10:40:08 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:24.307 10:40:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:24.307 10:40:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:24.307 10:40:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:24.307 10:40:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:24.307 10:40:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:24.307 10:40:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:24.307 10:40:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:24.307 10:40:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:24.307 10:40:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:24.308 10:40:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:24.308 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:31:24.308 fio-3.35 00:31:24.308 Starting 1 thread 00:31:24.566 EAL: No free 2048 kB hugepages reported on node 1 00:31:27.096 00:31:27.096 test: (groupid=0, jobs=1): err= 0: pid=2559712: Sun Jul 14 10:40:11 2024 00:31:27.096 read: IOPS=10.6k, BW=166MiB/s (174MB/s)(333MiB/2004msec) 00:31:27.096 slat (nsec): min=2550, max=90473, avg=2854.94, stdev=1334.75 00:31:27.096 clat (usec): min=2445, max=50421, avg=7185.98, stdev=3475.66 00:31:27.096 lat (usec): min=2448, max=50423, avg=7188.84, stdev=3475.71 00:31:27.096 clat percentiles (usec): 00:31:27.096 | 1.00th=[ 3752], 5.00th=[ 4359], 10.00th=[ 4817], 20.00th=[ 5473], 00:31:27.096 | 30.00th=[ 5997], 40.00th=[ 6456], 50.00th=[ 6915], 60.00th=[ 7373], 00:31:27.096 | 70.00th=[ 7832], 80.00th=[ 8356], 90.00th=[ 9110], 95.00th=[ 9896], 00:31:27.096 | 99.00th=[11863], 99.50th=[43779], 99.90th=[49021], 99.95th=[49546], 00:31:27.096 | 99.99th=[50594] 00:31:27.096 bw ( KiB/s): min=74880, max=95872, per=50.18%, avg=85296.00, stdev=9171.65, samples=4 00:31:27.096 iops : min= 4680, max= 5992, avg=5331.00, stdev=573.23, samples=4 00:31:27.096 write: IOPS=6356, BW=99.3MiB/s (104MB/s)(174MiB/1755msec); 0 zone resets 00:31:27.096 slat (usec): min=29, max=380, avg=31.94, stdev= 7.10 00:31:27.096 clat (usec): min=4027, max=15822, avg=8592.79, stdev=1543.32 00:31:27.096 lat (usec): min=4059, max=15854, avg=8624.73, stdev=1544.58 00:31:27.096 clat percentiles (usec): 00:31:27.096 | 1.00th=[ 5669], 5.00th=[ 6325], 10.00th=[ 6718], 20.00th=[ 7308], 00:31:27.096 | 30.00th=[ 7701], 40.00th=[ 8029], 50.00th=[ 8455], 60.00th=[ 8848], 00:31:27.096 | 70.00th=[ 9241], 80.00th=[ 9896], 90.00th=[10683], 95.00th=[11338], 00:31:27.096 | 99.00th=[12649], 99.50th=[13435], 99.90th=[15008], 99.95th=[15533], 00:31:27.096 | 99.99th=[15795] 00:31:27.096 bw ( KiB/s): min=77984, max=99712, per=87.25%, avg=88728.00, stdev=9586.10, samples=4 00:31:27.096 iops : min= 4874, max= 6232, avg=5545.50, stdev=599.13, samples=4 00:31:27.096 lat (msec) : 4=1.50%, 10=89.25%, 20=8.86%, 50=0.38%, 100=0.02% 00:31:27.096 cpu : usr=86.02%, sys=12.58%, 
ctx=87, majf=0, minf=3 00:31:27.096 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:31:27.096 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:27.096 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:27.096 issued rwts: total=21290,11155,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:27.096 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:27.096 00:31:27.096 Run status group 0 (all jobs): 00:31:27.096 READ: bw=166MiB/s (174MB/s), 166MiB/s-166MiB/s (174MB/s-174MB/s), io=333MiB (349MB), run=2004-2004msec 00:31:27.096 WRITE: bw=99.3MiB/s (104MB/s), 99.3MiB/s-99.3MiB/s (104MB/s-104MB/s), io=174MiB (183MB), run=1755-1755msec 00:31:27.096 10:40:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:27.096 10:40:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:31:27.096 10:40:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:31:27.096 10:40:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:31:27.096 10:40:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # bdfs=() 00:31:27.096 10:40:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # local bdfs 00:31:27.096 10:40:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:27.096 10:40:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:27.096 10:40:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:31:27.096 10:40:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:31:27.096 10:40:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:31:27.096 10:40:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 -i 10.0.0.2 00:31:30.383 Nvme0n1 00:31:30.383 10:40:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:31:32.947 10:40:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=75381e9e-d50b-4523-aeb2-0bbf53d6412f 00:31:32.947 10:40:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 75381e9e-d50b-4523-aeb2-0bbf53d6412f 00:31:32.947 10:40:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=75381e9e-d50b-4523-aeb2-0bbf53d6412f 00:31:32.947 10:40:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:31:32.947 10:40:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:31:32.947 10:40:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:31:32.947 10:40:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:32.947 10:40:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:31:32.947 { 00:31:32.947 "uuid": "75381e9e-d50b-4523-aeb2-0bbf53d6412f", 00:31:32.947 "name": "lvs_0", 00:31:32.947 "base_bdev": "Nvme0n1", 00:31:32.947 "total_data_clusters": 930, 00:31:32.947 "free_clusters": 930, 
00:31:32.947 "block_size": 512, 00:31:32.947 "cluster_size": 1073741824 00:31:32.947 } 00:31:32.947 ]' 00:31:32.947 10:40:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="75381e9e-d50b-4523-aeb2-0bbf53d6412f") .free_clusters' 00:31:32.947 10:40:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=930 00:31:32.947 10:40:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="75381e9e-d50b-4523-aeb2-0bbf53d6412f") .cluster_size' 00:31:32.947 10:40:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:31:32.947 10:40:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=952320 00:31:32.947 10:40:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 952320 00:31:32.947 952320 00:31:32.947 10:40:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:31:33.516 a6d126b4-df90-48f7-b2c2-ca378d984cc2 00:31:33.516 10:40:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:31:33.516 10:40:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:31:33.775 10:40:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:34.035 10:40:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:34.035 10:40:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:34.035 10:40:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:34.035 10:40:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:34.035 10:40:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:34.035 10:40:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:34.035 10:40:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:31:34.035 10:40:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:34.035 10:40:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:34.035 10:40:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:34.035 10:40:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:31:34.035 10:40:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:34.035 10:40:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:34.035 10:40:18 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:34.035 10:40:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:34.035 10:40:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:34.035 10:40:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:34.035 10:40:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:34.035 10:40:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:34.035 10:40:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:34.035 10:40:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:34.035 10:40:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:34.294 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:34.294 fio-3.35 00:31:34.294 Starting 1 thread 00:31:34.294 EAL: No free 2048 kB hugepages reported on node 1 00:31:36.831 [2024-07-14 10:40:21.427160] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c3b0 is same with the state(5) to be set 00:31:36.831 00:31:36.831 test: (groupid=0, jobs=1): err= 0: pid=2561451: Sun Jul 14 10:40:21 2024 00:31:36.831 read: IOPS=8038, BW=31.4MiB/s (32.9MB/s)(63.0MiB/2007msec) 00:31:36.831 slat (nsec): min=1569, max=92487, avg=1684.28, stdev=1029.00 00:31:36.831 clat (usec): min=447, max=169991, avg=8774.13, stdev=10276.29 00:31:36.831 lat (usec): min=449, max=170010, avg=8775.81, stdev=10276.44 00:31:36.831 clat percentiles (msec): 00:31:36.831 | 1.00th=[ 7], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 8], 00:31:36.831 | 30.00th=[ 8], 40.00th=[ 8], 50.00th=[ 9], 60.00th=[ 9], 00:31:36.831 | 70.00th=[ 9], 80.00th=[ 9], 90.00th=[ 9], 95.00th=[ 10], 00:31:36.831 | 99.00th=[ 12], 99.50th=[ 13], 99.90th=[ 169], 99.95th=[ 169], 00:31:36.831 | 99.99th=[ 171] 00:31:36.831 bw ( KiB/s): min=23104, max=35328, per=99.99%, avg=32152.00, stdev=6034.62, samples=4 00:31:36.832 iops : min= 5776, max= 8832, avg=8038.00, stdev=1508.66, samples=4 00:31:36.832 write: IOPS=8017, BW=31.3MiB/s (32.8MB/s)(62.9MiB/2007msec); 0 zone resets 00:31:36.832 slat (nsec): min=1630, max=86578, avg=1768.29, stdev=752.81 00:31:36.832 clat (usec): min=308, max=168545, avg=7088.05, stdev=9613.67 00:31:36.832 lat (usec): min=309, max=168550, avg=7089.82, stdev=9613.86 00:31:36.832 clat percentiles (msec): 00:31:36.832 | 1.00th=[ 6], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 7], 00:31:36.832 | 30.00th=[ 7], 40.00th=[ 7], 50.00th=[ 7], 60.00th=[ 7], 00:31:36.832 | 70.00th=[ 7], 80.00th=[ 7], 90.00th=[ 8], 95.00th=[ 8], 00:31:36.832 | 99.00th=[ 10], 99.50th=[ 11], 99.90th=[ 169], 99.95th=[ 169], 00:31:36.832 | 99.99th=[ 169] 00:31:36.832 bw ( KiB/s): min=24104, max=34960, per=99.92%, avg=32046.00, stdev=5302.14, samples=4 00:31:36.832 iops : min= 6026, max= 8740, avg=8011.50, stdev=1325.54, samples=4 00:31:36.832 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:31:36.832 lat (msec) : 2=0.05%, 4=0.21%, 10=98.58%, 20=0.74%, 250=0.40% 00:31:36.832 cpu : usr=72.03%, sys=26.72%, ctx=87, majf=0, minf=6 00:31:36.832 IO depths : 
1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:31:36.832 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.832 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:36.832 issued rwts: total=16134,16092,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:36.832 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:36.832 00:31:36.832 Run status group 0 (all jobs): 00:31:36.832 READ: bw=31.4MiB/s (32.9MB/s), 31.4MiB/s-31.4MiB/s (32.9MB/s-32.9MB/s), io=63.0MiB (66.1MB), run=2007-2007msec 00:31:36.832 WRITE: bw=31.3MiB/s (32.8MB/s), 31.3MiB/s-31.3MiB/s (32.8MB/s-32.8MB/s), io=62.9MiB (65.9MB), run=2007-2007msec 00:31:36.832 10:40:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:36.832 10:40:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:31:37.768 10:40:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=a157e533-686b-402d-8535-150b7bd0de29 00:31:37.768 10:40:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb a157e533-686b-402d-8535-150b7bd0de29 00:31:37.768 10:40:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=a157e533-686b-402d-8535-150b7bd0de29 00:31:37.768 10:40:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:31:37.768 10:40:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:31:37.768 10:40:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:31:37.768 10:40:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:38.030 10:40:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:31:38.030 { 00:31:38.030 "uuid": "75381e9e-d50b-4523-aeb2-0bbf53d6412f", 00:31:38.030 "name": "lvs_0", 00:31:38.030 "base_bdev": "Nvme0n1", 00:31:38.030 "total_data_clusters": 930, 00:31:38.030 "free_clusters": 0, 00:31:38.030 "block_size": 512, 00:31:38.030 "cluster_size": 1073741824 00:31:38.030 }, 00:31:38.030 { 00:31:38.030 "uuid": "a157e533-686b-402d-8535-150b7bd0de29", 00:31:38.030 "name": "lvs_n_0", 00:31:38.030 "base_bdev": "a6d126b4-df90-48f7-b2c2-ca378d984cc2", 00:31:38.030 "total_data_clusters": 237847, 00:31:38.030 "free_clusters": 237847, 00:31:38.030 "block_size": 512, 00:31:38.030 "cluster_size": 4194304 00:31:38.030 } 00:31:38.030 ]' 00:31:38.030 10:40:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="a157e533-686b-402d-8535-150b7bd0de29") .free_clusters' 00:31:38.030 10:40:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=237847 00:31:38.030 10:40:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="a157e533-686b-402d-8535-150b7bd0de29") .cluster_size' 00:31:38.030 10:40:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:31:38.030 10:40:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=951388 00:31:38.030 10:40:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 951388 00:31:38.030 951388 00:31:38.030 10:40:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 
lbd_nest_0 951388 00:31:38.598 26e6ec84-d310-4087-85f2-3c22c0c61780 00:31:38.598 10:40:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:31:38.872 10:40:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:31:39.130 10:40:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:31:39.130 10:40:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:39.130 10:40:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:39.130 10:40:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:39.130 10:40:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:39.130 10:40:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:39.130 10:40:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:39.130 10:40:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:31:39.130 10:40:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:39.130 10:40:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:39.130 10:40:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:39.130 10:40:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:31:39.130 10:40:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:39.130 10:40:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:39.130 10:40:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:39.130 10:40:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:39.130 10:40:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:39.130 10:40:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:39.130 10:40:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:39.130 10:40:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:39.130 10:40:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:39.130 10:40:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:39.130 10:40:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # 
/usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:39.389 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:39.389 fio-3.35 00:31:39.389 Starting 1 thread 00:31:39.648 EAL: No free 2048 kB hugepages reported on node 1 00:31:42.182 00:31:42.182 test: (groupid=0, jobs=1): err= 0: pid=2562494: Sun Jul 14 10:40:26 2024 00:31:42.182 read: IOPS=7711, BW=30.1MiB/s (31.6MB/s)(60.5MiB/2007msec) 00:31:42.182 slat (nsec): min=1607, max=90841, avg=1715.04, stdev=1057.89 00:31:42.182 clat (usec): min=3116, max=15077, avg=9151.75, stdev=781.60 00:31:42.182 lat (usec): min=3119, max=15078, avg=9153.46, stdev=781.54 00:31:42.182 clat percentiles (usec): 00:31:42.182 | 1.00th=[ 7308], 5.00th=[ 7898], 10.00th=[ 8225], 20.00th=[ 8586], 00:31:42.182 | 30.00th=[ 8717], 40.00th=[ 8979], 50.00th=[ 9110], 60.00th=[ 9372], 00:31:42.182 | 70.00th=[ 9503], 80.00th=[ 9765], 90.00th=[10028], 95.00th=[10290], 00:31:42.182 | 99.00th=[10814], 99.50th=[10945], 99.90th=[12911], 99.95th=[14877], 00:31:42.182 | 99.99th=[15008] 00:31:42.182 bw ( KiB/s): min=29416, max=31544, per=99.92%, avg=30822.00, stdev=956.76, samples=4 00:31:42.182 iops : min= 7354, max= 7886, avg=7705.50, stdev=239.19, samples=4 00:31:42.182 write: IOPS=7704, BW=30.1MiB/s (31.6MB/s)(60.4MiB/2007msec); 0 zone resets 00:31:42.182 slat (nsec): min=1648, max=87167, avg=1776.49, stdev=777.08 00:31:42.182 clat (usec): min=1437, max=14053, avg=7330.74, stdev=662.26 00:31:42.182 lat (usec): min=1442, max=14055, avg=7332.52, stdev=662.24 00:31:42.182 clat percentiles (usec): 00:31:42.182 | 1.00th=[ 5800], 5.00th=[ 6325], 10.00th=[ 6587], 20.00th=[ 6849], 00:31:42.182 | 30.00th=[ 7046], 40.00th=[ 7177], 50.00th=[ 7308], 60.00th=[ 7504], 00:31:42.182 | 70.00th=[ 7635], 80.00th=[ 7832], 90.00th=[ 8094], 95.00th=[ 8291], 00:31:42.182 | 99.00th=[ 8717], 99.50th=[ 8979], 99.90th=[11076], 99.95th=[12780], 00:31:42.182 | 99.99th=[14091] 00:31:42.182 bw ( KiB/s): min=30672, max=31040, per=99.95%, avg=30804.00, stdev=163.89, samples=4 00:31:42.182 iops : min= 7668, max= 7760, avg=7701.00, stdev=40.97, samples=4 00:31:42.182 lat (msec) : 2=0.01%, 4=0.10%, 10=93.79%, 20=6.10% 00:31:42.182 cpu : usr=71.59%, sys=27.17%, ctx=114, majf=0, minf=6 00:31:42.182 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:31:42.182 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:42.182 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:42.182 issued rwts: total=15477,15463,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:42.182 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:42.182 00:31:42.182 Run status group 0 (all jobs): 00:31:42.182 READ: bw=30.1MiB/s (31.6MB/s), 30.1MiB/s-30.1MiB/s (31.6MB/s-31.6MB/s), io=60.5MiB (63.4MB), run=2007-2007msec 00:31:42.182 WRITE: bw=30.1MiB/s (31.6MB/s), 30.1MiB/s-30.1MiB/s (31.6MB/s-31.6MB/s), io=60.4MiB (63.3MB), run=2007-2007msec 00:31:42.182 10:40:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:31:42.182 10:40:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:31:42.182 10:40:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:31:46.374 10:40:30 
nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:31:46.374 10:40:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:31:48.909 10:40:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:31:49.168 10:40:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:31:51.070 10:40:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:31:51.070 10:40:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:31:51.070 10:40:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:31:51.070 10:40:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:51.070 10:40:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:31:51.070 10:40:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:51.070 10:40:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:31:51.070 10:40:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:51.071 10:40:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:51.071 rmmod nvme_tcp 00:31:51.071 rmmod nvme_fabrics 00:31:51.071 rmmod nvme_keyring 00:31:51.071 10:40:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:51.071 10:40:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:31:51.071 10:40:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:31:51.071 10:40:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 2558654 ']' 00:31:51.071 10:40:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 2558654 00:31:51.071 10:40:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 2558654 ']' 00:31:51.071 10:40:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 2558654 00:31:51.071 10:40:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:31:51.071 10:40:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:51.071 10:40:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2558654 00:31:51.071 10:40:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:51.071 10:40:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:51.071 10:40:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2558654' 00:31:51.071 killing process with pid 2558654 00:31:51.071 10:40:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 2558654 00:31:51.071 10:40:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 2558654 00:31:51.071 10:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:51.071 10:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:51.071 10:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:51.071 10:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:51.071 10:40:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:51.071 10:40:36 
nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:51.071 10:40:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:51.071 10:40:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:53.607 10:40:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:53.607 00:31:53.607 real 0m39.654s 00:31:53.607 user 2m39.074s 00:31:53.607 sys 0m8.641s 00:31:53.607 10:40:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:53.607 10:40:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.607 ************************************ 00:31:53.607 END TEST nvmf_fio_host 00:31:53.607 ************************************ 00:31:53.607 10:40:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:31:53.607 10:40:38 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:31:53.607 10:40:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:53.607 10:40:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:53.607 10:40:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:53.607 ************************************ 00:31:53.607 START TEST nvmf_failover 00:31:53.607 ************************************ 00:31:53.607 10:40:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:31:53.607 * Looking for test storage... 00:31:53.607 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:53.607 10:40:38 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:53.607 10:40:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:31:53.607 10:40:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:53.607 10:40:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:53.607 10:40:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:53.607 10:40:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:53.607 10:40:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:53.607 10:40:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:53.607 10:40:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:53.607 10:40:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:53.607 10:40:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:53.607 10:40:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:53.607 10:40:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:53.607 10:40:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:53.607 10:40:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:53.607 10:40:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:53.607 10:40:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:53.607 
10:40:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:53.607 10:40:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:53.607 10:40:38 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:53.607 10:40:38 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:53.607 10:40:38 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:53.607 10:40:38 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.607 10:40:38 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.607 10:40:38 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.607 10:40:38 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:31:53.607 10:40:38 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.607 10:40:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:31:53.607 10:40:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:53.607 10:40:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:53.608 10:40:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:53.608 10:40:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:53.608 10:40:38 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:53.608 10:40:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:53.608 10:40:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:53.608 10:40:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:53.608 10:40:38 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:53.608 10:40:38 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:53.608 10:40:38 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:53.608 10:40:38 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:53.608 10:40:38 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:31:53.608 10:40:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:53.608 10:40:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:53.608 10:40:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:53.608 10:40:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:53.608 10:40:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:53.608 10:40:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:53.608 10:40:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:53.608 10:40:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:53.608 10:40:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:53.608 10:40:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:53.608 10:40:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:31:53.608 10:40:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:58.942 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:58.942 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 
-- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:58.942 Found net devices under 0000:86:00.0: cvl_0_0 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:58.942 Found net devices under 0000:86:00.1: cvl_0_1 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # 
ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:58.942 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:59.201 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:59.201 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:59.201 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:59.201 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:59.201 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:31:59.201 00:31:59.201 --- 10.0.0.2 ping statistics --- 00:31:59.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:59.201 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:31:59.201 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:59.201 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:59.201 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:31:59.201 00:31:59.201 --- 10.0.0.1 ping statistics --- 00:31:59.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:59.201 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:31:59.201 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:59.201 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:31:59.201 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:59.201 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:59.201 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:59.201 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:59.201 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:59.201 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:59.201 10:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:59.201 10:40:44 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:31:59.201 10:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:59.201 10:40:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:59.201 10:40:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:59.201 10:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=2567618 00:31:59.201 10:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 2567618 00:31:59.201 10:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:31:59.201 10:40:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2567618 ']' 00:31:59.201 10:40:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:59.201 10:40:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:31:59.201 10:40:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:59.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:59.201 10:40:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:59.201 10:40:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:59.201 [2024-07-14 10:40:44.062824] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:31:59.201 [2024-07-14 10:40:44.062873] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:59.201 EAL: No free 2048 kB hugepages reported on node 1 00:31:59.201 [2024-07-14 10:40:44.120510] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:59.201 [2024-07-14 10:40:44.162689] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:59.201 [2024-07-14 10:40:44.162726] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:59.201 [2024-07-14 10:40:44.162732] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:59.201 [2024-07-14 10:40:44.162738] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:59.201 [2024-07-14 10:40:44.162743] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:59.201 [2024-07-14 10:40:44.162794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:59.201 [2024-07-14 10:40:44.162901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:59.201 [2024-07-14 10:40:44.162903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:31:59.459 10:40:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:59.459 10:40:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:31:59.459 10:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:59.459 10:40:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:59.460 10:40:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:59.460 10:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:59.460 10:40:44 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:59.718 [2024-07-14 10:40:44.453069] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:59.718 10:40:44 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:31:59.718 Malloc0 00:31:59.718 10:40:44 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:59.976 10:40:44 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 
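The trace above brings the target up inside the cvl_0_0_ns_spdk namespace and configures it over rpc.py: a TCP transport is created, a 64 MB malloc bdev with 512-byte blocks is allocated, and subsystem nqn.2016-06.io.spdk:cnode1 is created and given that bdev as a namespace. A condensed sketch of the same bring-up, using $SPDK as shorthand for the /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk checkout (the shorthand is ours, not the script's):

  # launch the target inside the test namespace (instance 0, tracepoint mask 0xFFFF, core mask 0xE)
  ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &

  # create the TCP transport with the options the test passes (-o, -u 8192)
  $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192

  # 64 MB RAM-backed bdev with 512-byte blocks (MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE)
  $SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0

  # subsystem that allows any host (-a), then attach the malloc bdev as its namespace
  $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0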
00:32:00.234 10:40:45 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:00.234 [2024-07-14 10:40:45.197930] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:00.493 10:40:45 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:00.493 [2024-07-14 10:40:45.370406] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:00.493 10:40:45 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:00.752 [2024-07-14 10:40:45.538951] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:32:00.753 10:40:45 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:32:00.753 10:40:45 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2567875 00:32:00.753 10:40:45 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:00.753 10:40:45 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2567875 /var/tmp/bdevperf.sock 00:32:00.753 10:40:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2567875 ']' 00:32:00.753 10:40:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:00.753 10:40:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:00.753 10:40:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:00.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
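The subsystem is then exposed on three TCP listeners (ports 4420, 4421 and 4422 on 10.0.0.2), giving the host side alternate portals to fail over between, and bdevperf is started idle (-z) on its own RPC socket so the test can drive it through /var/tmp/bdevperf.sock. A sketch of that step, with the same $SPDK shorthand as above:

  # one listener per port; each is an independent path for the failover test
  for port in 4420 4421 4422; do
      $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
          -t tcp -a 10.0.0.2 -s "$port"
  done

  # bdevperf waits for RPC configuration (-z): 128-deep queue, 4 KiB verify I/O for 15 seconds
  $SPDK/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &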
00:32:00.753 10:40:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:00.753 10:40:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:01.011 10:40:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:01.011 10:40:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:32:01.011 10:40:45 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:01.271 NVMe0n1 00:32:01.271 10:40:46 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:01.530 00:32:01.530 10:40:46 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2568098 00:32:01.530 10:40:46 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:01.530 10:40:46 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:32:02.468 10:40:47 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:02.727 [2024-07-14 10:40:47.514177] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1510270 is same with the state(5) to be set 00:32:02.727 10:40:47 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:32:06.016 10:40:50 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:06.016 00:32:06.016 10:40:50 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:06.276 [2024-07-14 10:40:51.039914] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1511630 is same with the state(5) to be set 00:32:06.276 [2024-07-14 10:40:51.039955] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1511630 is same with the state(5) to be set 00:32:06.276 [2024-07-14 10:40:51.039962] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1511630 is same with the state(5) to be set 00:32:06.276 [2024-07-14 10:40:51.039969] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1511630 is same with the state(5) to be set 00:32:06.276 [2024-07-14 10:40:51.039976] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1511630 is same with the state(5) to be set 00:32:06.276 [2024-07-14 10:40:51.039982] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1511630 is same with the state(5) to be set 00:32:06.276 [2024-07-14 10:40:51.039988] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1511630 is same with the state(5) to be set 00:32:06.276 [2024-07-14 10:40:51.039994] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1511630 is same with the
state(5) to be set 00:32:06.276 [2024-07-14 10:40:51.040275] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1511630 is same with the state(5) to be set 00:32:06.276 [2024-07-14 10:40:51.040281] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1511630 is same with the state(5) to be set 00:32:06.276 [2024-07-14 10:40:51.040286] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1511630 is same with the state(5) to be set 00:32:06.276 10:40:51 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:32:09.566 10:40:54 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:09.566 [2024-07-14 10:40:54.238191] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:09.566 10:40:54 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:32:10.502 10:40:55 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:10.502 10:40:55 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 2568098 00:32:17.080 0 00:32:17.080 10:41:01 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 2567875 00:32:17.080 10:41:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2567875 ']' 00:32:17.080 10:41:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 2567875 00:32:17.080 10:41:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:32:17.080 10:41:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:17.080 10:41:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2567875 00:32:17.080 10:41:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:17.080 10:41:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:17.080 10:41:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2567875' 00:32:17.080 killing process with pid 2567875 00:32:17.080 10:41:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2567875 00:32:17.080 10:41:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2567875 00:32:17.080 10:41:01 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:17.080 [2024-07-14 10:40:45.598411] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:32:17.080 [2024-07-14 10:40:45.598461] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2567875 ] 00:32:17.080 EAL: No free 2048 kB hugepages reported on node 1 00:32:17.080 [2024-07-14 10:40:45.665516] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:17.080 [2024-07-14 10:40:45.706000] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:17.080 Running I/O for 15 seconds... 
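With the verify workload about to start, failover.sh registers a second portal under the same controller name (-b NVMe0) inside bdevperf, giving the bdev an alternate path to fail over to, launches perform_tests, and then removes and re-adds listeners while I/O is in flight. The nvme_qpair lines that follow are bdevperf's own log of outstanding commands on a dropped path completing with ABORTED - SQ DELETION as the target tears down those queues; the workload is expected to keep running on the surviving path. A condensed sketch of the driving sequence (same $SPDK shorthand; timings as in the script):

  # two paths to the same subsystem, both registered under the bdev name NVMe0
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

  # start the 15-second verify run defined on the bdevperf command line
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &

  # rotate the listeners so the active path disappears while I/O is running
  $SPDK/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sleep 3
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $SPDK/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  sleep 3
  # the script then re-adds 4420, removes 4422, and waits for perform_tests to finish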
00:32:17.080 [2024-07-14 10:40:47.514441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:95936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.080 [2024-07-14 10:40:47.514478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.080 [2024-07-14 10:40:47.514495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:95944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.080 [2024-07-14 10:40:47.514503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.080 [2024-07-14 10:40:47.514513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:95952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.080 [2024-07-14 10:40:47.514521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.080 [2024-07-14 10:40:47.514529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:95960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.080 [2024-07-14 10:40:47.514536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.080 [2024-07-14 10:40:47.514544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:95968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.080 [2024-07-14 10:40:47.514551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.080 [2024-07-14 10:40:47.514560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:95976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.080 [2024-07-14 10:40:47.514566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.080 [2024-07-14 10:40:47.514575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:95984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.080 [2024-07-14 10:40:47.514582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.080 [2024-07-14 10:40:47.514591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:95992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.080 [2024-07-14 10:40:47.514597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.080 [2024-07-14 10:40:47.514606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:96000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.080 [2024-07-14 10:40:47.514613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.080 [2024-07-14 10:40:47.514621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:96008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.080 [2024-07-14 10:40:47.514627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.080 [2024-07-14 10:40:47.514636] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:96016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.080 [2024-07-14 10:40:47.514643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.080 [2024-07-14 10:40:47.514658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:96024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.080 [2024-07-14 10:40:47.514665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.080 [2024-07-14 10:40:47.514674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:96032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.080 [2024-07-14 10:40:47.514681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.080 [2024-07-14 10:40:47.514689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:96040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.080 [2024-07-14 10:40:47.514696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.080 [2024-07-14 10:40:47.514705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:95048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.080 [2024-07-14 10:40:47.514712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.080 [2024-07-14 10:40:47.514720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:95056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.080 [2024-07-14 10:40:47.514727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.080 [2024-07-14 10:40:47.514735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:95064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.080 [2024-07-14 10:40:47.514742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.080 [2024-07-14 10:40:47.514751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.080 [2024-07-14 10:40:47.514758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.080 [2024-07-14 10:40:47.514766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:95080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.080 [2024-07-14 10:40:47.514772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.080 [2024-07-14 10:40:47.514781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:95088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.080 [2024-07-14 10:40:47.514787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.080 [2024-07-14 10:40:47.514796] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:57 nsid:1 lba:95096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.080 [2024-07-14 10:40:47.514803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.080 [2024-07-14 10:40:47.514810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:95104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.080 [2024-07-14 10:40:47.514817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.080 [2024-07-14 10:40:47.514825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:95112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.080 [2024-07-14 10:40:47.514832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.080 [2024-07-14 10:40:47.514840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:95120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.080 [2024-07-14 10:40:47.514849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.080 [2024-07-14 10:40:47.514857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:95128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.080 [2024-07-14 10:40:47.514865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.080 [2024-07-14 10:40:47.514873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:95136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.080 [2024-07-14 10:40:47.514880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.080 [2024-07-14 10:40:47.514888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:95144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.080 [2024-07-14 10:40:47.514896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.080 [2024-07-14 10:40:47.514905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:95152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.080 [2024-07-14 10:40:47.514913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.080 [2024-07-14 10:40:47.514922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:95160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.080 [2024-07-14 10:40:47.514930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.081 [2024-07-14 10:40:47.514938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:96048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.081 [2024-07-14 10:40:47.514945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.081 [2024-07-14 10:40:47.514953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 
lba:95168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.081 [2024-07-14 10:40:47.514959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.081 [2024-07-14 10:40:47.514969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:95176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.081 [2024-07-14 10:40:47.514978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.081 [2024-07-14 10:40:47.514987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:95184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.081 [2024-07-14 10:40:47.514995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.081 [2024-07-14 10:40:47.515003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:95192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.081 [2024-07-14 10:40:47.515012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.081 [2024-07-14 10:40:47.515020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:95200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.081 [2024-07-14 10:40:47.515028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.081 [2024-07-14 10:40:47.515038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:95208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.081 [2024-07-14 10:40:47.515045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.081 [2024-07-14 10:40:47.515057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:95216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.081 [2024-07-14 10:40:47.515065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.081 [2024-07-14 10:40:47.515073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:95224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.081 [2024-07-14 10:40:47.515082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.081 [2024-07-14 10:40:47.515090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:95232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.081 [2024-07-14 10:40:47.515097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.081 [2024-07-14 10:40:47.515105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:95240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.081 [2024-07-14 10:40:47.515112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.081 [2024-07-14 10:40:47.515121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:95248 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:32:17.081 [2024-07-14 10:40:47.515128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.081 [2024-07-14 10:40:47.515136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:95256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.081 [2024-07-14 10:40:47.515142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.081 [2024-07-14 10:40:47.515150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:95264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.081 [2024-07-14 10:40:47.515157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.081 [2024-07-14 10:40:47.515165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:95272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.081 [2024-07-14 10:40:47.515171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.081 [2024-07-14 10:40:47.515180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:95280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.081 [2024-07-14 10:40:47.515188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.081 [2024-07-14 10:40:47.515197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:95288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.081 [2024-07-14 10:40:47.515203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.081 [2024-07-14 10:40:47.515212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:95296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.081 [2024-07-14 10:40:47.515218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.081 [2024-07-14 10:40:47.515231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:95304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.081 [2024-07-14 10:40:47.515239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.081 [2024-07-14 10:40:47.515247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:95312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.081 [2024-07-14 10:40:47.515256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.081 [2024-07-14 10:40:47.515265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:95320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.081 [2024-07-14 10:40:47.515272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.081 [2024-07-14 10:40:47.515280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:95328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.081 [2024-07-14 
10:40:47.515288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.081 [2024-07-14 10:40:47.515296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:95336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.081 [2024-07-14 10:40:47.515302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.081 [2024-07-14 10:40:47.515311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:95344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.081 [2024-07-14 10:40:47.515317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.081 [2024-07-14 10:40:47.515325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:95352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.081 [2024-07-14 10:40:47.515332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.081 [2024-07-14 10:40:47.515340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:95360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.081 [2024-07-14 10:40:47.515347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.081 [2024-07-14 10:40:47.515355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:95368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.081 [2024-07-14 10:40:47.515362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.081 [2024-07-14 10:40:47.515370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:95376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.081 [2024-07-14 10:40:47.515377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.081 [2024-07-14 10:40:47.515385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:95384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.081 [2024-07-14 10:40:47.515391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.081 [2024-07-14 10:40:47.515400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:95392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.081 [2024-07-14 10:40:47.515406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.081 [2024-07-14 10:40:47.515415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:95400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.081 [2024-07-14 10:40:47.515421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.081 [2024-07-14 10:40:47.515429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:95408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.081 [2024-07-14 10:40:47.515436] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.081 [2024-07-14 10:40:47.515444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:95416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.081 [2024-07-14 10:40:47.515452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.081 [2024-07-14 10:40:47.515460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:95424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.081 [2024-07-14 10:40:47.515467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.081 [2024-07-14 10:40:47.515476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:95432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.081 [2024-07-14 10:40:47.515482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.081 [2024-07-14 10:40:47.515491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:95440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.081 [2024-07-14 10:40:47.515497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.081 [2024-07-14 10:40:47.515505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:95448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.081 [2024-07-14 10:40:47.515512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.081 [2024-07-14 10:40:47.515521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:95456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.081 [2024-07-14 10:40:47.515528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.081 [2024-07-14 10:40:47.515536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:95464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.081 [2024-07-14 10:40:47.515543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.081 [2024-07-14 10:40:47.515551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:95472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.081 [2024-07-14 10:40:47.515557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.081 [2024-07-14 10:40:47.515567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:95480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.081 [2024-07-14 10:40:47.515574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.081 [2024-07-14 10:40:47.515583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:95488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.081 [2024-07-14 10:40:47.515589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.081 [2024-07-14 10:40:47.515597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:95496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.082 [2024-07-14 10:40:47.515604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.082 [2024-07-14 10:40:47.515613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:95504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.082 [2024-07-14 10:40:47.515619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.082 [2024-07-14 10:40:47.515628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:95512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.082 [2024-07-14 10:40:47.515636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.082 [2024-07-14 10:40:47.515645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:95520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.082 [2024-07-14 10:40:47.515652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.082 [2024-07-14 10:40:47.515660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:95528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.082 [2024-07-14 10:40:47.515667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.082 [2024-07-14 10:40:47.515675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:95536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.082 [2024-07-14 10:40:47.515683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.082 [2024-07-14 10:40:47.515691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:95544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.082 [2024-07-14 10:40:47.515698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.082 [2024-07-14 10:40:47.515706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:95552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.082 [2024-07-14 10:40:47.515713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.082 [2024-07-14 10:40:47.515721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:95560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.082 [2024-07-14 10:40:47.515728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.082 [2024-07-14 10:40:47.515736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:95568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.082 [2024-07-14 10:40:47.515744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.082 [2024-07-14 10:40:47.515753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:95576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.082 [2024-07-14 10:40:47.515759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.082 [2024-07-14 10:40:47.515767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:95584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.082 [2024-07-14 10:40:47.515774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.082 [2024-07-14 10:40:47.515782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:95592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.082 [2024-07-14 10:40:47.515789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.082 [2024-07-14 10:40:47.515797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:95600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.082 [2024-07-14 10:40:47.515804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.082 [2024-07-14 10:40:47.515812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:95608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.082 [2024-07-14 10:40:47.515819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.082 [2024-07-14 10:40:47.515827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:95616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.082 [2024-07-14 10:40:47.515838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.082 [2024-07-14 10:40:47.515847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:95624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.082 [2024-07-14 10:40:47.515855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.082 [2024-07-14 10:40:47.515863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:95632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.082 [2024-07-14 10:40:47.515870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.082 [2024-07-14 10:40:47.515878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:95640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.082 [2024-07-14 10:40:47.515885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.082 [2024-07-14 10:40:47.515893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:95648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.082 [2024-07-14 10:40:47.515900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.082 
[2024-07-14 10:40:47.515908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:95656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.082 [2024-07-14 10:40:47.515915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.082 [2024-07-14 10:40:47.515923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:95664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.082 [2024-07-14 10:40:47.515930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.082 [2024-07-14 10:40:47.515938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:95672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.082 [2024-07-14 10:40:47.515945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.082 [2024-07-14 10:40:47.515953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:95680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.082 [2024-07-14 10:40:47.515960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.082 [2024-07-14 10:40:47.515973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:95688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.082 [2024-07-14 10:40:47.515980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.082 [2024-07-14 10:40:47.515988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:95696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.082 [2024-07-14 10:40:47.515995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.082 [2024-07-14 10:40:47.516003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:95704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.082 [2024-07-14 10:40:47.516011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.082 [2024-07-14 10:40:47.516019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:95712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.082 [2024-07-14 10:40:47.516026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.082 [2024-07-14 10:40:47.516035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:95720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.082 [2024-07-14 10:40:47.516042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.082 [2024-07-14 10:40:47.516050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:95728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.082 [2024-07-14 10:40:47.516056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.082 [2024-07-14 10:40:47.516065] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:95736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.082 [2024-07-14 10:40:47.516072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.082 [2024-07-14 10:40:47.516080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:95744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.082 [2024-07-14 10:40:47.516087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.082 [2024-07-14 10:40:47.516095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:95752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.082 [2024-07-14 10:40:47.516101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.082 [2024-07-14 10:40:47.516110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:95760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.082 [2024-07-14 10:40:47.516117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.082 [2024-07-14 10:40:47.516125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:95768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.082 [2024-07-14 10:40:47.516132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.082 [2024-07-14 10:40:47.516140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:95776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.082 [2024-07-14 10:40:47.516146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.082 [2024-07-14 10:40:47.516154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:95784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.082 [2024-07-14 10:40:47.516161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.082 [2024-07-14 10:40:47.516170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:95792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.082 [2024-07-14 10:40:47.516177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.082 [2024-07-14 10:40:47.516184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:95800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.082 [2024-07-14 10:40:47.516191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.082 [2024-07-14 10:40:47.516199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:95808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.082 [2024-07-14 10:40:47.516206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.082 [2024-07-14 10:40:47.516218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:51 nsid:1 lba:95816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.082 [2024-07-14 10:40:47.516232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.082 [2024-07-14 10:40:47.516244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:95824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.082 [2024-07-14 10:40:47.516251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.082 [2024-07-14 10:40:47.516260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:95832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.082 [2024-07-14 10:40:47.516267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.082 [2024-07-14 10:40:47.516275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:95840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.083 [2024-07-14 10:40:47.516281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.083 [2024-07-14 10:40:47.516290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:95848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.083 [2024-07-14 10:40:47.516297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.083 [2024-07-14 10:40:47.516305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:95856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.083 [2024-07-14 10:40:47.516312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.083 [2024-07-14 10:40:47.516319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:95864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.083 [2024-07-14 10:40:47.516326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.083 [2024-07-14 10:40:47.516334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:95872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.083 [2024-07-14 10:40:47.516340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.083 [2024-07-14 10:40:47.516349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:95880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.083 [2024-07-14 10:40:47.516356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.083 [2024-07-14 10:40:47.516364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:95888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.083 [2024-07-14 10:40:47.516371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.083 [2024-07-14 10:40:47.516379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:95896 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.083 [2024-07-14 10:40:47.516386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.083 [2024-07-14 10:40:47.516394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:95904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.083 [2024-07-14 10:40:47.516401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.083 [2024-07-14 10:40:47.516409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:95912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.083 [2024-07-14 10:40:47.516416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.083 [2024-07-14 10:40:47.516424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:95920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.083 [2024-07-14 10:40:47.516431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.083 [2024-07-14 10:40:47.516439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:95928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.083 [2024-07-14 10:40:47.516446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.083 [2024-07-14 10:40:47.516455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:96056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.083 [2024-07-14 10:40:47.516462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.083 [2024-07-14 10:40:47.516470] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ce5c0 is same with the state(5) to be set 00:32:17.083 [2024-07-14 10:40:47.516479] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.083 [2024-07-14 10:40:47.516486] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.083 [2024-07-14 10:40:47.516492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96064 len:8 PRP1 0x0 PRP2 0x0 00:32:17.083 [2024-07-14 10:40:47.516499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.083 [2024-07-14 10:40:47.516542] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x18ce5c0 was disconnected and freed. reset controller. 
00:32:17.083 [2024-07-14 10:40:47.516550] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:32:17.083 [2024-07-14 10:40:47.516573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:17.083 [2024-07-14 10:40:47.516580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.083 [2024-07-14 10:40:47.516588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:17.083 [2024-07-14 10:40:47.516594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.083 [2024-07-14 10:40:47.516601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:17.083 [2024-07-14 10:40:47.516609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.083 [2024-07-14 10:40:47.516615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:17.083 [2024-07-14 10:40:47.516622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.083 [2024-07-14 10:40:47.516628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.083 [2024-07-14 10:40:47.519510] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.083 [2024-07-14 10:40:47.519539] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a7fd0 (9): Bad file descriptor 00:32:17.083 [2024-07-14 10:40:47.593297] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
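[Editor's note] The stretch of log above is the interesting part of this section: every queued READ/WRITE on qpair 0x18ce5c0 is reported as "ABORTED - SQ DELETION" while the qpair is torn down, the target address fails over from 10.0.0.2:4420 to 10.0.0.2:4421, and the controller reset completes ("Resetting controller successful.") before a second abort burst begins at 10:40:51. When reading a captured transcript like this offline, a small helper can condense each abort storm into per-opcode counts and LBA spans instead of hundreds of near-identical NOTICE lines. The following is a minimal sketch only, not part of SPDK or of this test suite; the script name, the default log file name, and the regular expression are assumptions about how the captured text is laid out.

#!/usr/bin/env python3
# summarize_aborts.py -- hypothetical post-processing helper for SPDK autotest transcripts.
# It counts nvme_io_qpair_print_command NOTICE lines such as
#   "*NOTICE*: READ sqid:1 cid:116 nsid:1 lba:95256 len:8"
# and prints how many commands were aborted per opcode/sqid and over which LBA range.
import re
import sys
from collections import defaultdict

CMD_RE = re.compile(r"\*NOTICE\*: (READ|WRITE) sqid:(\d+) cid:\d+ nsid:\d+ lba:(\d+) len:(\d+)")

def summarize(path):
    counts = defaultdict(int)   # (opcode, sqid) -> number of aborted commands seen
    lba_span = {}               # (opcode, sqid) -> (lowest lba, highest lba)
    with open(path, errors="replace") as fh:
        for line in fh:
            # A single captured line may contain many wrapped log entries,
            # so collect every match on the line, not just the first.
            for opcode, sqid, lba, _length in CMD_RE.findall(line):
                key = (opcode, sqid)
                counts[key] += 1
                lba = int(lba)
                lo, hi = lba_span.get(key, (lba, lba))
                lba_span[key] = (min(lo, lba), max(hi, lba))
    for (opcode, sqid), n in sorted(counts.items()):
        lo, hi = lba_span[(opcode, sqid)]
        print(f"{opcode} sqid:{sqid}: {n} aborted, lba {lo}..{hi}")

if __name__ == "__main__":
    # Log file name is an assumption; pass the real transcript path as the first argument.
    summarize(sys.argv[1] if len(sys.argv) > 1 else "nvmf-tcp-phy-autotest.log")

Run over this transcript, it would reduce the 10:40:47 burst to one summary line per opcode (aborted READ/WRITE count and LBA span on sqid:1) ahead of the failover to 10.0.0.2:4421, and do the same for the 10:40:51 burst that follows.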
00:32:17.083 [2024-07-14 10:40:51.040680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:27464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.083 [2024-07-14 10:40:51.040715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.083 [2024-07-14 10:40:51.040730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:27472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.083 [2024-07-14 10:40:51.040742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.083 [2024-07-14 10:40:51.040751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:27480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.083 [2024-07-14 10:40:51.040758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.083 [2024-07-14 10:40:51.040766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:27488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.083 [2024-07-14 10:40:51.040774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.083 [2024-07-14 10:40:51.040783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:27496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.083 [2024-07-14 10:40:51.040789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.083 [2024-07-14 10:40:51.040797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:27504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.083 [2024-07-14 10:40:51.040804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.083 [2024-07-14 10:40:51.040812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:27512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.083 [2024-07-14 10:40:51.040818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.083 [2024-07-14 10:40:51.040826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:27520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.083 [2024-07-14 10:40:51.040833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.083 [2024-07-14 10:40:51.040841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:27528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.083 [2024-07-14 10:40:51.040847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.083 [2024-07-14 10:40:51.040855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:27536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.083 [2024-07-14 10:40:51.040862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.083 [2024-07-14 10:40:51.040870] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:27544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.083 [2024-07-14 10:40:51.040876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.083 [2024-07-14 10:40:51.040884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:27552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.083 [2024-07-14 10:40:51.040891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.083 [2024-07-14 10:40:51.040899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:27560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.083 [2024-07-14 10:40:51.040905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.083 [2024-07-14 10:40:51.040913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:27568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.083 [2024-07-14 10:40:51.040919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.083 [2024-07-14 10:40:51.040929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:27576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.083 [2024-07-14 10:40:51.040935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.083 [2024-07-14 10:40:51.040944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:27584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.083 [2024-07-14 10:40:51.040950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.083 [2024-07-14 10:40:51.040957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:27592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.083 [2024-07-14 10:40:51.040965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.083 [2024-07-14 10:40:51.040973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:27600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.083 [2024-07-14 10:40:51.040980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.083 [2024-07-14 10:40:51.040988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:27608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.083 [2024-07-14 10:40:51.040994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.083 [2024-07-14 10:40:51.041002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:27616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.083 [2024-07-14 10:40:51.041008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.083 [2024-07-14 10:40:51.041017] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:27624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.083 [2024-07-14 10:40:51.041023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.084 [2024-07-14 10:40:51.041031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:27632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.084 [2024-07-14 10:40:51.041038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.084 [2024-07-14 10:40:51.041045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:27640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.084 [2024-07-14 10:40:51.041052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.084 [2024-07-14 10:40:51.041060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:27648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.084 [2024-07-14 10:40:51.041066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.084 [2024-07-14 10:40:51.041074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:27656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.084 [2024-07-14 10:40:51.041081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.084 [2024-07-14 10:40:51.041089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:27664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.084 [2024-07-14 10:40:51.041095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.084 [2024-07-14 10:40:51.041104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.084 [2024-07-14 10:40:51.041113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.084 [2024-07-14 10:40:51.041121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.084 [2024-07-14 10:40:51.041127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.084 [2024-07-14 10:40:51.041136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:27688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.084 [2024-07-14 10:40:51.041143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.084 [2024-07-14 10:40:51.041152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:27696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.084 [2024-07-14 10:40:51.041158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.084 [2024-07-14 10:40:51.041166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:38 nsid:1 lba:27704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.084 [2024-07-14 10:40:51.041173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.084 [2024-07-14 10:40:51.041182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:27712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.084 [2024-07-14 10:40:51.041188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.084 [2024-07-14 10:40:51.041196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:27720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.084 [2024-07-14 10:40:51.041203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.084 [2024-07-14 10:40:51.041211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:27728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.084 [2024-07-14 10:40:51.041218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.084 [2024-07-14 10:40:51.041230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:27736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.084 [2024-07-14 10:40:51.041238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.084 [2024-07-14 10:40:51.041246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:27744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.084 [2024-07-14 10:40:51.041253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.084 [2024-07-14 10:40:51.041261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:27752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.084 [2024-07-14 10:40:51.041268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.084 [2024-07-14 10:40:51.041276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:27760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.084 [2024-07-14 10:40:51.041282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.084 [2024-07-14 10:40:51.041291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:27768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.084 [2024-07-14 10:40:51.041298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.084 [2024-07-14 10:40:51.041307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:27776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.084 [2024-07-14 10:40:51.041314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.084 [2024-07-14 10:40:51.041322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:27784 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.084 [2024-07-14 10:40:51.041329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.084 [2024-07-14 10:40:51.041338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:27792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.085 [2024-07-14 10:40:51.041345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.085 [2024-07-14 10:40:51.041353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:27800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.085 [2024-07-14 10:40:51.041359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.085 [2024-07-14 10:40:51.041367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:27808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.085 [2024-07-14 10:40:51.041374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.085 [2024-07-14 10:40:51.041382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:27816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.085 [2024-07-14 10:40:51.041389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.085 [2024-07-14 10:40:51.041397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:27824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.085 [2024-07-14 10:40:51.041403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.085 [2024-07-14 10:40:51.041411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.085 [2024-07-14 10:40:51.041417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.085 [2024-07-14 10:40:51.041425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:27840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.085 [2024-07-14 10:40:51.041431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.085 [2024-07-14 10:40:51.041442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:27856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.085 [2024-07-14 10:40:51.041449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.085 [2024-07-14 10:40:51.041457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:27864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.085 [2024-07-14 10:40:51.041463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.085 [2024-07-14 10:40:51.041471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:27872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:32:17.085 [2024-07-14 10:40:51.041477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.085 [2024-07-14 10:40:51.041486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:27880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.085 [2024-07-14 10:40:51.041493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.085 [2024-07-14 10:40:51.041501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:27888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.085 [2024-07-14 10:40:51.041507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.085 [2024-07-14 10:40:51.041516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:27896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.085 [2024-07-14 10:40:51.041522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.085 [2024-07-14 10:40:51.041530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:27904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.085 [2024-07-14 10:40:51.041537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.085 [2024-07-14 10:40:51.041545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:27912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.085 [2024-07-14 10:40:51.041551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.085 [2024-07-14 10:40:51.041559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:27920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.085 [2024-07-14 10:40:51.041566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.085 [2024-07-14 10:40:51.041574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.085 [2024-07-14 10:40:51.041580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.085 [2024-07-14 10:40:51.041588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.085 [2024-07-14 10:40:51.041594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.085 [2024-07-14 10:40:51.041602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:27944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.085 [2024-07-14 10:40:51.041608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.085 [2024-07-14 10:40:51.041616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:27952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.085 [2024-07-14 10:40:51.041622] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.085 [2024-07-14 10:40:51.041630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:27960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.085 [2024-07-14 10:40:51.041636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.085 [2024-07-14 10:40:51.041644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:27968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.085 [2024-07-14 10:40:51.041651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.085 [2024-07-14 10:40:51.041658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:27848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.085 [2024-07-14 10:40:51.041665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.085 [2024-07-14 10:40:51.041675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:27976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.085 [2024-07-14 10:40:51.041683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.085 [2024-07-14 10:40:51.041691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.085 [2024-07-14 10:40:51.041698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.085 [2024-07-14 10:40:51.041705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:27992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.085 [2024-07-14 10:40:51.041711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.085 [2024-07-14 10:40:51.041719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:28000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.085 [2024-07-14 10:40:51.041726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.085 [2024-07-14 10:40:51.041734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:28008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.085 [2024-07-14 10:40:51.041740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.085 [2024-07-14 10:40:51.041748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:28016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.085 [2024-07-14 10:40:51.041754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.085 [2024-07-14 10:40:51.041762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:28024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.085 [2024-07-14 10:40:51.041768] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.085 [2024-07-14 10:40:51.041776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:28032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.085 [2024-07-14 10:40:51.041782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.085 [2024-07-14 10:40:51.041790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:28040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.085 [2024-07-14 10:40:51.041796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.085 [2024-07-14 10:40:51.041804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:28048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.085 [2024-07-14 10:40:51.041810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.085 [2024-07-14 10:40:51.041818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:28056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.085 [2024-07-14 10:40:51.041824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.085 [2024-07-14 10:40:51.041832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:28064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.085 [2024-07-14 10:40:51.041839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.085 [2024-07-14 10:40:51.041846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:28072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.085 [2024-07-14 10:40:51.041853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.085 [2024-07-14 10:40:51.041862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:28080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.085 [2024-07-14 10:40:51.041869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.085 [2024-07-14 10:40:51.041877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:28088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.085 [2024-07-14 10:40:51.041884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.085 [2024-07-14 10:40:51.041891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:28096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.085 [2024-07-14 10:40:51.041898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.085 [2024-07-14 10:40:51.041907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:28104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.085 [2024-07-14 10:40:51.041913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.085 [2024-07-14 10:40:51.041921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:28112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.085 [2024-07-14 10:40:51.041927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.085 [2024-07-14 10:40:51.041935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:28120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.085 [2024-07-14 10:40:51.041942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.085 [2024-07-14 10:40:51.041950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:28128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.085 [2024-07-14 10:40:51.041956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.086 [2024-07-14 10:40:51.041965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:28136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.086 [2024-07-14 10:40:51.041971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.086 [2024-07-14 10:40:51.041979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:28144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.086 [2024-07-14 10:40:51.041986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.086 [2024-07-14 10:40:51.041994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:28152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.086 [2024-07-14 10:40:51.042000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.086 [2024-07-14 10:40:51.042008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.086 [2024-07-14 10:40:51.042014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.086 [2024-07-14 10:40:51.042022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:28168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.086 [2024-07-14 10:40:51.042028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.086 [2024-07-14 10:40:51.042036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:28176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.086 [2024-07-14 10:40:51.042044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.086 [2024-07-14 10:40:51.042052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:28184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.086 [2024-07-14 10:40:51.042058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.086 
[2024-07-14 10:40:51.042066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:28192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.086 [2024-07-14 10:40:51.042072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.086 [2024-07-14 10:40:51.042080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:28200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.086 [2024-07-14 10:40:51.042087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.086 [2024-07-14 10:40:51.042095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:28208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.086 [2024-07-14 10:40:51.042101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.086 [2024-07-14 10:40:51.042109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:28216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.086 [2024-07-14 10:40:51.042115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.086 [2024-07-14 10:40:51.042123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:28224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.086 [2024-07-14 10:40:51.042130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.086 [2024-07-14 10:40:51.042140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:28232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.086 [2024-07-14 10:40:51.042146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.086 [2024-07-14 10:40:51.042154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:28240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.086 [2024-07-14 10:40:51.042160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.086 [2024-07-14 10:40:51.042168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:28248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.086 [2024-07-14 10:40:51.042174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.086 [2024-07-14 10:40:51.042182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:28256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.086 [2024-07-14 10:40:51.042189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.086 [2024-07-14 10:40:51.042197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:28264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.086 [2024-07-14 10:40:51.042203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.086 [2024-07-14 10:40:51.042211] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:28272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.086 [2024-07-14 10:40:51.042217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.086 [2024-07-14 10:40:51.042227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:28280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.086 [2024-07-14 10:40:51.042235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.086 [2024-07-14 10:40:51.042243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.086 [2024-07-14 10:40:51.042250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.086 [2024-07-14 10:40:51.042258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:28296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.086 [2024-07-14 10:40:51.042264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.086 [2024-07-14 10:40:51.042272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:28304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.086 [2024-07-14 10:40:51.042278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.086 [2024-07-14 10:40:51.042286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:28312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.086 [2024-07-14 10:40:51.042292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.086 [2024-07-14 10:40:51.042300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:28320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.086 [2024-07-14 10:40:51.042307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.086 [2024-07-14 10:40:51.042315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:28328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.086 [2024-07-14 10:40:51.042321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.086 [2024-07-14 10:40:51.042329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:28336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.086 [2024-07-14 10:40:51.042335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.086 [2024-07-14 10:40:51.042346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:28344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.086 [2024-07-14 10:40:51.042353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.086 [2024-07-14 10:40:51.042361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:61 nsid:1 lba:28352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.086 [2024-07-14 10:40:51.042368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.086 [2024-07-14 10:40:51.042405] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.086 [2024-07-14 10:40:51.042413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28360 len:8 PRP1 0x0 PRP2 0x0 00:32:17.086 [2024-07-14 10:40:51.042419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.086 [2024-07-14 10:40:51.042429] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.086 [2024-07-14 10:40:51.042434] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.086 [2024-07-14 10:40:51.042440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28368 len:8 PRP1 0x0 PRP2 0x0 00:32:17.086 [2024-07-14 10:40:51.042446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.086 [2024-07-14 10:40:51.042455] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.086 [2024-07-14 10:40:51.042460] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.086 [2024-07-14 10:40:51.042465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28376 len:8 PRP1 0x0 PRP2 0x0 00:32:17.086 [2024-07-14 10:40:51.042472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.086 [2024-07-14 10:40:51.042478] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.086 [2024-07-14 10:40:51.042484] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.086 [2024-07-14 10:40:51.042490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28384 len:8 PRP1 0x0 PRP2 0x0 00:32:17.086 [2024-07-14 10:40:51.042496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.086 [2024-07-14 10:40:51.042503] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.086 [2024-07-14 10:40:51.042508] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.086 [2024-07-14 10:40:51.042514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28392 len:8 PRP1 0x0 PRP2 0x0 00:32:17.086 [2024-07-14 10:40:51.042520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.086 [2024-07-14 10:40:51.042526] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.086 [2024-07-14 10:40:51.042531] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.086 [2024-07-14 10:40:51.042537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28400 len:8 PRP1 0x0 PRP2 0x0 00:32:17.086 [2024-07-14 10:40:51.042543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.086 [2024-07-14 10:40:51.042550] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.086 [2024-07-14 10:40:51.042555] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.086 [2024-07-14 10:40:51.042560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28408 len:8 PRP1 0x0 PRP2 0x0 00:32:17.086 [2024-07-14 10:40:51.042567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.086 [2024-07-14 10:40:51.042573] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.086 [2024-07-14 10:40:51.042579] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.086 [2024-07-14 10:40:51.042584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28416 len:8 PRP1 0x0 PRP2 0x0 00:32:17.086 [2024-07-14 10:40:51.042591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.086 [2024-07-14 10:40:51.042597] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.086 [2024-07-14 10:40:51.042604] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.086 [2024-07-14 10:40:51.042609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28424 len:8 PRP1 0x0 PRP2 0x0 00:32:17.086 [2024-07-14 10:40:51.042616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.087 [2024-07-14 10:40:51.042622] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.087 [2024-07-14 10:40:51.042627] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.087 [2024-07-14 10:40:51.042632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28432 len:8 PRP1 0x0 PRP2 0x0 00:32:17.087 [2024-07-14 10:40:51.042640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.087 [2024-07-14 10:40:51.042646] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.087 [2024-07-14 10:40:51.042651] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.087 [2024-07-14 10:40:51.042657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28440 len:8 PRP1 0x0 PRP2 0x0 00:32:17.087 [2024-07-14 10:40:51.042663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.087 [2024-07-14 10:40:51.042670] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.087 [2024-07-14 10:40:51.042674] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.087 [2024-07-14 10:40:51.042680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28448 len:8 PRP1 0x0 PRP2 0x0 00:32:17.087 [2024-07-14 10:40:51.042686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:32:17.087 [2024-07-14 10:40:51.042692] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.087 [2024-07-14 10:40:51.042697] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.087 [2024-07-14 10:40:51.042703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28456 len:8 PRP1 0x0 PRP2 0x0 00:32:17.087 [2024-07-14 10:40:51.042710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.087 [2024-07-14 10:40:51.042717] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.087 [2024-07-14 10:40:51.042722] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.087 [2024-07-14 10:40:51.042727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28464 len:8 PRP1 0x0 PRP2 0x0 00:32:17.087 [2024-07-14 10:40:51.042734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.087 [2024-07-14 10:40:51.053974] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.087 [2024-07-14 10:40:51.053986] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.087 [2024-07-14 10:40:51.053994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28472 len:8 PRP1 0x0 PRP2 0x0 00:32:17.087 [2024-07-14 10:40:51.054002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.087 [2024-07-14 10:40:51.054010] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.087 [2024-07-14 10:40:51.054016] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.087 [2024-07-14 10:40:51.054023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28480 len:8 PRP1 0x0 PRP2 0x0 00:32:17.087 [2024-07-14 10:40:51.054030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.087 [2024-07-14 10:40:51.054075] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1a72e00 was disconnected and freed. reset controller. 
00:32:17.087 [2024-07-14 10:40:51.054086] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:32:17.087 [2024-07-14 10:40:51.054109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:17.087 [2024-07-14 10:40:51.054118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.087 [2024-07-14 10:40:51.054127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:17.087 [2024-07-14 10:40:51.054137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.087 [2024-07-14 10:40:51.054145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:17.087 [2024-07-14 10:40:51.054153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.087 [2024-07-14 10:40:51.054162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:17.087 [2024-07-14 10:40:51.054169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.087 [2024-07-14 10:40:51.054176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.087 [2024-07-14 10:40:51.054200] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a7fd0 (9): Bad file descriptor 00:32:17.087 [2024-07-14 10:40:51.057536] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.087 [2024-07-14 10:40:51.220618] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:32:17.087 [2024-07-14 10:40:55.436258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:69016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.087 [2024-07-14 10:40:55.436304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.087 [2024-07-14 10:40:55.436321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:69024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.087 [2024-07-14 10:40:55.436328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.087 [2024-07-14 10:40:55.436337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:69032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.087 [2024-07-14 10:40:55.436344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.087 [2024-07-14 10:40:55.436352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:69040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.087 [2024-07-14 10:40:55.436359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.087 [2024-07-14 10:40:55.436367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:69048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.087 [2024-07-14 10:40:55.436373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.087 [2024-07-14 10:40:55.436381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:69056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.087 [2024-07-14 10:40:55.436388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.087 [2024-07-14 10:40:55.436396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:69064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.087 [2024-07-14 10:40:55.436403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.087 [2024-07-14 10:40:55.436411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:68632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.087 [2024-07-14 10:40:55.436417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.087 [2024-07-14 10:40:55.436425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:68640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.087 [2024-07-14 10:40:55.436437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.087 [2024-07-14 10:40:55.436446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:68648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.087 [2024-07-14 10:40:55.436452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.087 [2024-07-14 10:40:55.436460] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:68656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.087 [2024-07-14 10:40:55.436467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.087 [2024-07-14 10:40:55.436475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:68664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.087 [2024-07-14 10:40:55.436481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.087 [2024-07-14 10:40:55.436489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:68672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.087 [2024-07-14 10:40:55.436496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.087 [2024-07-14 10:40:55.436504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:68680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.087 [2024-07-14 10:40:55.436510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.087 [2024-07-14 10:40:55.436518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:68688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.087 [2024-07-14 10:40:55.436525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.087 [2024-07-14 10:40:55.436533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:68696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.087 [2024-07-14 10:40:55.436539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.087 [2024-07-14 10:40:55.436547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:68704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.087 [2024-07-14 10:40:55.436553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.087 [2024-07-14 10:40:55.436562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:68712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.087 [2024-07-14 10:40:55.436569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.087 [2024-07-14 10:40:55.436577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:68720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.087 [2024-07-14 10:40:55.436583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.087 [2024-07-14 10:40:55.436591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:68728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.087 [2024-07-14 10:40:55.436598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.087 [2024-07-14 10:40:55.436605] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:68736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.087 [2024-07-14 10:40:55.436613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.087 [2024-07-14 10:40:55.436622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:68744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.087 [2024-07-14 10:40:55.436629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.087 [2024-07-14 10:40:55.436637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:68752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.087 [2024-07-14 10:40:55.436644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.087 [2024-07-14 10:40:55.436651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:68760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.087 [2024-07-14 10:40:55.436658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.088 [2024-07-14 10:40:55.436666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:68768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.088 [2024-07-14 10:40:55.436673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.088 [2024-07-14 10:40:55.436680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:68776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.088 [2024-07-14 10:40:55.436687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.088 [2024-07-14 10:40:55.436694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:68784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.088 [2024-07-14 10:40:55.436700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.088 [2024-07-14 10:40:55.436708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:68792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.088 [2024-07-14 10:40:55.436715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.088 [2024-07-14 10:40:55.436722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:68800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.088 [2024-07-14 10:40:55.436729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.088 [2024-07-14 10:40:55.436737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:68808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.088 [2024-07-14 10:40:55.436743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.088 [2024-07-14 10:40:55.436751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:114 nsid:1 lba:68816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.088 [2024-07-14 10:40:55.436757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.088 [2024-07-14 10:40:55.436765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:69072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.088 [2024-07-14 10:40:55.436772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.088 [2024-07-14 10:40:55.436780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:69080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.088 [2024-07-14 10:40:55.436786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.088 [2024-07-14 10:40:55.436794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:69088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.088 [2024-07-14 10:40:55.436805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.088 [2024-07-14 10:40:55.436815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:69096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.088 [2024-07-14 10:40:55.436825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.088 [2024-07-14 10:40:55.436833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:69104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.088 [2024-07-14 10:40:55.436839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.088 [2024-07-14 10:40:55.436848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:69112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.088 [2024-07-14 10:40:55.436854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.088 [2024-07-14 10:40:55.436862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:69120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.088 [2024-07-14 10:40:55.436868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.088 [2024-07-14 10:40:55.436876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:69128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.088 [2024-07-14 10:40:55.436882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.088 [2024-07-14 10:40:55.436890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:69136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.088 [2024-07-14 10:40:55.436896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.088 [2024-07-14 10:40:55.436904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:69144 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.088 [2024-07-14 10:40:55.436911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.088 [2024-07-14 10:40:55.436918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:69152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.088 [2024-07-14 10:40:55.436925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.088 [2024-07-14 10:40:55.436932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:69160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.088 [2024-07-14 10:40:55.436938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.088 [2024-07-14 10:40:55.436946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:69168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.088 [2024-07-14 10:40:55.436952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.088 [2024-07-14 10:40:55.436961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:69176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.088 [2024-07-14 10:40:55.436967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.088 [2024-07-14 10:40:55.436975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:69184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.088 [2024-07-14 10:40:55.436983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.088 [2024-07-14 10:40:55.436991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:69192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.088 [2024-07-14 10:40:55.436999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.088 [2024-07-14 10:40:55.437007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:69200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.088 [2024-07-14 10:40:55.437013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.088 [2024-07-14 10:40:55.437021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:69208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.088 [2024-07-14 10:40:55.437027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.088 [2024-07-14 10:40:55.437035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:69216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.088 [2024-07-14 10:40:55.437041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.088 [2024-07-14 10:40:55.437049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:69224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.088 
[2024-07-14 10:40:55.437056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.088 [2024-07-14 10:40:55.437064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:69232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.088 [2024-07-14 10:40:55.437071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.088 [2024-07-14 10:40:55.437078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:69240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.088 [2024-07-14 10:40:55.437085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.088 [2024-07-14 10:40:55.437093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:69248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.088 [2024-07-14 10:40:55.437099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.088 [2024-07-14 10:40:55.437106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:69256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.088 [2024-07-14 10:40:55.437113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.088 [2024-07-14 10:40:55.437121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:69264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.088 [2024-07-14 10:40:55.437127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.088 [2024-07-14 10:40:55.437135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:69272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.088 [2024-07-14 10:40:55.437141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.088 [2024-07-14 10:40:55.437150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:69280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.088 [2024-07-14 10:40:55.437156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.088 [2024-07-14 10:40:55.437164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:69288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.088 [2024-07-14 10:40:55.437170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.088 [2024-07-14 10:40:55.437180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:69296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.088 [2024-07-14 10:40:55.437186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.088 [2024-07-14 10:40:55.437194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:69304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.088 [2024-07-14 10:40:55.437200] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.089 [2024-07-14 10:40:55.437208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:69312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.089 [2024-07-14 10:40:55.437214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.089 [2024-07-14 10:40:55.437222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:69320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.089 [2024-07-14 10:40:55.437233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.089 [2024-07-14 10:40:55.437241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:69328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.089 [2024-07-14 10:40:55.437248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.089 [2024-07-14 10:40:55.437255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:68824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.089 [2024-07-14 10:40:55.437262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.089 [2024-07-14 10:40:55.437271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:69336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.089 [2024-07-14 10:40:55.437277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.089 [2024-07-14 10:40:55.437286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:69344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.089 [2024-07-14 10:40:55.437292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.089 [2024-07-14 10:40:55.437300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:69352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.089 [2024-07-14 10:40:55.437307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.089 [2024-07-14 10:40:55.437315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:69360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.089 [2024-07-14 10:40:55.437322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.089 [2024-07-14 10:40:55.437330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:69368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.089 [2024-07-14 10:40:55.437336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.089 [2024-07-14 10:40:55.437343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:69376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.089 [2024-07-14 10:40:55.437349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.089 [2024-07-14 10:40:55.437357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:69384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.089 [2024-07-14 10:40:55.437365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.089 [2024-07-14 10:40:55.437373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:69392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.089 [2024-07-14 10:40:55.437380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.089 [2024-07-14 10:40:55.437387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:69400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.089 [2024-07-14 10:40:55.437394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.089 [2024-07-14 10:40:55.437402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:69408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.089 [2024-07-14 10:40:55.437408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.089 [2024-07-14 10:40:55.437416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:69416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.089 [2024-07-14 10:40:55.437423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.089 [2024-07-14 10:40:55.437430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:69424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.089 [2024-07-14 10:40:55.437437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.089 [2024-07-14 10:40:55.437445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:69432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.089 [2024-07-14 10:40:55.437451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.089 [2024-07-14 10:40:55.437459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:69440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.089 [2024-07-14 10:40:55.437465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.089 [2024-07-14 10:40:55.437473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:69448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.089 [2024-07-14 10:40:55.437479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.089 [2024-07-14 10:40:55.437487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:68832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.089 [2024-07-14 10:40:55.437493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:32:17.089 [2024-07-14 10:40:55.437501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:68840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.089 [2024-07-14 10:40:55.437507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.089 [2024-07-14 10:40:55.437515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:68848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.089 [2024-07-14 10:40:55.437521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.089 [2024-07-14 10:40:55.437530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:68856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.089 [2024-07-14 10:40:55.437536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.089 [2024-07-14 10:40:55.437545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:68864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.089 [2024-07-14 10:40:55.437551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.089 [2024-07-14 10:40:55.437559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:68872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.089 [2024-07-14 10:40:55.437565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.089 [2024-07-14 10:40:55.437573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:68880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.089 [2024-07-14 10:40:55.437579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.089 [2024-07-14 10:40:55.437587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:69456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.089 [2024-07-14 10:40:55.437593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.089 [2024-07-14 10:40:55.437601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:69464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.089 [2024-07-14 10:40:55.437607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.089 [2024-07-14 10:40:55.437615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:69472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.089 [2024-07-14 10:40:55.437622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.089 [2024-07-14 10:40:55.437630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:69480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.089 [2024-07-14 10:40:55.437637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.089 [2024-07-14 
10:40:55.437645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:69488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.089 [2024-07-14 10:40:55.437651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.089 [2024-07-14 10:40:55.437659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:69496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.089 [2024-07-14 10:40:55.437665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.089 [2024-07-14 10:40:55.437673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:69504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.089 [2024-07-14 10:40:55.437680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.089 [2024-07-14 10:40:55.437688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:69512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.089 [2024-07-14 10:40:55.437694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.089 [2024-07-14 10:40:55.437702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:69520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.089 [2024-07-14 10:40:55.437708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.089 [2024-07-14 10:40:55.437715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:69528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.089 [2024-07-14 10:40:55.437722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.089 [2024-07-14 10:40:55.437731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:69536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.089 [2024-07-14 10:40:55.437738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.089 [2024-07-14 10:40:55.437746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:69544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.089 [2024-07-14 10:40:55.437753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.089 [2024-07-14 10:40:55.437760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:69552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.089 [2024-07-14 10:40:55.437767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.089 [2024-07-14 10:40:55.437774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:69560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.089 [2024-07-14 10:40:55.437781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.089 [2024-07-14 10:40:55.437788] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:69568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.089 [2024-07-14 10:40:55.437795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.089 [2024-07-14 10:40:55.437803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:69576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.089 [2024-07-14 10:40:55.437809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.089 [2024-07-14 10:40:55.437818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:69584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.089 [2024-07-14 10:40:55.437824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.090 [2024-07-14 10:40:55.437832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:69592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.090 [2024-07-14 10:40:55.437838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.090 [2024-07-14 10:40:55.437846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:69600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.090 [2024-07-14 10:40:55.437853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.090 [2024-07-14 10:40:55.437860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:69608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.090 [2024-07-14 10:40:55.437867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.090 [2024-07-14 10:40:55.437874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:69616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.090 [2024-07-14 10:40:55.437881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.090 [2024-07-14 10:40:55.437888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:69624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.090 [2024-07-14 10:40:55.437895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.090 [2024-07-14 10:40:55.437903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:69632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.090 [2024-07-14 10:40:55.437910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.090 [2024-07-14 10:40:55.437918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:69640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.090 [2024-07-14 10:40:55.437925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.090 [2024-07-14 10:40:55.437933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:30 nsid:1 lba:68888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.090 [2024-07-14 10:40:55.437939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.090 [2024-07-14 10:40:55.437947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:68896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.090 [2024-07-14 10:40:55.437953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.090 [2024-07-14 10:40:55.437961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:68904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.090 [2024-07-14 10:40:55.437968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.090 [2024-07-14 10:40:55.437975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:68912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.090 [2024-07-14 10:40:55.437981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.090 [2024-07-14 10:40:55.437989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:68920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.090 [2024-07-14 10:40:55.437995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.090 [2024-07-14 10:40:55.438003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:68928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.090 [2024-07-14 10:40:55.438010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.090 [2024-07-14 10:40:55.438018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:68936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.090 [2024-07-14 10:40:55.438024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.090 [2024-07-14 10:40:55.438044] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.090 [2024-07-14 10:40:55.438050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:68944 len:8 PRP1 0x0 PRP2 0x0 00:32:17.090 [2024-07-14 10:40:55.438057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.090 [2024-07-14 10:40:55.438066] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.090 [2024-07-14 10:40:55.438071] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.090 [2024-07-14 10:40:55.438077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69648 len:8 PRP1 0x0 PRP2 0x0 00:32:17.090 [2024-07-14 10:40:55.438083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.090 [2024-07-14 10:40:55.438090] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.090 [2024-07-14 
10:40:55.438095] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.090 [2024-07-14 10:40:55.438100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:68952 len:8 PRP1 0x0 PRP2 0x0 00:32:17.090 [2024-07-14 10:40:55.438108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.090 [2024-07-14 10:40:55.438114] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.090 [2024-07-14 10:40:55.438119] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.090 [2024-07-14 10:40:55.438125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:68960 len:8 PRP1 0x0 PRP2 0x0 00:32:17.090 [2024-07-14 10:40:55.438131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.090 [2024-07-14 10:40:55.438138] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.090 [2024-07-14 10:40:55.438143] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.090 [2024-07-14 10:40:55.438148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:68968 len:8 PRP1 0x0 PRP2 0x0 00:32:17.090 [2024-07-14 10:40:55.438154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.090 [2024-07-14 10:40:55.438161] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.090 [2024-07-14 10:40:55.438167] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.090 [2024-07-14 10:40:55.438174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:68976 len:8 PRP1 0x0 PRP2 0x0 00:32:17.090 [2024-07-14 10:40:55.438180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.090 [2024-07-14 10:40:55.438186] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.090 [2024-07-14 10:40:55.438191] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.090 [2024-07-14 10:40:55.438197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:68984 len:8 PRP1 0x0 PRP2 0x0 00:32:17.090 [2024-07-14 10:40:55.438203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.090 [2024-07-14 10:40:55.438210] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.090 [2024-07-14 10:40:55.438215] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.090 [2024-07-14 10:40:55.438220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:68992 len:8 PRP1 0x0 PRP2 0x0 00:32:17.090 [2024-07-14 10:40:55.438230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.090 [2024-07-14 10:40:55.438237] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.090 [2024-07-14 10:40:55.438242] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.090 [2024-07-14 10:40:55.438248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:69000 len:8 PRP1 0x0 PRP2 0x0 00:32:17.090 [2024-07-14 10:40:55.438254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.090 [2024-07-14 10:40:55.438261] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.090 [2024-07-14 10:40:55.438266] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.090 [2024-07-14 10:40:55.438272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:69008 len:8 PRP1 0x0 PRP2 0x0 00:32:17.090 [2024-07-14 10:40:55.438279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.090 [2024-07-14 10:40:55.438320] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1a72bf0 was disconnected and freed. reset controller. 00:32:17.090 [2024-07-14 10:40:55.438331] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:32:17.090 [2024-07-14 10:40:55.438351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:17.090 [2024-07-14 10:40:55.438359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.090 [2024-07-14 10:40:55.438366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:17.090 [2024-07-14 10:40:55.438372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.090 [2024-07-14 10:40:55.438379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:17.090 [2024-07-14 10:40:55.438386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.090 [2024-07-14 10:40:55.438393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:17.090 [2024-07-14 10:40:55.438400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.090 [2024-07-14 10:40:55.438407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.090 [2024-07-14 10:40:55.438430] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a7fd0 (9): Bad file descriptor 00:32:17.090 [2024-07-14 10:40:55.441268] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.090 [2024-07-14 10:40:55.523193] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:32:17.090 00:32:17.090 Latency(us) 00:32:17.090 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:17.090 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:17.090 Verification LBA range: start 0x0 length 0x4000 00:32:17.090 NVMe0n1 : 15.00 10761.75 42.04 959.55 0.00 10898.05 630.43 20173.69 00:32:17.090 =================================================================================================================== 00:32:17.090 Total : 10761.75 42.04 959.55 0.00 10898.05 630.43 20173.69 00:32:17.090 Received shutdown signal, test time was about 15.000000 seconds 00:32:17.090 00:32:17.090 Latency(us) 00:32:17.090 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:17.090 =================================================================================================================== 00:32:17.090 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:17.090 10:41:01 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:32:17.090 10:41:01 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:32:17.090 10:41:01 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:32:17.091 10:41:01 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2570429 00:32:17.091 10:41:01 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:32:17.091 10:41:01 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2570429 /var/tmp/bdevperf.sock 00:32:17.091 10:41:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2570429 ']' 00:32:17.091 10:41:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:17.091 10:41:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:17.091 10:41:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:17.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:32:17.091 10:41:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:17.091 10:41:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:17.091 10:41:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:17.091 10:41:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:32:17.091 10:41:01 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:17.350 [2024-07-14 10:41:02.119780] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:17.350 10:41:02 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:17.350 [2024-07-14 10:41:02.304294] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:32:17.609 10:41:02 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:17.868 NVMe0n1 00:32:17.869 10:41:02 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:18.128 00:32:18.128 10:41:03 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:18.387 00:32:18.387 10:41:03 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:18.387 10:41:03 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:32:18.645 10:41:03 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:18.904 10:41:03 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:32:22.192 10:41:06 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:22.192 10:41:06 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:32:22.192 10:41:06 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:22.192 10:41:06 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2571320 00:32:22.192 10:41:06 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 2571320 00:32:23.192 0 00:32:23.192 10:41:07 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:23.192 [2024-07-14 10:41:01.764872] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:32:23.192 [2024-07-14 10:41:01.764924] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2570429 ] 00:32:23.192 EAL: No free 2048 kB hugepages reported on node 1 00:32:23.192 [2024-07-14 10:41:01.833765] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:23.192 [2024-07-14 10:41:01.871130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:23.192 [2024-07-14 10:41:03.628033] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:32:23.192 [2024-07-14 10:41:03.628078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:23.192 [2024-07-14 10:41:03.628089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.192 [2024-07-14 10:41:03.628098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:23.192 [2024-07-14 10:41:03.628105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.192 [2024-07-14 10:41:03.628111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:23.192 [2024-07-14 10:41:03.628118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.193 [2024-07-14 10:41:03.628125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:23.193 [2024-07-14 10:41:03.628132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.193 [2024-07-14 10:41:03.628139] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.193 [2024-07-14 10:41:03.628163] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.193 [2024-07-14 10:41:03.628177] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2081fd0 (9): Bad file descriptor 00:32:23.193 [2024-07-14 10:41:03.630741] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:32:23.193 Running I/O for 1 seconds... 
00:32:23.193 00:32:23.193 Latency(us) 00:32:23.193 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:23.193 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:23.193 Verification LBA range: start 0x0 length 0x4000 00:32:23.193 NVMe0n1 : 1.00 10886.77 42.53 0.00 0.00 11712.77 740.84 9004.08 00:32:23.193 =================================================================================================================== 00:32:23.193 Total : 10886.77 42.53 0.00 0.00 11712.77 740.84 9004.08 00:32:23.193 10:41:07 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:23.193 10:41:07 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:32:23.193 10:41:08 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:23.452 10:41:08 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:23.452 10:41:08 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:32:23.710 10:41:08 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:23.710 10:41:08 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:32:26.994 10:41:11 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:26.994 10:41:11 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:32:26.995 10:41:11 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 2570429 00:32:26.995 10:41:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2570429 ']' 00:32:26.995 10:41:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 2570429 00:32:26.995 10:41:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:32:26.995 10:41:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:26.995 10:41:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2570429 00:32:26.995 10:41:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:26.995 10:41:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:26.995 10:41:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2570429' 00:32:26.995 killing process with pid 2570429 00:32:26.995 10:41:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2570429 00:32:26.995 10:41:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2570429 00:32:27.253 10:41:12 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:32:27.253 10:41:12 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:27.513 10:41:12 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:32:27.513 
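The failover pass traced above is easier to follow once the xtrace noise is stripped away. The following is a minimal sketch of that sequence, assuming SPDK_DIR is shorthand (introduced here for readability) for the spdk checkout path used in this run; the socket path, NQN and ports are the ones that appear in the trace, the three attach calls are collapsed into a loop for brevity, and the waitforlisten/killprocess helpers from autotest_common.sh are omitted.

  # Sketch of the short (-t 1) failover pass exercised by host/failover.sh above
  RPC="$SPDK_DIR/scripts/rpc.py"
  SOCK=/var/tmp/bdevperf.sock
  NQN=nqn.2016-06.io.spdk:cnode1

  # start bdevperf in wait-for-RPC mode (-z); I/O only runs once perform_tests is called
  "$SPDK_DIR"/build/examples/bdevperf -z -r "$SOCK" -q 128 -o 4096 -w verify -t 1 -f &

  # target side: expose two additional portals for the same subsystem
  "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421
  "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4422

  # initiator side: attach the subsystem through all three portals under the single controller name NVMe0
  for port in 4420 4421 4422; do
      "$RPC" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n "$NQN"
  done

  # drop the active path, give bdev_nvme time to fail over, run the I/O, then remove the remaining paths
  "$RPC" -s "$SOCK" bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$NQN"
  sleep 3
  "$SPDK_DIR"/examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests
  "$RPC" -s "$SOCK" bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n "$NQN"
  "$RPC" -s "$SOCK" bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n "$NQN"

  # the earlier pass/fail check in the trace is just a count of successful resets in the captured bdevperf log
  grep -c 'Resetting controller successful' "$SPDK_DIR"/test/nvmf/host/try.txt

In the trace, each dropped path shows up as a "Start failover from ... to ..." notice followed by "Resetting controller successful", which is exactly the string that grep counts.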
10:41:12 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:27.513 10:41:12 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:32:27.513 10:41:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:27.513 10:41:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:32:27.513 10:41:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:27.513 10:41:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:32:27.513 10:41:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:27.513 10:41:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:27.513 rmmod nvme_tcp 00:32:27.513 rmmod nvme_fabrics 00:32:27.513 rmmod nvme_keyring 00:32:27.513 10:41:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:27.513 10:41:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:32:27.513 10:41:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:32:27.513 10:41:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 2567618 ']' 00:32:27.513 10:41:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 2567618 00:32:27.513 10:41:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2567618 ']' 00:32:27.513 10:41:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 2567618 00:32:27.513 10:41:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:32:27.513 10:41:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:27.513 10:41:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2567618 00:32:27.513 10:41:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:27.513 10:41:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:27.513 10:41:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2567618' 00:32:27.513 killing process with pid 2567618 00:32:27.513 10:41:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2567618 00:32:27.513 10:41:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2567618 00:32:27.772 10:41:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:27.772 10:41:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:27.772 10:41:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:27.772 10:41:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:27.772 10:41:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:27.772 10:41:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:27.772 10:41:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:27.772 10:41:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:29.678 10:41:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:29.678 00:32:29.678 real 0m36.488s 00:32:29.678 user 1m55.535s 00:32:29.678 sys 0m7.617s 00:32:29.678 10:41:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:29.678 10:41:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
00:32:29.678 ************************************ 00:32:29.678 END TEST nvmf_failover 00:32:29.678 ************************************ 00:32:29.937 10:41:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:32:29.937 10:41:14 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:32:29.937 10:41:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:29.937 10:41:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:29.937 10:41:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:29.937 ************************************ 00:32:29.937 START TEST nvmf_host_discovery 00:32:29.937 ************************************ 00:32:29.937 10:41:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:32:29.937 * Looking for test storage... 00:32:29.937 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:29.937 10:41:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:29.937 10:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:32:29.937 10:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:29.937 10:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:29.937 10:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:29.937 10:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:29.937 10:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:29.937 10:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:29.937 10:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:29.937 10:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:29.937 10:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:29.937 10:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:29.937 10:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:29.937 10:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:29.937 10:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:29.937 10:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:29.937 10:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:29.937 10:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:29.937 10:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:29.937 10:41:14 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:29.937 10:41:14 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:29.937 10:41:14 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 
-- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:29.937 10:41:14 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:29.938 10:41:14 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:29.938 10:41:14 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:29.938 10:41:14 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:32:29.938 10:41:14 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:29.938 10:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:32:29.938 10:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:29.938 10:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:29.938 10:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:29.938 10:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:29.938 10:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:29.938 10:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:29.938 10:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:29.938 10:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:29.938 10:41:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:32:29.938 10:41:14 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:32:29.938 10:41:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:32:29.938 10:41:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:32:29.938 10:41:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:32:29.938 10:41:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:32:29.938 10:41:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:32:29.938 10:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:29.938 10:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:29.938 10:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:29.938 10:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:29.938 10:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:29.938 10:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:29.938 10:41:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:29.938 10:41:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:29.938 10:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:29.938 10:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:29.938 10:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:32:29.938 10:41:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.504 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:36.504 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:32:36.504 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:36.504 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:36.505 10:41:20 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:36.505 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:36.505 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:36.505 10:41:20 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:36.505 Found net devices under 0000:86:00.0: cvl_0_0 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:36.505 Found net devices under 0000:86:00.1: cvl_0_1 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:36.505 10:41:20 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:36.505 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:36.505 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:32:36.505 00:32:36.505 --- 10.0.0.2 ping statistics --- 00:32:36.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:36.505 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:36.505 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:36.505 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:32:36.505 00:32:36.505 --- 10.0.0.1 ping statistics --- 00:32:36.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:36.505 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=2575636 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 
2575636 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 2575636 ']' 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:36.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:36.505 10:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.506 [2024-07-14 10:41:20.628495] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:32:36.506 [2024-07-14 10:41:20.628543] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:36.506 EAL: No free 2048 kB hugepages reported on node 1 00:32:36.506 [2024-07-14 10:41:20.700327] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:36.506 [2024-07-14 10:41:20.740274] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:36.506 [2024-07-14 10:41:20.740312] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:36.506 [2024-07-14 10:41:20.740319] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:36.506 [2024-07-14 10:41:20.740325] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:36.506 [2024-07-14 10:41:20.740330] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
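Stripped of the xtrace prefixes, the nvmf_tcp_init sequence above builds the topology the discovery test runs against: the two E810 ports are split so that the target NIC lives in its own network namespace while the initiator side stays in the root namespace. A condensed sketch of those commands, using the same interface names and addresses as this run (SPDK_DIR again standing in for the full checkout path):

  # Condensed view of the netns topology built by nvmf_tcp_init above
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # target NIC moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator address in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  modprobe nvme-tcp
  # the nvmf target then runs inside the namespace, so everything it listens on is reached via 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

Connectivity is sanity-checked in both directions with the single-packet pings shown above before the target application is started.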
00:32:36.506 [2024-07-14 10:41:20.740363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:36.506 10:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:36.506 10:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:32:36.506 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:36.506 10:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:36.506 10:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.506 10:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:36.506 10:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:36.506 10:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.506 10:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.506 [2024-07-14 10:41:20.869468] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:36.506 10:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.506 10:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:32:36.506 10:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.506 10:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.506 [2024-07-14 10:41:20.881612] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:32:36.506 10:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.506 10:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:32:36.506 10:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.506 10:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.506 null0 00:32:36.506 10:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.506 10:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:32:36.506 10:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.506 10:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.506 null1 00:32:36.506 10:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.506 10:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:32:36.506 10:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.506 10:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.506 10:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.506 10:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2575776 00:32:36.506 10:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:32:36.506 10:41:20 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@46 -- # waitforlisten 2575776 /tmp/host.sock 00:32:36.506 10:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 2575776 ']' 00:32:36.506 10:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:32:36.506 10:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:36.506 10:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:32:36.506 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:32:36.506 10:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:36.506 10:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.506 [2024-07-14 10:41:20.956988] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:32:36.506 [2024-07-14 10:41:20.957028] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2575776 ] 00:32:36.506 EAL: No free 2048 kB hugepages reported on node 1 00:32:36.506 [2024-07-14 10:41:21.024538] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:36.506 [2024-07-14 10:41:21.064492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:37.073 10:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:37.074 10:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:32:37.074 10:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:37.074 10:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:32:37.074 10:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.074 10:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:37.074 10:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.074 10:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:32:37.074 10:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.074 10:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:37.074 10:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.074 10:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:32:37.074 10:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:32:37.074 10:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:37.074 10:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:37.074 10:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:37.074 10:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.074 10:41:21 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@59 -- # xargs 00:32:37.074 10:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:37.074 10:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.074 10:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:32:37.074 10:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:32:37.074 10:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:37.074 10:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:37.074 10:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.074 10:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:37.074 10:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:37.074 10:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:37.074 10:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.074 10:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:32:37.074 10:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:32:37.074 10:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.074 10:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:37.074 10:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.074 10:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:32:37.074 10:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:37.074 10:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:37.074 10:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.074 10:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:37.074 10:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:37.074 10:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:37.074 10:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.074 10:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:32:37.074 10:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:32:37.074 10:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:37.074 10:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:37.074 10:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.074 10:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:37.074 10:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:37.074 10:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:37.074 10:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.074 10:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:32:37.074 10:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
null0 00:32:37.074 10:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.074 10:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:37.074 10:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.074 10:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:32:37.074 10:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:37.074 10:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:37.074 10:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.074 10:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:37.074 10:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:37.074 10:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:37.074 10:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.074 10:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:32:37.074 10:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:32:37.074 10:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:37.074 10:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:37.074 10:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.074 10:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:37.074 10:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:37.074 10:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:37.074 10:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.333 10:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:32:37.333 10:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:37.333 10:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.333 10:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:37.333 [2024-07-14 10:41:22.068746] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:37.333 10:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.333 10:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:32:37.333 10:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:37.333 10:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:37.333 10:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.333 10:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:37.333 10:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:37.333 10:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:37.333 10:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.333 10:41:22 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@97 -- # [[ '' == '' ]] 00:32:37.333 10:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:32:37.333 10:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:37.333 10:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:37.333 10:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.333 10:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:37.333 10:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:37.333 10:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:37.333 10:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.333 10:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:32:37.333 10:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:32:37.333 10:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:37.333 10:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:37.333 10:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:37.333 10:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:32:37.333 10:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:32:37.333 10:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:37.333 10:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:32:37.333 10:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:32:37.333 10:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.333 10:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:37.333 10:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:37.333 10:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.333 10:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:37.334 10:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:32:37.334 10:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:32:37.334 10:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:32:37.334 10:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:32:37.334 10:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.334 10:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:37.334 10:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.334 10:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:37.334 10:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:37.334 10:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:32:37.334 10:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:32:37.334 10:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:37.334 10:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:32:37.334 10:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:37.334 10:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:37.334 10:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.334 10:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:37.334 10:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:37.334 10:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:37.334 10:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.334 10:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:32:37.334 10:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:32:37.902 [2024-07-14 10:41:22.768490] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:37.902 [2024-07-14 10:41:22.768510] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:37.902 [2024-07-14 10:41:22.768521] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:38.161 [2024-07-14 10:41:22.896917] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:32:38.161 [2024-07-14 10:41:23.125273] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:38.161 [2024-07-14 10:41:23.125291] 
bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:38.421 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:32:38.421 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:38.421 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:32:38.421 10:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:38.421 10:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:38.421 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.421 10:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:38.421 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:38.421 10:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:38.421 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.421 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:38.421 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:32:38.421 10:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:32:38.421 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:32:38.421 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:32:38.421 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:32:38.421 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:32:38.421 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:32:38.421 10:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:38.421 10:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:38.421 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.421 10:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:38.421 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:38.421 10:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:38.421 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.421 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:32:38.421 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:32:38.421 10:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:32:38.421 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:32:38.421 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:32:38.421 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 
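[editor's note] The @912-@918 frames repeated throughout this trace come from the test's polling helper. Below is a minimal sketch reconstructed from that xtrace, assuming the helper simply gives up after ten one-second attempts; the failure path itself is not visible in this log.

waitforcondition() {
    # cond is a bash expression, e.g. '[[ "$(get_subsystem_names)" == "nvme0" ]]'
    local cond=$1
    local max=10
    while (( max-- )); do
        eval "$cond" && return 0   # condition met, stop polling
        sleep 1                    # otherwise wait and retry
    done
    return 1                       # assumed: give up after max attempts
}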
00:32:38.421 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:32:38.421 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:32:38.421 10:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:38.421 10:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:38.421 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.421 10:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:38.421 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:38.421 10:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:38.421 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.680 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:32:38.680 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:32:38.680 10:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:32:38.680 10:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:32:38.680 10:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:38.680 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:38.680 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:32:38.680 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:32:38.680 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:38.680 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:32:38.680 10:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:32:38.680 10:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:38.680 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.680 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:38.680 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.680 10:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:32:38.680 10:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:32:38.680 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:32:38.680 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:32:38.680 10:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:32:38.680 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.680 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:38.680 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.680 10:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:38.680 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:38.680 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:32:38.680 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:32:38.680 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:38.681 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:32:38.681 10:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:38.681 10:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:38.681 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.681 10:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:38.681 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:38.681 10:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:38.681 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.681 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:38.681 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:32:38.681 10:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:32:38.681 10:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:32:38.681 10:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:38.681 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:38.681 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:32:38.681 10:41:23 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@914 -- # (( max-- )) 00:32:38.681 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:38.681 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:32:38.681 10:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:32:38.681 10:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:38.681 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.681 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:38.681 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.681 10:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:32:38.681 10:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:38.681 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:32:38.681 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:32:38.681 10:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:32:38.681 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.681 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:38.681 [2024-07-14 10:41:23.572858] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:38.681 [2024-07-14 10:41:23.573153] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:38.681 [2024-07-14 10:41:23.573174] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:38.681 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.681 10:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:38.681 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:38.681 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:32:38.681 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:32:38.681 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:38.681 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:32:38.681 10:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:38.681 10:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:38.681 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.681 10:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:38.681 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:38.681 10:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:38.681 10:41:23 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.681 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:38.681 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:32:38.681 10:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:38.681 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:38.681 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:32:38.681 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:32:38.681 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:38.681 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:32:38.681 10:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:38.681 10:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:38.681 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.681 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:38.681 10:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:38.681 10:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:38.681 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.940 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:38.940 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:32:38.940 10:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:32:38.940 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:32:38.940 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:32:38.940 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:32:38.940 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:32:38.940 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:32:38.940 10:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:38.940 10:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:38.940 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.940 10:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:38.940 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:38.940 10:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:38.940 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.940 [2024-07-14 10:41:23.700561] 
bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:32:38.940 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:32:38.940 10:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:32:39.199 [2024-07-14 10:41:24.005874] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:39.200 [2024-07-14 10:41:24.005892] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:39.200 [2024-07-14 10:41:24.005897] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:39.768 10:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:32:39.768 10:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:32:39.768 10:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:32:39.768 10:41:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:39.768 10:41:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:39.768 10:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.768 10:41:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:39.768 10:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.768 10:41:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:40.027 10:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.027 10:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:32:40.027 10:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:32:40.027 10:41:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:32:40.027 10:41:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:40.027 10:41:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:40.027 10:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:40.027 10:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:32:40.027 10:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:32:40.027 10:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:40.027 10:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:32:40.027 10:41:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:40.027 10:41:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:40.027 10:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.028 10:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:40.028 10:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.028 10:41:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:40.028 10:41:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:40.028 10:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:32:40.028 10:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:32:40.028 10:41:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:40.028 10:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.028 10:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:40.028 [2024-07-14 10:41:24.833256] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:40.028 [2024-07-14 10:41:24.833277] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:40.028 10:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.028 [2024-07-14 10:41:24.837351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:40.028 [2024-07-14 10:41:24.837369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.028 [2024-07-14 10:41:24.837379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:40.028 [2024-07-14 10:41:24.837386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.028 [2024-07-14 10:41:24.837394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:40.028 [2024-07-14 10:41:24.837401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.028 [2024-07-14 10:41:24.837408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:40.028 [2024-07-14 10:41:24.837415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.028 [2024-07-14 10:41:24.837422] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x183ba90 is same with the state(5) to be set 00:32:40.028 10:41:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:40.028 10:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:40.028 10:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:32:40.028 10:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:32:40.028 10:41:24 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:40.028 10:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:32:40.028 10:41:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:40.028 10:41:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:40.028 10:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.028 10:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:40.028 10:41:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:40.028 10:41:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:40.028 [2024-07-14 10:41:24.847365] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x183ba90 (9): Bad file descriptor 00:32:40.028 [2024-07-14 10:41:24.857403] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:40.028 [2024-07-14 10:41:24.857594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.028 [2024-07-14 10:41:24.857607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x183ba90 with addr=10.0.0.2, port=4420 00:32:40.028 [2024-07-14 10:41:24.857615] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x183ba90 is same with the state(5) to be set 00:32:40.028 [2024-07-14 10:41:24.857627] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x183ba90 (9): Bad file descriptor 00:32:40.028 [2024-07-14 10:41:24.857639] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:40.028 [2024-07-14 10:41:24.857647] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:40.028 [2024-07-14 10:41:24.857655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:40.028 [2024-07-14 10:41:24.857665] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
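[editor's note] The connect() errno 111 (ECONNREFUSED) and "Resetting controller failed" lines above are expected at this point: the 4420 listener was just removed by host/discovery.sh@127, so the host keeps retrying that path until the discovery poller prunes it. A hedged sketch of the same step driven with scripts/rpc.py directly, assuming rpc_cmd is a thin wrapper around it:

# Target side: drop the first listener (mirrors the @127 rpc_cmd call above).
scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420
# Host side: poll until only the 4421 path remains (the @63 pattern in this trace).
scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
    | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs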
00:32:40.028 10:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.028 [2024-07-14 10:41:24.867456] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:40.028 [2024-07-14 10:41:24.867623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.028 [2024-07-14 10:41:24.867634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x183ba90 with addr=10.0.0.2, port=4420 00:32:40.028 [2024-07-14 10:41:24.867642] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x183ba90 is same with the state(5) to be set 00:32:40.028 [2024-07-14 10:41:24.867652] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x183ba90 (9): Bad file descriptor 00:32:40.028 [2024-07-14 10:41:24.867662] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:40.028 [2024-07-14 10:41:24.867669] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:40.028 [2024-07-14 10:41:24.867675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:40.028 [2024-07-14 10:41:24.867685] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:40.028 [2024-07-14 10:41:24.877509] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:40.028 [2024-07-14 10:41:24.877808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.028 [2024-07-14 10:41:24.877822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x183ba90 with addr=10.0.0.2, port=4420 00:32:40.028 [2024-07-14 10:41:24.877829] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x183ba90 is same with the state(5) to be set 00:32:40.028 [2024-07-14 10:41:24.877840] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x183ba90 (9): Bad file descriptor 00:32:40.028 [2024-07-14 10:41:24.877850] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:40.028 [2024-07-14 10:41:24.877856] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:40.028 [2024-07-14 10:41:24.877863] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:40.028 [2024-07-14 10:41:24.877873] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
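[editor's note] For reference, the @55, @59 and @63 frames that recur in this trace query three lists over the host RPC socket. Minimal sketches of those helpers, inferred from the xtrace (command names, jq filters and sort/xargs order as shown there):

get_subsystem_names() {   # host/discovery.sh@59
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
}
get_bdev_list() {         # host/discovery.sh@55
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}
get_subsystem_paths() {   # host/discovery.sh@63, e.g. get_subsystem_paths nvme0
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}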
00:32:40.028 10:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:40.028 10:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:32:40.028 10:41:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:40.028 10:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:40.028 [2024-07-14 10:41:24.887562] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:40.028 10:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:32:40.028 [2024-07-14 10:41:24.887742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.028 [2024-07-14 10:41:24.887755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x183ba90 with addr=10.0.0.2, port=4420 00:32:40.028 [2024-07-14 10:41:24.887762] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x183ba90 is same with the state(5) to be set 00:32:40.028 [2024-07-14 10:41:24.887772] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x183ba90 (9): Bad file descriptor 00:32:40.028 [2024-07-14 10:41:24.887782] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:40.028 [2024-07-14 10:41:24.887789] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:40.028 [2024-07-14 10:41:24.887796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:40.028 [2024-07-14 10:41:24.887805] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
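[editor's note] The @74/@75 frames count how many new discovery notifications (new/removed namespaces and paths) arrived since the last check. A sketch of that helper, assuming notify_id simply advances by the count just read, which matches the 0 -> 1 -> 2 -> 4 progression seen in this trace:

get_notification_count() {
    # Fetch notifications newer than the current cursor and count them.
    notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications \
        -i "$notify_id" | jq '. | length')
    # Advance the cursor so the next call only sees newer events (assumed).
    notify_id=$((notify_id + notification_count))
}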
00:32:40.028 10:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:32:40.028 10:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:40.028 10:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:32:40.028 10:41:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:40.028 10:41:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:40.028 10:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.028 10:41:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:40.028 10:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:40.028 10:41:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:40.028 [2024-07-14 10:41:24.897615] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:40.028 [2024-07-14 10:41:24.897739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.028 [2024-07-14 10:41:24.897751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x183ba90 with addr=10.0.0.2, port=4420 00:32:40.028 [2024-07-14 10:41:24.897758] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x183ba90 is same with the state(5) to be set 00:32:40.028 [2024-07-14 10:41:24.897769] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x183ba90 (9): Bad file descriptor 00:32:40.028 [2024-07-14 10:41:24.897778] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:40.028 [2024-07-14 10:41:24.897785] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:40.028 [2024-07-14 10:41:24.897792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:40.028 [2024-07-14 10:41:24.897801] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:40.028 [2024-07-14 10:41:24.907669] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:40.028 [2024-07-14 10:41:24.907833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.028 [2024-07-14 10:41:24.907844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x183ba90 with addr=10.0.0.2, port=4420 00:32:40.028 [2024-07-14 10:41:24.907851] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x183ba90 is same with the state(5) to be set 00:32:40.028 [2024-07-14 10:41:24.907865] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x183ba90 (9): Bad file descriptor 00:32:40.028 [2024-07-14 10:41:24.907874] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:40.028 [2024-07-14 10:41:24.907880] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:40.028 [2024-07-14 10:41:24.907887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:32:40.028 [2024-07-14 10:41:24.907896] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:40.029 [2024-07-14 10:41:24.917719] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:40.029 [2024-07-14 10:41:24.917889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.029 [2024-07-14 10:41:24.917900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x183ba90 with addr=10.0.0.2, port=4420 00:32:40.029 [2024-07-14 10:41:24.917907] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x183ba90 is same with the state(5) to be set 00:32:40.029 [2024-07-14 10:41:24.917916] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x183ba90 (9): Bad file descriptor 00:32:40.029 [2024-07-14 10:41:24.917926] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:40.029 [2024-07-14 10:41:24.917932] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:40.029 [2024-07-14 10:41:24.917938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:40.029 [2024-07-14 10:41:24.917947] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:40.029 [2024-07-14 10:41:24.919534] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:32:40.029 [2024-07-14 10:41:24.919550] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:40.029 10:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.029 10:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:40.029 10:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:32:40.029 10:41:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:32:40.029 10:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:32:40.029 10:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:32:40.029 10:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:32:40.029 10:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:32:40.029 10:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:32:40.029 10:41:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:40.029 10:41:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:40.029 10:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.029 10:41:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:40.029 10:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:40.029 10:41:24 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:40.029 10:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.029 10:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:32:40.029 10:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:32:40.029 10:41:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:32:40.029 10:41:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:40.029 10:41:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:40.029 10:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:40.029 10:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:32:40.029 10:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:32:40.029 10:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:40.029 10:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:32:40.029 10:41:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:40.029 10:41:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:40.029 10:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.029 10:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:40.029 10:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.288 10:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:40.288 10:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:40.288 10:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:32:40.288 10:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:32:40.288 10:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:32:40.288 10:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.288 10:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:40.288 10:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.288 10:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:32:40.288 10:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:32:40.288 10:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:32:40.288 10:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:32:40.288 10:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:32:40.288 10:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:32:40.288 10:41:25 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:40.288 10:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:40.288 10:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.288 10:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:40.288 10:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:40.288 10:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:40.288 10:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.288 10:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:32:40.288 10:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:32:40.288 10:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:32:40.288 10:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:32:40.288 10:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:32:40.288 10:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:32:40.288 10:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:32:40.288 10:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:32:40.288 10:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:40.288 10:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:40.288 10:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.288 10:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:40.288 10:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:40.288 10:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:40.288 10:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.288 10:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:32:40.288 10:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:32:40.288 10:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:32:40.288 10:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:32:40.288 10:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:40.288 10:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:40.288 10:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:32:40.288 10:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:32:40.288 10:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:40.288 10:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:32:40.288 10:41:25 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:40.288 10:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.288 10:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:40.288 10:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:40.288 10:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.288 10:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:32:40.288 10:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:32:40.288 10:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:32:40.288 10:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:32:40.288 10:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:40.288 10:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.288 10:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:41.668 [2024-07-14 10:41:26.248744] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:41.668 [2024-07-14 10:41:26.248761] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:41.668 [2024-07-14 10:41:26.248811] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:41.668 [2024-07-14 10:41:26.335036] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:32:41.668 [2024-07-14 10:41:26.602787] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:41.668 [2024-07-14 10:41:26.602815] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:41.668 10:41:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.668 10:41:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:41.668 10:41:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:32:41.668 10:41:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:41.668 10:41:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:32:41.668 10:41:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:41.668 10:41:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:32:41.668 10:41:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:41.668 10:41:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:41.668 
10:41:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.668 10:41:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:41.668 request: 00:32:41.668 { 00:32:41.668 "name": "nvme", 00:32:41.668 "trtype": "tcp", 00:32:41.668 "traddr": "10.0.0.2", 00:32:41.668 "adrfam": "ipv4", 00:32:41.668 "trsvcid": "8009", 00:32:41.668 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:41.668 "wait_for_attach": true, 00:32:41.668 "method": "bdev_nvme_start_discovery", 00:32:41.668 "req_id": 1 00:32:41.668 } 00:32:41.668 Got JSON-RPC error response 00:32:41.668 response: 00:32:41.668 { 00:32:41.668 "code": -17, 00:32:41.668 "message": "File exists" 00:32:41.668 } 00:32:41.668 10:41:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:32:41.668 10:41:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:32:41.668 10:41:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:41.668 10:41:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:41.668 10:41:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:41.668 10:41:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:32:41.668 10:41:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:41.668 10:41:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:41.668 10:41:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.668 10:41:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:41.668 10:41:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:41.668 10:41:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:41.668 10:41:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.927 10:41:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:32:41.928 10:41:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:32:41.928 10:41:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:41.928 10:41:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:41.928 10:41:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.928 10:41:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:41.928 10:41:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:41.928 10:41:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:41.928 10:41:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.928 10:41:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:41.928 10:41:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:41.928 10:41:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:32:41.928 10:41:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp 
-a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:41.928 10:41:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:32:41.928 10:41:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:41.928 10:41:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:32:41.928 10:41:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:41.928 10:41:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:41.928 10:41:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.928 10:41:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:41.928 request: 00:32:41.928 { 00:32:41.928 "name": "nvme_second", 00:32:41.928 "trtype": "tcp", 00:32:41.928 "traddr": "10.0.0.2", 00:32:41.928 "adrfam": "ipv4", 00:32:41.928 "trsvcid": "8009", 00:32:41.928 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:41.928 "wait_for_attach": true, 00:32:41.928 "method": "bdev_nvme_start_discovery", 00:32:41.928 "req_id": 1 00:32:41.928 } 00:32:41.928 Got JSON-RPC error response 00:32:41.928 response: 00:32:41.928 { 00:32:41.928 "code": -17, 00:32:41.928 "message": "File exists" 00:32:41.928 } 00:32:41.928 10:41:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:32:41.928 10:41:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:32:41.928 10:41:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:41.928 10:41:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:41.928 10:41:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:41.928 10:41:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:32:41.928 10:41:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:41.928 10:41:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:41.928 10:41:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.928 10:41:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:41.928 10:41:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:41.928 10:41:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:41.928 10:41:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.928 10:41:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:32:41.928 10:41:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:32:41.928 10:41:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:41.928 10:41:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:41.928 10:41:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.928 10:41:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:41.928 10:41:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:41.928 10:41:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 
-- # xargs 00:32:41.928 10:41:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.928 10:41:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:41.928 10:41:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:41.928 10:41:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:32:41.928 10:41:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:41.928 10:41:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:32:41.928 10:41:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:41.928 10:41:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:32:41.928 10:41:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:41.928 10:41:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:41.928 10:41:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.928 10:41:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:43.307 [2024-07-14 10:41:27.847931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.307 [2024-07-14 10:41:27.847960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1879970 with addr=10.0.0.2, port=8010 00:32:43.308 [2024-07-14 10:41:27.847972] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:43.308 [2024-07-14 10:41:27.847994] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:43.308 [2024-07-14 10:41:27.848001] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:32:43.876 [2024-07-14 10:41:28.850441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.876 [2024-07-14 10:41:28.850465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x183a030 with addr=10.0.0.2, port=8010 00:32:43.876 [2024-07-14 10:41:28.850475] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:43.876 [2024-07-14 10:41:28.850481] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:43.876 [2024-07-14 10:41:28.850487] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:32:45.254 [2024-07-14 10:41:29.852628] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:32:45.254 request: 00:32:45.254 { 00:32:45.254 "name": "nvme_second", 00:32:45.254 "trtype": "tcp", 00:32:45.254 "traddr": "10.0.0.2", 00:32:45.254 "adrfam": "ipv4", 00:32:45.254 "trsvcid": "8010", 00:32:45.254 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:45.254 "wait_for_attach": false, 00:32:45.254 "attach_timeout_ms": 3000, 00:32:45.254 "method": "bdev_nvme_start_discovery", 00:32:45.254 
"req_id": 1 00:32:45.254 } 00:32:45.254 Got JSON-RPC error response 00:32:45.254 response: 00:32:45.254 { 00:32:45.254 "code": -110, 00:32:45.254 "message": "Connection timed out" 00:32:45.254 } 00:32:45.254 10:41:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:32:45.254 10:41:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:32:45.254 10:41:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:45.254 10:41:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:45.254 10:41:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:45.254 10:41:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:32:45.254 10:41:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:45.254 10:41:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:45.254 10:41:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.254 10:41:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:45.254 10:41:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:45.254 10:41:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:45.254 10:41:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.254 10:41:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:32:45.254 10:41:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:32:45.254 10:41:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2575776 00:32:45.254 10:41:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:32:45.254 10:41:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:45.254 10:41:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:32:45.254 10:41:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:45.254 10:41:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:32:45.254 10:41:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:45.254 10:41:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:45.254 rmmod nvme_tcp 00:32:45.254 rmmod nvme_fabrics 00:32:45.254 rmmod nvme_keyring 00:32:45.254 10:41:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:45.254 10:41:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:32:45.254 10:41:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:32:45.254 10:41:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 2575636 ']' 00:32:45.254 10:41:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 2575636 00:32:45.254 10:41:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 2575636 ']' 00:32:45.254 10:41:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 2575636 00:32:45.254 10:41:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:32:45.254 10:41:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:45.254 10:41:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o 
comm= 2575636 00:32:45.254 10:41:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:45.254 10:41:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:45.254 10:41:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2575636' 00:32:45.254 killing process with pid 2575636 00:32:45.254 10:41:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 2575636 00:32:45.254 10:41:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 2575636 00:32:45.254 10:41:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:45.254 10:41:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:45.254 10:41:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:45.254 10:41:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:45.254 10:41:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:45.254 10:41:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:45.254 10:41:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:45.254 10:41:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:47.858 10:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:47.858 00:32:47.858 real 0m17.540s 00:32:47.858 user 0m21.780s 00:32:47.858 sys 0m5.646s 00:32:47.858 10:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:47.858 10:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:47.858 ************************************ 00:32:47.858 END TEST nvmf_host_discovery 00:32:47.858 ************************************ 00:32:47.858 10:41:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:32:47.858 10:41:32 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:32:47.858 10:41:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:47.858 10:41:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:47.858 10:41:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:47.858 ************************************ 00:32:47.858 START TEST nvmf_host_multipath_status 00:32:47.858 ************************************ 00:32:47.858 10:41:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:32:47.858 * Looking for test storage... 
00:32:47.858 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:47.858 10:41:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:47.858 10:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:32:47.858 10:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:47.858 10:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:47.858 10:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:47.858 10:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:47.858 10:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:47.858 10:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:47.858 10:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:47.858 10:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:47.858 10:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:47.858 10:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:47.858 10:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:47.858 10:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:47.858 10:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:47.858 10:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:47.858 10:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:47.858 10:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:47.858 10:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:47.858 10:41:32 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:47.858 10:41:32 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:47.858 10:41:32 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:47.858 10:41:32 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:47.858 10:41:32 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:47.858 10:41:32 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:47.858 10:41:32 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:32:47.858 10:41:32 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:47.858 10:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:32:47.858 10:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:47.858 10:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:47.858 10:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:47.858 10:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:47.858 10:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:47.858 10:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:47.858 10:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:47.858 10:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:47.858 10:41:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:32:47.858 10:41:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:32:47.858 10:41:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:47.858 10:41:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:32:47.858 10:41:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:47.858 10:41:32 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:32:47.858 10:41:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:32:47.858 10:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:47.858 10:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:47.858 10:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:47.858 10:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:47.858 10:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:47.858 10:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:47.858 10:41:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:47.858 10:41:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:47.858 10:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:47.858 10:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:47.858 10:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:32:47.858 10:41:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:53.130 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:53.130 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:32:53.130 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:53.130 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:53.130 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:53.130 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:53.130 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:53.130 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:32:53.130 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:53.130 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:32:53.130 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:32:53.130 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:32:53.130 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:32:53.130 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:32:53.130 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:32:53.130 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:53.130 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:53.130 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:53.130 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:53.130 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:53.130 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:53.130 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:53.130 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:53.130 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:53.130 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:53.130 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:53.130 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:53.130 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:53.130 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:53.130 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:53.130 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:53.130 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:53.130 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:53.130 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:53.130 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:53.131 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:53.131 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:53.131 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:53.131 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:53.131 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:53.131 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:53.131 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:53.131 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:53.131 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:53.131 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:53.131 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:53.131 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:53.131 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:53.131 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:53.131 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:53.131 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
00:32:53.131 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:53.131 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:53.131 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:53.131 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:53.131 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:53.131 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:53.131 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:53.131 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:53.131 Found net devices under 0000:86:00.0: cvl_0_0 00:32:53.131 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:53.131 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:53.131 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:53.131 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:53.131 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:53.131 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:53.131 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:53.131 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:53.131 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:53.131 Found net devices under 0000:86:00.1: cvl_0_1 00:32:53.131 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:53.131 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:53.131 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:32:53.131 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:53.131 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:53.131 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:53.131 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:53.131 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:53.131 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:53.131 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:53.131 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:53.131 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:53.131 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:53.131 10:41:37 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:53.131 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:53.131 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:53.131 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:53.131 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:53.131 10:41:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:53.131 10:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:53.131 10:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:53.131 10:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:53.131 10:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:53.131 10:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:53.391 10:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:53.391 10:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:53.391 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:53.391 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:32:53.391 00:32:53.391 --- 10.0.0.2 ping statistics --- 00:32:53.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:53.391 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:32:53.391 10:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:53.391 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:53.391 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.238 ms 00:32:53.391 00:32:53.391 --- 10.0.0.1 ping statistics --- 00:32:53.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:53.391 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:32:53.391 10:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:53.391 10:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:32:53.391 10:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:53.391 10:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:53.391 10:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:53.391 10:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:53.391 10:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:53.391 10:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:53.391 10:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:53.391 10:41:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:32:53.391 10:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:53.391 10:41:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:53.391 10:41:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:53.391 10:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=2580847 00:32:53.391 10:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 2580847 00:32:53.391 10:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:32:53.391 10:41:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 2580847 ']' 00:32:53.391 10:41:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:53.391 10:41:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:53.391 10:41:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:53.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:53.391 10:41:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:53.391 10:41:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:53.391 [2024-07-14 10:41:38.223504] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:32:53.391 [2024-07-14 10:41:38.223547] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:53.391 EAL: No free 2048 kB hugepages reported on node 1 00:32:53.391 [2024-07-14 10:41:38.280697] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:53.391 [2024-07-14 10:41:38.321891] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:53.391 [2024-07-14 10:41:38.321931] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:53.391 [2024-07-14 10:41:38.321938] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:53.391 [2024-07-14 10:41:38.321944] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:53.391 [2024-07-14 10:41:38.321949] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:53.391 [2024-07-14 10:41:38.322025] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:53.391 [2024-07-14 10:41:38.322025] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:53.650 10:41:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:53.650 10:41:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:32:53.650 10:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:53.650 10:41:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:53.650 10:41:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:53.650 10:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:53.650 10:41:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2580847 00:32:53.650 10:41:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:53.650 [2024-07-14 10:41:38.608138] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:53.909 10:41:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:32:53.909 Malloc0 00:32:53.909 10:41:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:32:54.169 10:41:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:54.428 10:41:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:54.428 [2024-07-14 10:41:39.363113] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:54.428 10:41:39 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:54.691 [2024-07-14 10:41:39.551543] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:54.691 10:41:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2581092 00:32:54.691 10:41:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:32:54.691 10:41:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:32:54.691 10:41:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2581092 /var/tmp/bdevperf.sock 00:32:54.691 10:41:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 2581092 ']' 00:32:54.691 10:41:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:54.691 10:41:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:54.691 10:41:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:54.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:54.691 10:41:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:54.691 10:41:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:54.949 10:41:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:54.949 10:41:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:32:54.949 10:41:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:32:55.213 10:41:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:32:55.471 Nvme0n1 00:32:55.471 10:41:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:32:56.037 Nvme0n1 00:32:56.037 10:41:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:32:56.037 10:41:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:32:57.939 10:41:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:32:57.939 10:41:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:32:58.198 10:41:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:58.457 10:41:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:32:59.391 10:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:32:59.391 10:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:59.391 10:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:59.391 10:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:59.650 10:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:59.650 10:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:59.650 10:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:59.650 10:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:59.650 10:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:59.650 10:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:59.650 10:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:59.650 10:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:59.908 10:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:59.908 10:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:59.908 10:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:59.908 10:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:00.166 10:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:00.166 10:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:00.166 10:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:00.166 10:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:00.425 10:41:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:00.425 10:41:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:00.425 10:41:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:00.425 10:41:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:00.425 10:41:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:00.425 10:41:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:33:00.425 10:41:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:00.684 10:41:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:00.944 10:41:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:33:01.879 10:41:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:33:01.879 10:41:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:01.879 10:41:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:01.879 10:41:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:02.137 10:41:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:02.137 10:41:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:02.137 10:41:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:02.137 10:41:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:02.395 10:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:02.395 10:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:02.395 10:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:02.395 10:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:02.395 10:41:47 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:02.395 10:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:02.395 10:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:02.395 10:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:02.653 10:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:02.653 10:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:02.653 10:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:02.653 10:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:02.911 10:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:02.912 10:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:02.912 10:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:02.912 10:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:03.170 10:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:03.170 10:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:33:03.170 10:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:03.170 10:41:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:33:03.429 10:41:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:33:04.366 10:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:33:04.366 10:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:04.366 10:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:04.366 10:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:04.658 10:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:04.658 10:41:49 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:04.658 10:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:04.658 10:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:04.917 10:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:04.917 10:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:04.917 10:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:04.917 10:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:04.917 10:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:04.917 10:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:04.917 10:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:04.917 10:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:05.175 10:41:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:05.175 10:41:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:05.175 10:41:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:05.175 10:41:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:05.434 10:41:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:05.434 10:41:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:05.434 10:41:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:05.434 10:41:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:05.694 10:41:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:05.694 10:41:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:33:05.694 10:41:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:05.953 10:41:50 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:05.953 10:41:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:33:06.889 10:41:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:33:06.889 10:41:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:06.889 10:41:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:06.889 10:41:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:07.148 10:41:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:07.148 10:41:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:07.148 10:41:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:07.148 10:41:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:07.407 10:41:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:07.407 10:41:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:07.407 10:41:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:07.407 10:41:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:07.666 10:41:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:07.666 10:41:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:07.666 10:41:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:07.666 10:41:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:07.666 10:41:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:07.666 10:41:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:07.666 10:41:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:07.666 10:41:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:07.925 10:41:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # [[ true == \t\r\u\e ]] 00:33:07.925 10:41:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:07.925 10:41:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:07.925 10:41:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:08.184 10:41:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:08.184 10:41:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:33:08.184 10:41:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:33:08.442 10:41:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:08.442 10:41:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:33:09.820 10:41:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:33:09.820 10:41:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:09.820 10:41:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:09.820 10:41:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:09.820 10:41:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:09.820 10:41:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:09.820 10:41:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:09.820 10:41:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:09.820 10:41:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:09.820 10:41:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:09.820 10:41:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:09.820 10:41:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:10.078 10:41:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:10.078 10:41:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # 
port_status 4421 connected true 00:33:10.078 10:41:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:10.078 10:41:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:10.337 10:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:10.337 10:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:33:10.337 10:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:10.337 10:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:10.337 10:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:10.337 10:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:10.337 10:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:10.337 10:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:10.596 10:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:10.596 10:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:33:10.596 10:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:33:10.855 10:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:11.114 10:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:33:12.053 10:41:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:33:12.053 10:41:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:12.053 10:41:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:12.053 10:41:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:12.312 10:41:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:12.312 10:41:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:12.312 10:41:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:12.312 10:41:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:12.312 10:41:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:12.312 10:41:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:12.312 10:41:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:12.312 10:41:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:12.571 10:41:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:12.571 10:41:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:12.571 10:41:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:12.571 10:41:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:12.830 10:41:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:12.830 10:41:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:33:12.830 10:41:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:12.830 10:41:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:13.090 10:41:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:13.090 10:41:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:13.090 10:41:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:13.090 10:41:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:13.090 10:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:13.090 10:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:33:13.348 10:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:33:13.348 10:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:33:13.607 10:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:13.608 10:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:33:14.984 10:41:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:33:14.984 10:41:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:14.984 10:41:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:14.984 10:41:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:14.984 10:41:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:14.984 10:41:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:14.984 10:41:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:14.984 10:41:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:15.242 10:41:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:15.242 10:41:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:15.242 10:41:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:15.242 10:41:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:15.242 10:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:15.242 10:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:15.242 10:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:15.242 10:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:15.499 10:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:15.499 10:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:15.499 10:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:15.499 10:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:15.756 10:42:00 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:15.756 10:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:15.756 10:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:15.756 10:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:16.015 10:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:16.015 10:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:33:16.015 10:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:16.015 10:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:16.273 10:42:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:33:17.205 10:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:33:17.205 10:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:17.205 10:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:17.205 10:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:17.462 10:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:17.462 10:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:17.462 10:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:17.462 10:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:17.720 10:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:17.720 10:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:17.720 10:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:17.720 10:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:17.720 10:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:17.720 10:42:02 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:17.720 10:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:17.720 10:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:17.978 10:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:17.978 10:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:17.978 10:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:17.978 10:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:18.237 10:42:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:18.237 10:42:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:18.237 10:42:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:18.237 10:42:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:18.494 10:42:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:18.494 10:42:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:33:18.494 10:42:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:18.494 10:42:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:33:18.750 10:42:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:33:19.684 10:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:33:19.684 10:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:19.684 10:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:19.684 10:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:19.942 10:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:19.942 10:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:19.942 10:42:04 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:19.942 10:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:20.222 10:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:20.222 10:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:20.222 10:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:20.222 10:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:20.480 10:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:20.480 10:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:20.480 10:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:20.480 10:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:20.480 10:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:20.480 10:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:20.480 10:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:20.480 10:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:20.739 10:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:20.739 10:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:20.739 10:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:20.739 10:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:20.997 10:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:20.997 10:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:33:20.997 10:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:21.265 10:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:21.265 10:42:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:33:22.641 10:42:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:33:22.641 10:42:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:22.641 10:42:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:22.641 10:42:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:22.641 10:42:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:22.641 10:42:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:22.641 10:42:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:22.641 10:42:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:22.641 10:42:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:22.641 10:42:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:22.641 10:42:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:22.641 10:42:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:22.919 10:42:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:22.919 10:42:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:22.919 10:42:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:22.919 10:42:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:23.177 10:42:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:23.177 10:42:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:23.177 10:42:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:23.177 10:42:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:23.436 10:42:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:23.436 10:42:08 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:23.436 10:42:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:23.436 10:42:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:23.436 10:42:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:23.436 10:42:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2581092 00:33:23.436 10:42:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 2581092 ']' 00:33:23.436 10:42:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 2581092 00:33:23.436 10:42:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:33:23.436 10:42:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:23.436 10:42:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2581092 00:33:23.436 10:42:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:33:23.436 10:42:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:33:23.436 10:42:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2581092' 00:33:23.436 killing process with pid 2581092 00:33:23.436 10:42:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 2581092 00:33:23.436 10:42:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 2581092 00:33:23.728 Connection closed with partial response: 00:33:23.728 00:33:23.728 00:33:23.728 10:42:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2581092 00:33:23.728 10:42:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:23.728 [2024-07-14 10:41:39.625261] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:33:23.728 [2024-07-14 10:41:39.625314] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2581092 ] 00:33:23.728 EAL: No free 2048 kB hugepages reported on node 1 00:33:23.728 [2024-07-14 10:41:39.692591] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:23.728 [2024-07-14 10:41:39.732020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:23.728 Running I/O for 90 seconds... 
00:33:23.728 [2024-07-14 10:41:53.159727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.728 [2024-07-14 10:41:53.159763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:23.728 [2024-07-14 10:41:53.159784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:38848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.728 [2024-07-14 10:41:53.159792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:23.728 [2024-07-14 10:41:53.159805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:38856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.728 [2024-07-14 10:41:53.159812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:23.728 [2024-07-14 10:41:53.159824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:38864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.728 [2024-07-14 10:41:53.159831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:23.728 [2024-07-14 10:41:53.159843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:38872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.728 [2024-07-14 10:41:53.159850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:23.728 [2024-07-14 10:41:53.159862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:38880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.728 [2024-07-14 10:41:53.159868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:23.728 [2024-07-14 10:41:53.159881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:38888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.728 [2024-07-14 10:41:53.159887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:23.728 [2024-07-14 10:41:53.159899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:38896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.728 [2024-07-14 10:41:53.159906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:23.728 [2024-07-14 10:41:53.159918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.728 [2024-07-14 10:41:53.159924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:23.728 [2024-07-14 10:41:53.159936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:38912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.729 [2024-07-14 10:41:53.159942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:53 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:23.729 [2024-07-14 10:41:53.159955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:38920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.729 [2024-07-14 10:41:53.159966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:23.729 [2024-07-14 10:41:53.159980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:38928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.729 [2024-07-14 10:41:53.159987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:23.729 [2024-07-14 10:41:53.159999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:38936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.729 [2024-07-14 10:41:53.160006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:23.729 [2024-07-14 10:41:53.160019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:38944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.729 [2024-07-14 10:41:53.160026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:23.729 [2024-07-14 10:41:53.160038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:38952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.729 [2024-07-14 10:41:53.160045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:23.729 [2024-07-14 10:41:53.160057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:38960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.729 [2024-07-14 10:41:53.160064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:23.729 [2024-07-14 10:41:53.160076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:38968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.729 [2024-07-14 10:41:53.160082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:23.729 [2024-07-14 10:41:53.160095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:38976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.729 [2024-07-14 10:41:53.160102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:23.729 [2024-07-14 10:41:53.160115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.729 [2024-07-14 10:41:53.160121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:23.729 [2024-07-14 10:41:53.160133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:38992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.729 [2024-07-14 10:41:53.160140] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:23.729 [2024-07-14 10:41:53.160152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.729 [2024-07-14 10:41:53.160159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:23.729 [2024-07-14 10:41:53.160171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:39008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.729 [2024-07-14 10:41:53.160177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:23.729 [2024-07-14 10:41:53.160190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:39016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.729 [2024-07-14 10:41:53.160196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:23.729 [2024-07-14 10:41:53.160210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:39024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.729 [2024-07-14 10:41:53.160216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:23.729 [2024-07-14 10:41:53.160235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:39032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.729 [2024-07-14 10:41:53.160242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:23.729 [2024-07-14 10:41:53.160254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:39040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.729 [2024-07-14 10:41:53.160261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:23.729 [2024-07-14 10:41:53.160273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:39048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.729 [2024-07-14 10:41:53.160280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:23.729 [2024-07-14 10:41:53.160291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.729 [2024-07-14 10:41:53.160298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:23.729 [2024-07-14 10:41:53.160310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:39064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.729 [2024-07-14 10:41:53.160317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:23.729 [2024-07-14 10:41:53.160329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:39072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:23.729 [2024-07-14 10:41:53.160335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:23.729 [2024-07-14 10:41:53.160347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:39080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.729 [2024-07-14 10:41:53.160354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:23.729 [2024-07-14 10:41:53.160366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:39088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.729 [2024-07-14 10:41:53.160372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:23.729 [2024-07-14 10:41:53.160384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:39096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.729 [2024-07-14 10:41:53.160390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:23.729 [2024-07-14 10:41:53.160403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:39104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.729 [2024-07-14 10:41:53.160409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:23.729 [2024-07-14 10:41:53.160421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:39112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.729 [2024-07-14 10:41:53.160429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:23.729 [2024-07-14 10:41:53.160444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:39120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.729 [2024-07-14 10:41:53.160451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:23.729 [2024-07-14 10:41:53.160462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:39128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.729 [2024-07-14 10:41:53.160469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:23.729 [2024-07-14 10:41:53.160481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:39136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.729 [2024-07-14 10:41:53.160487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:23.729 [2024-07-14 10:41:53.160499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:39144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.729 [2024-07-14 10:41:53.160506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:23.729 [2024-07-14 10:41:53.160520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:39152 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.729 [2024-07-14 10:41:53.160527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:23.729 [2024-07-14 10:41:53.160539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:39160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.729 [2024-07-14 10:41:53.160546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:23.729 [2024-07-14 10:41:53.160947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:39168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.729 [2024-07-14 10:41:53.160960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:23.729 [2024-07-14 10:41:53.160975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:39176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.729 [2024-07-14 10:41:53.160982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:23.729 [2024-07-14 10:41:53.160994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.729 [2024-07-14 10:41:53.161001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:23.729 [2024-07-14 10:41:53.161013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:39192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.729 [2024-07-14 10:41:53.161020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:23.729 [2024-07-14 10:41:53.161032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.729 [2024-07-14 10:41:53.161039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:23.729 [2024-07-14 10:41:53.161050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:39208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.729 [2024-07-14 10:41:53.161057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:23.729 [2024-07-14 10:41:53.161069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:39216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.729 [2024-07-14 10:41:53.161078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:23.729 [2024-07-14 10:41:53.161090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.729 [2024-07-14 10:41:53.161097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:23.729 [2024-07-14 10:41:53.161109] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:39232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.729 [2024-07-14 10:41:53.161116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:23.729 [2024-07-14 10:41:53.161128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:39240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.729 [2024-07-14 10:41:53.161135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:23.730 [2024-07-14 10:41:53.161147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:39248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.730 [2024-07-14 10:41:53.161153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:23.730 [2024-07-14 10:41:53.161165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:39256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.730 [2024-07-14 10:41:53.161172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:23.730 [2024-07-14 10:41:53.161185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:38768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.730 [2024-07-14 10:41:53.161191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:23.730 [2024-07-14 10:41:53.161204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:38776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.730 [2024-07-14 10:41:53.161211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:23.730 [2024-07-14 10:41:53.161223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:39264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.730 [2024-07-14 10:41:53.161236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:23.730 [2024-07-14 10:41:53.161248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:39272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.730 [2024-07-14 10:41:53.161255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:23.730 [2024-07-14 10:41:53.161267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:39280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.730 [2024-07-14 10:41:53.161273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:23.730 [2024-07-14 10:41:53.161286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:39288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.730 [2024-07-14 10:41:53.161292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:23.730 [2024-07-14 
10:41:53.161305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:39296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.730 [2024-07-14 10:41:53.161313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:23.730 [2024-07-14 10:41:53.161326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:39304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.730 [2024-07-14 10:41:53.161332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:23.730 [2024-07-14 10:41:53.161344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:39312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.730 [2024-07-14 10:41:53.161350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:23.730 [2024-07-14 10:41:53.161362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:39320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.730 [2024-07-14 10:41:53.161369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:23.730 [2024-07-14 10:41:53.161381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:39328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.730 [2024-07-14 10:41:53.161387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:23.730 [2024-07-14 10:41:53.161399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:39336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.730 [2024-07-14 10:41:53.161405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:23.730 [2024-07-14 10:41:53.161417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:39344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.730 [2024-07-14 10:41:53.161424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:23.730 [2024-07-14 10:41:53.161437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:39352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.730 [2024-07-14 10:41:53.161443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:23.730 [2024-07-14 10:41:53.161456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:39360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.730 [2024-07-14 10:41:53.161463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:23.730 [2024-07-14 10:41:53.161475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:39368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.730 [2024-07-14 10:41:53.161481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 
sqhd:0078 p:0 m:0 dnr:0 00:33:23.730 [2024-07-14 10:41:53.161494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.730 [2024-07-14 10:41:53.161500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:23.730 [2024-07-14 10:41:53.161512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.730 [2024-07-14 10:41:53.161519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:23.730 [2024-07-14 10:41:53.161533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:39392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.730 [2024-07-14 10:41:53.161539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:23.730 [2024-07-14 10:41:53.161553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:39400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.730 [2024-07-14 10:41:53.161560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:23.730 [2024-07-14 10:41:53.161573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:39408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.730 [2024-07-14 10:41:53.161580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:23.730 [2024-07-14 10:41:53.161592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:39416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.730 [2024-07-14 10:41:53.161599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:23.730 [2024-07-14 10:41:53.161611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:39424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.730 [2024-07-14 10:41:53.161618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:23.730 [2024-07-14 10:41:53.161630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:39432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.730 [2024-07-14 10:41:53.161637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:23.730 [2024-07-14 10:41:53.161651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:39440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.730 [2024-07-14 10:41:53.161658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:23.730 [2024-07-14 10:41:53.161670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:39448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.730 [2024-07-14 10:41:53.161677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:23.730 [2024-07-14 10:41:53.161689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:39456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.730 [2024-07-14 10:41:53.161695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:23.730 [2024-07-14 10:41:53.161708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:39464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.730 [2024-07-14 10:41:53.161714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:23.730 [2024-07-14 10:41:53.161727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:39472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.730 [2024-07-14 10:41:53.161733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:23.730 [2024-07-14 10:41:53.161745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.730 [2024-07-14 10:41:53.161752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:23.730 [2024-07-14 10:41:53.161765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.730 [2024-07-14 10:41:53.161771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:23.730 [2024-07-14 10:41:53.161784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:39496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.730 [2024-07-14 10:41:53.161791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:23.730 [2024-07-14 10:41:53.161803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:39504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.730 [2024-07-14 10:41:53.161809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:23.730 [2024-07-14 10:41:53.161822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:39512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.730 [2024-07-14 10:41:53.161828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:23.730 [2024-07-14 10:41:53.161840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:39520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.730 [2024-07-14 10:41:53.161846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:23.730 [2024-07-14 10:41:53.161858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:39528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.730 [2024-07-14 10:41:53.161865] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:23.730 [2024-07-14 10:41:53.161877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:39536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.731 [2024-07-14 10:41:53.161884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:23.731 [2024-07-14 10:41:53.162314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:39544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.731 [2024-07-14 10:41:53.162326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:23.731 [2024-07-14 10:41:53.162340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:39552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.731 [2024-07-14 10:41:53.162347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:23.731 [2024-07-14 10:41:53.162359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:39560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.731 [2024-07-14 10:41:53.162366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:23.731 [2024-07-14 10:41:53.162380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:39568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.731 [2024-07-14 10:41:53.162387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:23.731 [2024-07-14 10:41:53.162399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:39576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.731 [2024-07-14 10:41:53.162405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:23.731 [2024-07-14 10:41:53.162417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:39584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.731 [2024-07-14 10:41:53.162424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:23.731 [2024-07-14 10:41:53.162436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:39592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.731 [2024-07-14 10:41:53.162445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:23.731 [2024-07-14 10:41:53.162461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:39600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.731 [2024-07-14 10:41:53.162468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:23.731 [2024-07-14 10:41:53.162480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:39608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:23.731 [2024-07-14 10:41:53.162487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:23.731 [2024-07-14 10:41:53.162499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:39616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.731 [2024-07-14 10:41:53.162505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:23.731 [2024-07-14 10:41:53.162517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:39624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.731 [2024-07-14 10:41:53.162524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:23.731 [2024-07-14 10:41:53.162536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.731 [2024-07-14 10:41:53.162543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:23.731 [2024-07-14 10:41:53.162555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:39640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.731 [2024-07-14 10:41:53.162561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:23.731 [2024-07-14 10:41:53.162573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:39648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.731 [2024-07-14 10:41:53.162579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:23.731 [2024-07-14 10:41:53.162591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:39656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.731 [2024-07-14 10:41:53.162598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:23.731 [2024-07-14 10:41:53.162610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:39664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.731 [2024-07-14 10:41:53.162616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:23.731 [2024-07-14 10:41:53.162628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:39672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.731 [2024-07-14 10:41:53.162635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:23.731 [2024-07-14 10:41:53.162647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:39680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.731 [2024-07-14 10:41:53.162653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:23.731 [2024-07-14 10:41:53.162665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 
lba:39688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.731 [2024-07-14 10:41:53.162673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:23.731 [2024-07-14 10:41:53.162686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:39696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.731 [2024-07-14 10:41:53.162693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:23.731 [2024-07-14 10:41:53.162705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.731 [2024-07-14 10:41:53.162711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:23.731 [2024-07-14 10:41:53.162723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:39712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.731 [2024-07-14 10:41:53.162730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:23.731 [2024-07-14 10:41:53.162741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:39720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.731 [2024-07-14 10:41:53.162748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:23.731 [2024-07-14 10:41:53.162761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:39728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.731 [2024-07-14 10:41:53.162768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:23.731 [2024-07-14 10:41:53.162780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:39736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.731 [2024-07-14 10:41:53.162786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:23.731 [2024-07-14 10:41:53.162798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.731 [2024-07-14 10:41:53.162805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:23.731 [2024-07-14 10:41:53.162817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:39752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.731 [2024-07-14 10:41:53.162823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:23.731 [2024-07-14 10:41:53.162835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:39760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.731 [2024-07-14 10:41:53.162841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:23.731 [2024-07-14 10:41:53.162853] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:39768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.731 [2024-07-14 10:41:53.162860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:23.731 [2024-07-14 10:41:53.162872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:39776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.731 [2024-07-14 10:41:53.162878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:23.731 [2024-07-14 10:41:53.162890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:38784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.731 [2024-07-14 10:41:53.162896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:23.731 [2024-07-14 10:41:53.162910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:38792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.731 [2024-07-14 10:41:53.162916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:23.731 [2024-07-14 10:41:53.162928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:38800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.731 [2024-07-14 10:41:53.162935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:23.731 [2024-07-14 10:41:53.162947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:38808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.731 [2024-07-14 10:41:53.162953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:23.731 [2024-07-14 10:41:53.162965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:38816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.731 [2024-07-14 10:41:53.162972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:23.731 [2024-07-14 10:41:53.162985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:38824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.731 [2024-07-14 10:41:53.162992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:23.731 [2024-07-14 10:41:53.163004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:38832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.731 [2024-07-14 10:41:53.163010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:23.731 [2024-07-14 10:41:53.163023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:39784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.731 [2024-07-14 10:41:53.163029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 
00:33:23.731 [2024-07-14 10:41:53.163041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:38840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.731 [2024-07-14 10:41:53.163047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:23.731 [2024-07-14 10:41:53.163061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:38848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.731 [2024-07-14 10:41:53.163067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:23.731 [2024-07-14 10:41:53.163079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:38856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.731 [2024-07-14 10:41:53.163086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:23.732 [2024-07-14 10:41:53.163098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:38864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.732 [2024-07-14 10:41:53.163104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:23.732 [2024-07-14 10:41:53.163116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.732 [2024-07-14 10:41:53.163122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:23.732 [2024-07-14 10:41:53.163136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:38880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.732 [2024-07-14 10:41:53.163142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:23.732 [2024-07-14 10:41:53.163154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:38888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.732 [2024-07-14 10:41:53.163161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:23.732 [2024-07-14 10:41:53.163173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:38896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.732 [2024-07-14 10:41:53.163179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:23.732 [2024-07-14 10:41:53.163191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:38904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.732 [2024-07-14 10:41:53.163198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:23.732 [2024-07-14 10:41:53.163598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:38912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.732 [2024-07-14 10:41:53.163610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:117 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:23.732 [2024-07-14 10:41:53.163625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:38920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.732 [2024-07-14 10:41:53.163631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:23.732 [2024-07-14 10:41:53.163644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:38928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.732 [2024-07-14 10:41:53.163650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:23.732 [2024-07-14 10:41:53.163662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:38936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.732 [2024-07-14 10:41:53.163669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:23.732 [2024-07-14 10:41:53.163682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.732 [2024-07-14 10:41:53.163689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:23.732 [2024-07-14 10:41:53.163701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:38952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.732 [2024-07-14 10:41:53.163708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:23.732 [2024-07-14 10:41:53.163720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:38960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.732 [2024-07-14 10:41:53.163726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:23.732 [2024-07-14 10:41:53.163738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:38968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.732 [2024-07-14 10:41:53.163744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:23.732 [2024-07-14 10:41:53.163758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:38976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.732 [2024-07-14 10:41:53.163770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:23.732 [2024-07-14 10:41:53.163782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:38984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.732 [2024-07-14 10:41:53.163788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:23.732 [2024-07-14 10:41:53.163801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:38992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.732 [2024-07-14 10:41:53.163807] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:23.732 [2024-07-14 10:41:53.163819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:39000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.732 [2024-07-14 10:41:53.163826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:23.732 [2024-07-14 10:41:53.163838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:39008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.732 [2024-07-14 10:41:53.163844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:23.732 [2024-07-14 10:41:53.163856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.732 [2024-07-14 10:41:53.163862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:23.732 [2024-07-14 10:41:53.163875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.732 [2024-07-14 10:41:53.163881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:23.732 [2024-07-14 10:41:53.163893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:39032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.732 [2024-07-14 10:41:53.163899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:23.732 [2024-07-14 10:41:53.163911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:39040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.732 [2024-07-14 10:41:53.163918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:23.732 [2024-07-14 10:41:53.163930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:39048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.732 [2024-07-14 10:41:53.163936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:23.732 [2024-07-14 10:41:53.163948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:39056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.732 [2024-07-14 10:41:53.163954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:23.732 [2024-07-14 10:41:53.163966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:39064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.732 [2024-07-14 10:41:53.163973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:23.732 [2024-07-14 10:41:53.163987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:23.732 [2024-07-14 10:41:53.163994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:23.732 [2024-07-14 10:41:53.164007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:39080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.732 [2024-07-14 10:41:53.164013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:23.732 [2024-07-14 10:41:53.164025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:39088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.732 [2024-07-14 10:41:53.164031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:23.732 [2024-07-14 10:41:53.164043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:39096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.732 [2024-07-14 10:41:53.164050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:23.732 [2024-07-14 10:41:53.164063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:39104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.732 [2024-07-14 10:41:53.164069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:23.732 [2024-07-14 10:41:53.164081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:39112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.732 [2024-07-14 10:41:53.164088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:23.732 [2024-07-14 10:41:53.164100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:39120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.732 [2024-07-14 10:41:53.164106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:23.732 [2024-07-14 10:41:53.164118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:39128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.732 [2024-07-14 10:41:53.164125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:23.732 [2024-07-14 10:41:53.164137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:39136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.732 [2024-07-14 10:41:53.164144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:23.733 [2024-07-14 10:41:53.164156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:39144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.733 [2024-07-14 10:41:53.164162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:23.733 [2024-07-14 10:41:53.164174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 
lba:39152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.733 [2024-07-14 10:41:53.164180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:23.733 [2024-07-14 10:41:53.164193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:39160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.733 [2024-07-14 10:41:53.164199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:23.733 [2024-07-14 10:41:53.164499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:39168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.733 [2024-07-14 10:41:53.164510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:23.733 [2024-07-14 10:41:53.164527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:39176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.733 [2024-07-14 10:41:53.164534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:23.733 [2024-07-14 10:41:53.164546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:39184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.733 [2024-07-14 10:41:53.164552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:23.733 [2024-07-14 10:41:53.164564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:39192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.733 [2024-07-14 10:41:53.164570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:23.733 [2024-07-14 10:41:53.164584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:39200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.733 [2024-07-14 10:41:53.164591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:23.733 [2024-07-14 10:41:53.164603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.733 [2024-07-14 10:41:53.164609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:23.733 [2024-07-14 10:41:53.164621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.733 [2024-07-14 10:41:53.164627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:23.733 [2024-07-14 10:41:53.164640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:39224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.733 [2024-07-14 10:41:53.164646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:23.733 [2024-07-14 10:41:53.164659] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:39232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.733 [2024-07-14 10:41:53.164666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:23.733 [2024-07-14 10:41:53.174122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:39240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.733 [2024-07-14 10:41:53.174134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:23.733 [2024-07-14 10:41:53.174150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:39248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.733 [2024-07-14 10:41:53.174158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:23.733 [2024-07-14 10:41:53.174173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:39256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.733 [2024-07-14 10:41:53.174181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:23.733 [2024-07-14 10:41:53.174196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:38768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.733 [2024-07-14 10:41:53.174204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:23.733 [2024-07-14 10:41:53.174221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:38776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.733 [2024-07-14 10:41:53.174233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:23.733 [2024-07-14 10:41:53.174248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:39264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.733 [2024-07-14 10:41:53.174256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:23.733 [2024-07-14 10:41:53.174271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:39272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.733 [2024-07-14 10:41:53.174279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:23.733 [2024-07-14 10:41:53.174294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:39280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.733 [2024-07-14 10:41:53.174301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:23.733 [2024-07-14 10:41:53.174316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:39288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.733 [2024-07-14 10:41:53.174325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006e p:0 m:0 dnr:0 
00:33:23.733 [2024-07-14 10:41:53.174339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.733 [2024-07-14 10:41:53.174347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:23.733 [2024-07-14 10:41:53.174362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.733 [2024-07-14 10:41:53.174370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:23.733 [2024-07-14 10:41:53.174386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:39312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.733 [2024-07-14 10:41:53.174394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:23.733 [2024-07-14 10:41:53.174409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:39320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.733 [2024-07-14 10:41:53.174417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:23.733 [2024-07-14 10:41:53.174431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:39328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.733 [2024-07-14 10:41:53.174439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:23.733 [2024-07-14 10:41:53.174454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:39336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.733 [2024-07-14 10:41:53.174462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:23.733 [2024-07-14 10:41:53.174477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:39344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.733 [2024-07-14 10:41:53.174485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:23.733 [2024-07-14 10:41:53.174500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:39352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.733 [2024-07-14 10:41:53.174510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:23.733 [2024-07-14 10:41:53.174525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:39360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.733 [2024-07-14 10:41:53.174532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:23.733 [2024-07-14 10:41:53.174547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:39368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.733 [2024-07-14 10:41:53.174555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:119 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:23.733 [2024-07-14 10:41:53.174569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:39376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.733 [2024-07-14 10:41:53.174577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:23.733 [2024-07-14 10:41:53.174592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:39384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.733 [2024-07-14 10:41:53.174600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:23.733 [2024-07-14 10:41:53.174614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:39392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.733 [2024-07-14 10:41:53.174622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:23.733 [2024-07-14 10:41:53.174637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:39400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.733 [2024-07-14 10:41:53.174645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:23.733 [2024-07-14 10:41:53.174659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:39408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.733 [2024-07-14 10:41:53.174667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:23.733 [2024-07-14 10:41:53.174682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:39416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.733 [2024-07-14 10:41:53.174690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:23.734 [2024-07-14 10:41:53.175092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:39424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.734 [2024-07-14 10:41:53.175107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:23.734 [2024-07-14 10:41:53.175124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:39432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.734 [2024-07-14 10:41:53.175132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:23.734 [2024-07-14 10:41:53.175147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:39440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.734 [2024-07-14 10:41:53.175156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:23.734 [2024-07-14 10:41:53.175170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:39448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.734 [2024-07-14 10:41:53.175181] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:23.734 [2024-07-14 10:41:53.175197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:39456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.734 [2024-07-14 10:41:53.175204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:23.734 [2024-07-14 10:41:53.175220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:39464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.734 [2024-07-14 10:41:53.175235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:23.734 [2024-07-14 10:41:53.175251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.734 [2024-07-14 10:41:53.175260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:23.734 [2024-07-14 10:41:53.175275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:39480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.734 [2024-07-14 10:41:53.175283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:23.734 [2024-07-14 10:41:53.175298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:39488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.734 [2024-07-14 10:41:53.175306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:23.734 [2024-07-14 10:41:53.175321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.734 [2024-07-14 10:41:53.175330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:23.734 [2024-07-14 10:41:53.175344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:39504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.734 [2024-07-14 10:41:53.175352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:23.734 [2024-07-14 10:41:53.175367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.734 [2024-07-14 10:41:53.175375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:23.734 [2024-07-14 10:41:53.175390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:39520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.734 [2024-07-14 10:41:53.175398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:23.734 [2024-07-14 10:41:53.175412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:39528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:23.734 [2024-07-14 10:41:53.175420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:23.734 [2024-07-14 10:41:53.175435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:39536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.734 [2024-07-14 10:41:53.175443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:23.734 [2024-07-14 10:41:53.175458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:39544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.734 [2024-07-14 10:41:53.175465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:23.734 [2024-07-14 10:41:53.175482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:39552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.734 [2024-07-14 10:41:53.175490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:23.734 [2024-07-14 10:41:53.175505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:39560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.734 [2024-07-14 10:41:53.175513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:23.734 [2024-07-14 10:41:53.175527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.734 [2024-07-14 10:41:53.175535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:23.734 [2024-07-14 10:41:53.175550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:39576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.734 [2024-07-14 10:41:53.175558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:23.734 [2024-07-14 10:41:53.175572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:39584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.734 [2024-07-14 10:41:53.175581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:23.734 [2024-07-14 10:41:53.175596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:39592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.734 [2024-07-14 10:41:53.175604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:23.734 [2024-07-14 10:41:53.175619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:39600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.734 [2024-07-14 10:41:53.175627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:23.734 [2024-07-14 10:41:53.175642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 
lba:39608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.734 [2024-07-14 10:41:53.175649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:23.734 [2024-07-14 10:41:53.175664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:39616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.734 [2024-07-14 10:41:53.175672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:23.734 [2024-07-14 10:41:53.175688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:39624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.734 [2024-07-14 10:41:53.175696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:23.734 [2024-07-14 10:41:53.175711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:39632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.734 [2024-07-14 10:41:53.175718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:23.734 [2024-07-14 10:41:53.175733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:39640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.734 [2024-07-14 10:41:53.175742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:23.734 [2024-07-14 10:41:53.175759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:39648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.734 [2024-07-14 10:41:53.175766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:23.734 [2024-07-14 10:41:53.175781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:39656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.734 [2024-07-14 10:41:53.175789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:23.734 [2024-07-14 10:41:53.175803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:39664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.734 [2024-07-14 10:41:53.175812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:23.734 [2024-07-14 10:41:53.175826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:39672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.734 [2024-07-14 10:41:53.175834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:23.734 [2024-07-14 10:41:53.175849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:39680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.734 [2024-07-14 10:41:53.175857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:23.734 [2024-07-14 10:41:53.175871] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:39688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.734 [2024-07-14 10:41:53.175879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:23.734 [2024-07-14 10:41:53.175894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:39696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.734 [2024-07-14 10:41:53.175902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:23.734 [2024-07-14 10:41:53.175916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.734 [2024-07-14 10:41:53.175925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:23.734 [2024-07-14 10:41:53.175939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:39712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.734 [2024-07-14 10:41:53.175947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:23.734 [2024-07-14 10:41:53.175962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:39720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.734 [2024-07-14 10:41:53.175970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:23.734 [2024-07-14 10:41:53.175984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:39728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.734 [2024-07-14 10:41:53.175993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:23.734 [2024-07-14 10:41:53.176007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:39736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.734 [2024-07-14 10:41:53.176015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:23.735 [2024-07-14 10:41:53.176030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.735 [2024-07-14 10:41:53.176040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:23.735 [2024-07-14 10:41:53.176054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:39752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.735 [2024-07-14 10:41:53.176063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:23.735 [2024-07-14 10:41:53.176077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.735 [2024-07-14 10:41:53.176085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 
00:33:23.735 [2024-07-14 10:41:53.176099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:39768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.735 [2024-07-14 10:41:53.176107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:23.735 [2024-07-14 10:41:53.176122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:39776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.735 [2024-07-14 10:41:53.176130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:23.735 [2024-07-14 10:41:53.176145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:38784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.735 [2024-07-14 10:41:53.176152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:23.735 [2024-07-14 10:41:53.176167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:38792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.735 [2024-07-14 10:41:53.176175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:23.735 [2024-07-14 10:41:53.176190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:38800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.735 [2024-07-14 10:41:53.176197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:23.735 [2024-07-14 10:41:53.176212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:38808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.735 [2024-07-14 10:41:53.176220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:23.735 [2024-07-14 10:41:53.176241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:38816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.735 [2024-07-14 10:41:53.176249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:23.735 [2024-07-14 10:41:53.176264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:38824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.735 [2024-07-14 10:41:53.176271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:23.735 [2024-07-14 10:41:53.176286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:38832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.735 [2024-07-14 10:41:53.176294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:23.735 [2024-07-14 10:41:53.176309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:39784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.735 [2024-07-14 10:41:53.176319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:23.735 [2024-07-14 10:41:53.176335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:38840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.735 [2024-07-14 10:41:53.176342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:23.735 [2024-07-14 10:41:53.176357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:38848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.735 [2024-07-14 10:41:53.176367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:23.735 [2024-07-14 10:41:53.176382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:38856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.735 [2024-07-14 10:41:53.176389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:23.735 [2024-07-14 10:41:53.176404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:38864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.735 [2024-07-14 10:41:53.176412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:23.735 [2024-07-14 10:41:53.176426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:38872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.735 [2024-07-14 10:41:53.176434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:23.735 [2024-07-14 10:41:53.176449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:38880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.735 [2024-07-14 10:41:53.176456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:23.735 [2024-07-14 10:41:53.176471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:38888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.735 [2024-07-14 10:41:53.176479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:23.735 [2024-07-14 10:41:53.176493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:38896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.735 [2024-07-14 10:41:53.176501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:23.735 [2024-07-14 10:41:53.176515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:38904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.735 [2024-07-14 10:41:53.176523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:23.735 [2024-07-14 10:41:53.176538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.735 [2024-07-14 10:41:53.176546] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:23.735 [2024-07-14 10:41:53.176560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:38920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.735 [2024-07-14 10:41:53.176568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:23.735 [2024-07-14 10:41:53.176583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:38928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.735 [2024-07-14 10:41:53.176590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:23.735 [2024-07-14 10:41:53.176607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:38936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.735 [2024-07-14 10:41:53.176615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:23.735 [2024-07-14 10:41:53.176629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:38944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.735 [2024-07-14 10:41:53.176637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:23.735 [2024-07-14 10:41:53.176652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:38952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.735 [2024-07-14 10:41:53.176660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:23.735 [2024-07-14 10:41:53.176674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:38960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.735 [2024-07-14 10:41:53.176682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:23.735 [2024-07-14 10:41:53.176696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:38968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.735 [2024-07-14 10:41:53.176704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:23.735 [2024-07-14 10:41:53.176720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.735 [2024-07-14 10:41:53.176729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:23.735 [2024-07-14 10:41:53.176744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:38984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.735 [2024-07-14 10:41:53.176752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:23.735 [2024-07-14 10:41:53.176766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:38992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:23.735 [2024-07-14 10:41:53.176774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:23.735 [2024-07-14 10:41:53.176789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:39000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.735 [2024-07-14 10:41:53.176797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:23.735 [2024-07-14 10:41:53.176811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:39008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.735 [2024-07-14 10:41:53.176819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:23.735 [2024-07-14 10:41:53.176833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:39016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.735 [2024-07-14 10:41:53.176841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:23.735 [2024-07-14 10:41:53.176856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:39024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.735 [2024-07-14 10:41:53.176863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:23.735 [2024-07-14 10:41:53.176880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:39032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.735 [2024-07-14 10:41:53.176888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:23.735 [2024-07-14 10:41:53.176902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.735 [2024-07-14 10:41:53.176910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:23.735 [2024-07-14 10:41:53.176925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.736 [2024-07-14 10:41:53.176932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:23.736 [2024-07-14 10:41:53.176947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:39056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.736 [2024-07-14 10:41:53.176955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:23.736 [2024-07-14 10:41:53.176969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:39064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.736 [2024-07-14 10:41:53.176977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:23.736 [2024-07-14 10:41:53.176992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 
lba:39072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.736 [2024-07-14 10:41:53.176999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:23.736 [2024-07-14 10:41:53.177014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.736 [2024-07-14 10:41:53.177022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:23.736 [2024-07-14 10:41:53.177037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:39088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.736 [2024-07-14 10:41:53.177044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:23.736 [2024-07-14 10:41:53.177059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:39096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.736 [2024-07-14 10:41:53.177067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:23.736 [2024-07-14 10:41:53.177081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:39104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.736 [2024-07-14 10:41:53.177090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:23.736 [2024-07-14 10:41:53.177105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:39112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.736 [2024-07-14 10:41:53.177113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:23.736 [2024-07-14 10:41:53.177128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:39120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.736 [2024-07-14 10:41:53.177135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:23.736 [2024-07-14 10:41:53.177150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:39128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.736 [2024-07-14 10:41:53.177159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:23.736 [2024-07-14 10:41:53.177174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.736 [2024-07-14 10:41:53.177182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:23.736 [2024-07-14 10:41:53.177196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:39144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.736 [2024-07-14 10:41:53.177204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:23.736 [2024-07-14 10:41:53.177219] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:39152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.736 [2024-07-14 10:41:53.177231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:23.736 [2024-07-14 10:41:53.178170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:39160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.736 [2024-07-14 10:41:53.178187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:23.736 [2024-07-14 10:41:53.178205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:39168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.736 [2024-07-14 10:41:53.178214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:23.736 [2024-07-14 10:41:53.178235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:39176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.736 [2024-07-14 10:41:53.178243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:23.736 [2024-07-14 10:41:53.178258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:39184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.736 [2024-07-14 10:41:53.178266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:23.736 [2024-07-14 10:41:53.178281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:39192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.736 [2024-07-14 10:41:53.178289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:23.736 [2024-07-14 10:41:53.178304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:39200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.736 [2024-07-14 10:41:53.178312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:23.736 [2024-07-14 10:41:53.178326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:39208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.736 [2024-07-14 10:41:53.178334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:23.736 [2024-07-14 10:41:53.178349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:39216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.736 [2024-07-14 10:41:53.178357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:23.736 [2024-07-14 10:41:53.178371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:39224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.736 [2024-07-14 10:41:53.178383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 
00:33:23.736 [2024-07-14 10:41:53.178398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:39232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.736 [2024-07-14 10:41:53.178407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:23.736 [2024-07-14 10:41:53.178421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:39240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.736 [2024-07-14 10:41:53.178429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:23.736 [2024-07-14 10:41:53.178444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:39248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.736 [2024-07-14 10:41:53.178452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:23.736 [2024-07-14 10:41:53.178467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:39256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.736 [2024-07-14 10:41:53.178474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:23.736 [2024-07-14 10:41:53.178489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:38768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.736 [2024-07-14 10:41:53.178497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:23.736 [2024-07-14 10:41:53.178512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:38776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.736 [2024-07-14 10:41:53.178519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:23.736 [2024-07-14 10:41:53.178534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.736 [2024-07-14 10:41:53.178542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:23.736 [2024-07-14 10:41:53.178557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:39272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.736 [2024-07-14 10:41:53.178565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:23.736 [2024-07-14 10:41:53.178579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:39280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.736 [2024-07-14 10:41:53.178587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:23.736 [2024-07-14 10:41:53.178602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:39288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.736 [2024-07-14 10:41:53.178609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:24 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:23.736 [2024-07-14 10:41:53.178624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:39296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.736 [2024-07-14 10:41:53.178632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:23.736 [2024-07-14 10:41:53.178646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:39304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.736 [2024-07-14 10:41:53.178654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:23.736 [2024-07-14 10:41:53.178671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:39312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.736 [2024-07-14 10:41:53.178679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:23.736 [2024-07-14 10:41:53.178693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:39320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.736 [2024-07-14 10:41:53.178701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:23.736 [2024-07-14 10:41:53.178715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:39328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.736 [2024-07-14 10:41:53.178723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:23.736 [2024-07-14 10:41:53.178738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:39336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.736 [2024-07-14 10:41:53.178746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:23.736 [2024-07-14 10:41:53.178761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:39344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.736 [2024-07-14 10:41:53.178769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:23.736 [2024-07-14 10:41:53.178783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:39352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.736 [2024-07-14 10:41:53.178791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:23.736 [2024-07-14 10:41:53.178806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.736 [2024-07-14 10:41:53.178814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:23.737 [2024-07-14 10:41:53.178828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.737 [2024-07-14 10:41:53.178836] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:23.737 [2024-07-14 10:41:53.178850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:39376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.737 [2024-07-14 10:41:53.178858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:23.737 [2024-07-14 10:41:53.178873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:39384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.737 [2024-07-14 10:41:53.178880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:23.737 [2024-07-14 10:41:53.178895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:39392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.737 [2024-07-14 10:41:53.178903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:23.737 [2024-07-14 10:41:53.178918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:39400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.737 [2024-07-14 10:41:53.178925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:23.737 [2024-07-14 10:41:53.178944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:39408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.737 [2024-07-14 10:41:53.178952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:23.737 [2024-07-14 10:41:53.179332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:39416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.737 [2024-07-14 10:41:53.179344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:23.737 [2024-07-14 10:41:53.179361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:39424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.737 [2024-07-14 10:41:53.179369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:23.737 [2024-07-14 10:41:53.179384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.737 [2024-07-14 10:41:53.179392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:23.737 [2024-07-14 10:41:53.179407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:39440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.737 [2024-07-14 10:41:53.179415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:23.737 [2024-07-14 10:41:53.179430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:39448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.737 
[2024-07-14 10:41:53.179438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:23.737 [2024-07-14 10:41:53.179452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:39456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.737 [2024-07-14 10:41:53.179460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:23.737 [2024-07-14 10:41:53.179475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:39464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.737 [2024-07-14 10:41:53.179483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:23.737 [2024-07-14 10:41:53.179499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:39472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.737 [2024-07-14 10:41:53.179507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:23.738 [2024-07-14 10:41:53.179522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:39480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.738 [2024-07-14 10:41:53.179530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:23.738 [2024-07-14 10:41:53.179545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.738 [2024-07-14 10:41:53.179553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:23.738 [2024-07-14 10:41:53.179568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:39496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.738 [2024-07-14 10:41:53.179575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:23.738 [2024-07-14 10:41:53.179590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:39504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.738 [2024-07-14 10:41:53.179600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:23.738 [2024-07-14 10:41:53.179615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:39512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.738 [2024-07-14 10:41:53.179623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:23.738 [2024-07-14 10:41:53.179638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.738 [2024-07-14 10:41:53.179645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:23.738 [2024-07-14 10:41:53.179660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39528 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.738 [2024-07-14 10:41:53.179668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:23.738 [2024-07-14 10:41:53.179683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:39536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.738 [2024-07-14 10:41:53.179691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:23.738 [2024-07-14 10:41:53.179705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:39544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.738 [2024-07-14 10:41:53.179713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:23.738 [2024-07-14 10:41:53.179728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:39552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.738 [2024-07-14 10:41:53.179736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:23.738 [2024-07-14 10:41:53.179751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:39560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.738 [2024-07-14 10:41:53.179758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:23.738 [2024-07-14 10:41:53.179773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:39568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.738 [2024-07-14 10:41:53.179781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:23.738 [2024-07-14 10:41:53.179796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.738 [2024-07-14 10:41:53.179804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:23.738 [2024-07-14 10:41:53.179819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:39584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.738 [2024-07-14 10:41:53.179826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:23.738 [2024-07-14 10:41:53.179841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:39592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.738 [2024-07-14 10:41:53.179849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:23.738 [2024-07-14 10:41:53.179865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:39600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.738 [2024-07-14 10:41:53.179875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:23.738 [2024-07-14 10:41:53.179890] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:39608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.738 [2024-07-14 10:41:53.179897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:23.738 [2024-07-14 10:41:53.179912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:39616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.738 [2024-07-14 10:41:53.179920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:23.738 [2024-07-14 10:41:53.179935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:39624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.738 [2024-07-14 10:41:53.179942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:23.738 [2024-07-14 10:41:53.179957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:39632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.738 [2024-07-14 10:41:53.179965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:23.738 [2024-07-14 10:41:53.179980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:39640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.738 [2024-07-14 10:41:53.179988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:23.738 [2024-07-14 10:41:53.180003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:39648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.738 [2024-07-14 10:41:53.180011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:23.738 [2024-07-14 10:41:53.180025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:39656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.738 [2024-07-14 10:41:53.180033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:23.738 [2024-07-14 10:41:53.180048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:39664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.738 [2024-07-14 10:41:53.180056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:23.738 [2024-07-14 10:41:53.180071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:39672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.738 [2024-07-14 10:41:53.180079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:23.738 [2024-07-14 10:41:53.185334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:39680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.738 [2024-07-14 10:41:53.185348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001f p:0 m:0 dnr:0 
00:33:23.738 [2024-07-14 10:41:53.185368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:39688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.738 [2024-07-14 10:41:53.185379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:23.738 [2024-07-14 10:41:53.185397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:39696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.738 [2024-07-14 10:41:53.185406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:23.738 [2024-07-14 10:41:53.185429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:39704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.738 [2024-07-14 10:41:53.185439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:23.738 [2024-07-14 10:41:53.185459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:39712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.738 [2024-07-14 10:41:53.185468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:23.738 [2024-07-14 10:41:53.185485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:39720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.738 [2024-07-14 10:41:53.185495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:23.738 [2024-07-14 10:41:53.185512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:39728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.738 [2024-07-14 10:41:53.185522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:23.738 [2024-07-14 10:41:53.185539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:39736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.738 [2024-07-14 10:41:53.185548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:23.738 [2024-07-14 10:41:53.185565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:39744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.738 [2024-07-14 10:41:53.185574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:23.738 [2024-07-14 10:41:53.185590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:39752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.738 [2024-07-14 10:41:53.185599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:23.738 [2024-07-14 10:41:53.185615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.738 [2024-07-14 10:41:53.185624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:55 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:23.738 [2024-07-14 10:41:53.185641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:39768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.738 [2024-07-14 10:41:53.185650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:23.738 [2024-07-14 10:41:53.185666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:39776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.738 [2024-07-14 10:41:53.185675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:23.738 [2024-07-14 10:41:53.185692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:38784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.738 [2024-07-14 10:41:53.185701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:23.739 [2024-07-14 10:41:53.185718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:38792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.739 [2024-07-14 10:41:53.185726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:23.739 [2024-07-14 10:41:53.185745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:38800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.739 [2024-07-14 10:41:53.185754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:23.739 [2024-07-14 10:41:53.185770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:38808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.739 [2024-07-14 10:41:53.185779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:23.739 [2024-07-14 10:41:53.185796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:38816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.739 [2024-07-14 10:41:53.185805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:23.739 [2024-07-14 10:41:53.185822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:38824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.739 [2024-07-14 10:41:53.185830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:23.739 [2024-07-14 10:41:53.185847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:38832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.739 [2024-07-14 10:41:53.185856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:23.739 [2024-07-14 10:41:53.185873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:39784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.739 [2024-07-14 10:41:53.185882] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:23.739 [2024-07-14 10:41:53.186552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:38840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.739 [2024-07-14 10:41:53.186571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:23.739 [2024-07-14 10:41:53.186591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:38848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.739 [2024-07-14 10:41:53.186600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:23.739 [2024-07-14 10:41:53.186617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:38856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.739 [2024-07-14 10:41:53.186626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:23.739 [2024-07-14 10:41:53.186642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.739 [2024-07-14 10:41:53.186651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:23.739 [2024-07-14 10:41:53.186668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:38872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.739 [2024-07-14 10:41:53.186679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:23.739 [2024-07-14 10:41:53.186695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:38880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.739 [2024-07-14 10:41:53.186704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:23.739 [2024-07-14 10:41:53.186721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:38888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.739 [2024-07-14 10:41:53.186733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:23.739 [2024-07-14 10:41:53.186749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:38896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.739 [2024-07-14 10:41:53.186758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:23.739 [2024-07-14 10:41:53.186775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:38904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.739 [2024-07-14 10:41:53.186784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:23.739 [2024-07-14 10:41:53.186801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:38912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:23.739 [2024-07-14 10:41:53.186810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:23.739 [2024-07-14 10:41:53.186827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:38920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.739 [2024-07-14 10:41:53.186835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:23.739 [2024-07-14 10:41:53.186852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:38928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.739 [2024-07-14 10:41:53.186861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:23.739 [2024-07-14 10:41:53.186877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:38936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.739 [2024-07-14 10:41:53.186886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:23.739 [2024-07-14 10:41:53.186903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:38944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.739 [2024-07-14 10:41:53.186912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:23.739 [2024-07-14 10:41:53.186929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:38952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.739 [2024-07-14 10:41:53.186937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:23.739 [2024-07-14 10:41:53.186954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:38960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.739 [2024-07-14 10:41:53.186963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:23.739 [2024-07-14 10:41:53.186979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:38968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.739 [2024-07-14 10:41:53.186988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:23.739 [2024-07-14 10:41:53.187005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:38976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.739 [2024-07-14 10:41:53.187014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:23.739 [2024-07-14 10:41:53.187030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:38984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.739 [2024-07-14 10:41:53.187041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:23.739 [2024-07-14 10:41:53.187057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:38992 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.739 [2024-07-14 10:41:53.187068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:23.739 [2024-07-14 10:41:53.187088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.739 [2024-07-14 10:41:53.187097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:23.739 [2024-07-14 10:41:53.187115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:39008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.739 [2024-07-14 10:41:53.187124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:23.739 [2024-07-14 10:41:53.187141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:39016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.739 [2024-07-14 10:41:53.187149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:23.739 [2024-07-14 10:41:53.187166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:39024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.739 [2024-07-14 10:41:53.187175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:23.739 [2024-07-14 10:41:53.187192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:39032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.739 [2024-07-14 10:41:53.187201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:23.739 [2024-07-14 10:41:53.187218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:39040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.739 [2024-07-14 10:41:53.187231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:23.739 [2024-07-14 10:41:53.187248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:39048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.739 [2024-07-14 10:41:53.187257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:23.739 [2024-07-14 10:41:53.187274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.739 [2024-07-14 10:41:53.187284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:23.739 [2024-07-14 10:41:53.187300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:39064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.739 [2024-07-14 10:41:53.187309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:23.739 [2024-07-14 10:41:53.187326] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.739 [2024-07-14 10:41:53.187335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:23.739 [2024-07-14 10:41:53.187352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:39080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.739 [2024-07-14 10:41:53.187361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:23.739 [2024-07-14 10:41:53.187379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:39088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.739 [2024-07-14 10:41:53.187389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:23.739 [2024-07-14 10:41:53.187405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.739 [2024-07-14 10:41:53.187414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:23.739 [2024-07-14 10:41:53.187431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:39104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.739 [2024-07-14 10:41:53.187440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:23.739 [2024-07-14 10:41:53.187457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:39112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.740 [2024-07-14 10:41:53.187466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:23.740 [2024-07-14 10:41:53.187482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:39120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.740 [2024-07-14 10:41:53.187491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:23.740 [2024-07-14 10:41:53.187508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:39128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.740 [2024-07-14 10:41:53.187517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:23.740 [2024-07-14 10:41:53.187534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:39136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.740 [2024-07-14 10:41:53.187543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:23.740 [2024-07-14 10:41:53.187560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:39144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.740 [2024-07-14 10:41:53.187569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:23.740 [2024-07-14 
10:41:53.187586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:39152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.740 [2024-07-14 10:41:53.187595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:23.740 [2024-07-14 10:41:53.187611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:39160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.740 [2024-07-14 10:41:53.187620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:23.740 [2024-07-14 10:41:53.187637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:39168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.740 [2024-07-14 10:41:53.187646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:23.740 [2024-07-14 10:41:53.187662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:39176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.740 [2024-07-14 10:41:53.187671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:23.740 [2024-07-14 10:41:53.187689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:39184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.740 [2024-07-14 10:41:53.187698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:23.740 [2024-07-14 10:41:53.187714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.740 [2024-07-14 10:41:53.187723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:23.740 [2024-07-14 10:41:53.187740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.740 [2024-07-14 10:41:53.187749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:23.740 [2024-07-14 10:41:53.187765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:39208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.740 [2024-07-14 10:41:53.187774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:23.740 [2024-07-14 10:41:53.187790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:39216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.740 [2024-07-14 10:41:53.187799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:23.740 [2024-07-14 10:41:53.187816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:39224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.740 [2024-07-14 10:41:53.187825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 
sqhd:0064 p:0 m:0 dnr:0 00:33:23.740 [2024-07-14 10:41:53.187843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:39232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.740 [2024-07-14 10:41:53.187852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:23.740 [2024-07-14 10:41:53.187868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:39240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.740 [2024-07-14 10:41:53.187877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:23.740 [2024-07-14 10:41:53.187894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:39248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.740 [2024-07-14 10:41:53.187903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:23.740 [2024-07-14 10:41:53.187919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:39256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.740 [2024-07-14 10:41:53.187928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:23.740 [2024-07-14 10:41:53.187944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:38768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.740 [2024-07-14 10:41:53.187954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:23.740 [2024-07-14 10:41:53.187970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:38776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.740 [2024-07-14 10:41:53.187979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:23.740 [2024-07-14 10:41:53.187996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:39264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.740 [2024-07-14 10:41:53.188007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:23.740 [2024-07-14 10:41:53.188023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:39272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.740 [2024-07-14 10:41:53.188032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:23.740 [2024-07-14 10:41:53.188048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.740 [2024-07-14 10:41:53.188057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:23.740 [2024-07-14 10:41:53.188074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.740 [2024-07-14 10:41:53.188083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:23.740 [2024-07-14 10:41:53.188099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:39296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.740 [2024-07-14 10:41:53.188108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:23.740 [2024-07-14 10:41:53.188125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:39304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.740 [2024-07-14 10:41:53.188134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:23.740 [2024-07-14 10:41:53.188150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:39312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.740 [2024-07-14 10:41:53.188159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:23.740 [2024-07-14 10:41:53.188176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:39320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.740 [2024-07-14 10:41:53.188185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:23.740 [2024-07-14 10:41:53.188201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:39328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.740 [2024-07-14 10:41:53.188210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:23.740 [2024-07-14 10:41:53.188231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:39336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.740 [2024-07-14 10:41:53.188241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:23.740 [2024-07-14 10:41:53.188257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:39344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.740 [2024-07-14 10:41:53.188267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:23.740 [2024-07-14 10:41:53.188283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:39352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.740 [2024-07-14 10:41:53.188292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:23.740 [2024-07-14 10:41:53.188309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:39360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.740 [2024-07-14 10:41:53.188320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:23.740 [2024-07-14 10:41:53.188336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:39368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.740 [2024-07-14 10:41:53.188345] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:23.740 [2024-07-14 10:41:53.188362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:39376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.740 [2024-07-14 10:41:53.188371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:23.740 [2024-07-14 10:41:53.188387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:39384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.740 [2024-07-14 10:41:53.188396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:23.740 [2024-07-14 10:41:53.188412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:39392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.740 [2024-07-14 10:41:53.188421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:23.740 [2024-07-14 10:41:53.188438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:39400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.740 [2024-07-14 10:41:53.188447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:23.740 [2024-07-14 10:41:53.189290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:39408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.740 [2024-07-14 10:41:53.189307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:23.740 [2024-07-14 10:41:53.189327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.740 [2024-07-14 10:41:53.189337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:23.740 [2024-07-14 10:41:53.189353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:39424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.741 [2024-07-14 10:41:53.189363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:23.741 [2024-07-14 10:41:53.189380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.741 [2024-07-14 10:41:53.189389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:23.741 [2024-07-14 10:41:53.189406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:39440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.741 [2024-07-14 10:41:53.189415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:23.741 [2024-07-14 10:41:53.189431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:39448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:23.741 [2024-07-14 10:41:53.189443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:23.741 [2024-07-14 10:41:53.189459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:39456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.741 [2024-07-14 10:41:53.189468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:23.741 [2024-07-14 10:41:53.189488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:39464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.741 [2024-07-14 10:41:53.189497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:23.741 [2024-07-14 10:41:53.189514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:39472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.741 [2024-07-14 10:41:53.189523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:23.741 [2024-07-14 10:41:53.189539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:39480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.741 [2024-07-14 10:41:53.189548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:23.741 [2024-07-14 10:41:53.189564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.741 [2024-07-14 10:41:53.189573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:23.741 [2024-07-14 10:41:53.189590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:39496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.741 [2024-07-14 10:41:53.189599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:23.741 [2024-07-14 10:41:53.189616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:39504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.741 [2024-07-14 10:41:53.189626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:23.741 [2024-07-14 10:41:53.189647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:39512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.741 [2024-07-14 10:41:53.189658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:23.741 [2024-07-14 10:41:53.189675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:39520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.741 [2024-07-14 10:41:53.189687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:23.741 [2024-07-14 10:41:53.189705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 
lba:39528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.741 [2024-07-14 10:41:53.189715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:23.741 [2024-07-14 10:41:53.189733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:39536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.741 [2024-07-14 10:41:53.189742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:23.741 [2024-07-14 10:41:53.189760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:39544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.741 [2024-07-14 10:41:53.189769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:23.741 [2024-07-14 10:41:53.189785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:39552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.741 [2024-07-14 10:41:53.189795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:23.741 [2024-07-14 10:41:53.189813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:39560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.741 [2024-07-14 10:41:53.189822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:23.741 [2024-07-14 10:41:53.189838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:39568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.741 [2024-07-14 10:41:53.189848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:23.741 [2024-07-14 10:41:53.189867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:39576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.741 [2024-07-14 10:41:53.189876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:23.741 [2024-07-14 10:41:53.189893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:39584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.741 [2024-07-14 10:41:53.189903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:23.741 [2024-07-14 10:41:53.189919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:39592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.741 [2024-07-14 10:41:53.189928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:23.741 [2024-07-14 10:41:53.189945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:39600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.741 [2024-07-14 10:41:53.189954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:23.741 [2024-07-14 10:41:53.189971] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:39608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.741 [2024-07-14 10:41:53.189981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:23.741 [2024-07-14 10:41:53.189998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:39616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.741 [2024-07-14 10:41:53.190007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:23.741 [2024-07-14 10:41:53.190024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:39624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.741 [2024-07-14 10:41:53.190033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:23.741 [2024-07-14 10:41:53.190049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:39632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.741 [2024-07-14 10:41:53.190058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:23.741 [2024-07-14 10:41:53.190091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:39640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.741 [2024-07-14 10:41:53.190103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:23.741 [2024-07-14 10:41:53.190125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:39648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.741 [2024-07-14 10:41:53.190137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:23.741 [2024-07-14 10:41:53.190159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:39656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.741 [2024-07-14 10:41:53.190173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:23.741 [2024-07-14 10:41:53.190195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:39664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.741 [2024-07-14 10:41:53.190207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:23.741 [2024-07-14 10:41:53.190235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.741 [2024-07-14 10:41:53.190248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:23.741 [2024-07-14 10:41:53.190270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:39680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.741 [2024-07-14 10:41:53.190282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001f p:0 m:0 dnr:0 
00:33:23.741 [2024-07-14 10:41:53.190304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:39688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.741 [2024-07-14 10:41:53.190315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:23.741 [2024-07-14 10:41:53.190337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:39696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.741 [2024-07-14 10:41:53.190349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:23.741 [2024-07-14 10:41:53.190371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:39704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.741 [2024-07-14 10:41:53.190383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:23.741 [2024-07-14 10:41:53.190405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:39712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.741 [2024-07-14 10:41:53.190417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:23.741 [2024-07-14 10:41:53.190438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.741 [2024-07-14 10:41:53.190450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:23.741 [2024-07-14 10:41:53.190473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.741 [2024-07-14 10:41:53.190485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:23.741 [2024-07-14 10:41:53.190507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:39736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.741 [2024-07-14 10:41:53.190518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:23.742 [2024-07-14 10:41:53.190540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:39744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.742 [2024-07-14 10:41:53.190552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:23.742 [2024-07-14 10:41:53.190574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:39752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.742 [2024-07-14 10:41:53.190589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:23.742 [2024-07-14 10:41:53.190611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.742 [2024-07-14 10:41:53.190623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:52 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:23.742 [2024-07-14 10:41:53.190645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:39768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.742 [2024-07-14 10:41:53.190657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:23.742 [2024-07-14 10:41:53.190679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:39776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.742 [2024-07-14 10:41:53.190692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:23.742 [2024-07-14 10:41:53.190714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:38784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.742 [2024-07-14 10:41:53.190726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:23.742 [2024-07-14 10:41:53.190748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:38792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.742 [2024-07-14 10:41:53.190760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:23.742 [2024-07-14 10:41:53.190783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:38800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.742 [2024-07-14 10:41:53.190795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:23.742 [2024-07-14 10:41:53.190818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:38808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.742 [2024-07-14 10:41:53.190830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:23.742 [2024-07-14 10:41:53.190852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:38816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.742 [2024-07-14 10:41:53.190864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:23.742 [2024-07-14 10:41:53.190886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:38824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.742 [2024-07-14 10:41:53.190898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:23.742 [2024-07-14 10:41:53.190921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:38832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.742 [2024-07-14 10:41:53.190933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:23.742 [2024-07-14 10:41:53.191769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:39784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.742 [2024-07-14 10:41:53.191790] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:23.742 [2024-07-14 10:41:53.191814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:38840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.742 [2024-07-14 10:41:53.191827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:23.742 [2024-07-14 10:41:53.191857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:38848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.742 [2024-07-14 10:41:53.191869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:23.742 [2024-07-14 10:41:53.191891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.742 [2024-07-14 10:41:53.191903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:23.742 [2024-07-14 10:41:53.191925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:38864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.742 [2024-07-14 10:41:53.191937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:23.742 [2024-07-14 10:41:53.191960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:38872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.742 [2024-07-14 10:41:53.191971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:23.742 [2024-07-14 10:41:53.191994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:38880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.742 [2024-07-14 10:41:53.192005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:23.742 [2024-07-14 10:41:53.192028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:38888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.742 [2024-07-14 10:41:53.192039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:23.742 [2024-07-14 10:41:53.192061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.742 [2024-07-14 10:41:53.192073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:23.742 [2024-07-14 10:41:53.192095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.742 [2024-07-14 10:41:53.192107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:23.742 [2024-07-14 10:41:53.192130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:38912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:23.742 [2024-07-14 10:41:53.192142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:23.742 [2024-07-14 10:41:53.192164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:38920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.742 [2024-07-14 10:41:53.192176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:23.742 [2024-07-14 10:41:53.192198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:38928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.742 [2024-07-14 10:41:53.192210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:23.742 [2024-07-14 10:41:53.192238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:38936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.742 [2024-07-14 10:41:53.192251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:23.742 [2024-07-14 10:41:53.192276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:38944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.742 [2024-07-14 10:41:53.192288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:23.742 [2024-07-14 10:41:53.192311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:38952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.742 [2024-07-14 10:41:53.192323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:23.742 [2024-07-14 10:41:53.192345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:38960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.742 [2024-07-14 10:41:53.192357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:23.742 [2024-07-14 10:41:53.192379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:38968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.742 [2024-07-14 10:41:53.192391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:23.742 [2024-07-14 10:41:53.192413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:38976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.742 [2024-07-14 10:41:53.192425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:23.742 [2024-07-14 10:41:53.192447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:38984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.742 [2024-07-14 10:41:53.192459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:23.742 [2024-07-14 10:41:53.192481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 
lba:38992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.742 [2024-07-14 10:41:53.192493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:23.742 [2024-07-14 10:41:53.192515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:39000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.742 [2024-07-14 10:41:53.192527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:23.743 [2024-07-14 10:41:53.192549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:39008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.743 [2024-07-14 10:41:53.192561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:23.743 [2024-07-14 10:41:53.192582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:39016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.743 [2024-07-14 10:41:53.192595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:23.743 [2024-07-14 10:41:53.192616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:39024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.743 [2024-07-14 10:41:53.192628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:23.743 [2024-07-14 10:41:53.192651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:39032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.743 [2024-07-14 10:41:53.192662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:23.743 [2024-07-14 10:41:53.192685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:39040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.743 [2024-07-14 10:41:53.192698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:23.743 [2024-07-14 10:41:53.192721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.743 [2024-07-14 10:41:53.192733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:23.743 [2024-07-14 10:41:53.192757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.743 [2024-07-14 10:41:53.192770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:23.743 [2024-07-14 10:41:53.192792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:39064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.743 [2024-07-14 10:41:53.192805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:23.743 [2024-07-14 10:41:53.192827] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:39072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.743 [2024-07-14 10:41:53.192838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:23.743 [2024-07-14 10:41:53.192861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:39080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.743 [2024-07-14 10:41:53.192872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:23.743 [2024-07-14 10:41:53.192895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:39088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.743 [2024-07-14 10:41:53.192907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:23.743 [2024-07-14 10:41:53.192929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:39096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.743 [2024-07-14 10:41:53.192940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:23.743 [2024-07-14 10:41:53.192963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:39104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.743 [2024-07-14 10:41:53.192975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:23.743 [2024-07-14 10:41:53.192997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:39112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.743 [2024-07-14 10:41:53.193009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:23.743 [2024-07-14 10:41:53.193031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:39120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.743 [2024-07-14 10:41:53.193043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:23.743 [2024-07-14 10:41:53.193065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:39128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.743 [2024-07-14 10:41:53.193077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:23.743 [2024-07-14 10:41:53.193099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:39136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.743 [2024-07-14 10:41:53.193113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:23.743 [2024-07-14 10:41:53.193136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:39144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.743 [2024-07-14 10:41:53.193148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005a p:0 m:0 dnr:0 
00:33:23.743 [2024-07-14 10:41:53.193170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.743 [2024-07-14 10:41:53.193182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:23.743 [2024-07-14 10:41:53.193204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:39160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.743 [2024-07-14 10:41:53.193216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:23.743 [2024-07-14 10:41:53.193243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:39168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.743 [2024-07-14 10:41:53.193255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:23.743 [2024-07-14 10:41:53.193277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.743 [2024-07-14 10:41:53.193289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:23.743 [2024-07-14 10:41:53.193311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:39184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.743 [2024-07-14 10:41:53.193323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:23.743 [2024-07-14 10:41:53.193345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.743 [2024-07-14 10:41:53.193356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:23.743 [2024-07-14 10:41:53.193379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:39200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.743 [2024-07-14 10:41:53.193391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:23.743 [2024-07-14 10:41:53.193412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:39208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.743 [2024-07-14 10:41:53.193424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:23.743 [2024-07-14 10:41:53.193447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:39216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.743 [2024-07-14 10:41:53.193459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:23.743 [2024-07-14 10:41:53.193481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:39224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.743 [2024-07-14 10:41:53.193492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:114 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:23.743 [2024-07-14 10:41:53.193516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:39232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.743 [2024-07-14 10:41:53.193528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:23.743 [2024-07-14 10:41:53.193552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:39240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.743 [2024-07-14 10:41:53.193564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:23.743 [2024-07-14 10:41:53.193586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.743 [2024-07-14 10:41:53.193597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:23.743 [2024-07-14 10:41:53.193620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:39256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.743 [2024-07-14 10:41:53.193631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:23.743 [2024-07-14 10:41:53.193654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:38768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.743 [2024-07-14 10:41:53.193665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:23.743 [2024-07-14 10:41:53.193687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:38776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.743 [2024-07-14 10:41:53.193699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:23.743 [2024-07-14 10:41:53.193721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:39264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.743 [2024-07-14 10:41:53.193733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:23.743 [2024-07-14 10:41:53.193755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:39272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.743 [2024-07-14 10:41:53.193767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:23.743 [2024-07-14 10:41:53.193789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:39280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.743 [2024-07-14 10:41:53.193801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:23.743 [2024-07-14 10:41:53.193822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:39288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.743 [2024-07-14 10:41:53.193834] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:23.743 [2024-07-14 10:41:53.193856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:39296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.743 [2024-07-14 10:41:53.193868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:23.743 [2024-07-14 10:41:53.193890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:39304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.743 [2024-07-14 10:41:53.193902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:23.744 [2024-07-14 10:41:53.193924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:39312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.744 [2024-07-14 10:41:53.193936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:23.744 [2024-07-14 10:41:53.193960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:39320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.744 [2024-07-14 10:41:53.193972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:23.744 [2024-07-14 10:41:53.193995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:39328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.744 [2024-07-14 10:41:53.194006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:23.744 [2024-07-14 10:41:53.194028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:39336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.744 [2024-07-14 10:41:53.194040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:23.744 [2024-07-14 10:41:53.194062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:39344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.744 [2024-07-14 10:41:53.194074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:23.744 [2024-07-14 10:41:53.194096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:39352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.744 [2024-07-14 10:41:53.194108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:23.744 [2024-07-14 10:41:53.194130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:39360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.744 [2024-07-14 10:41:53.194142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:23.744 [2024-07-14 10:41:53.194164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:23.744 [2024-07-14 10:41:53.194176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:23.744 [2024-07-14 10:41:53.194198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:39376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.744 [2024-07-14 10:41:53.194209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:23.744 [2024-07-14 10:41:53.194236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:39384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.744 [2024-07-14 10:41:53.194248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:23.744 [2024-07-14 10:41:53.194270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:39392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.744 [2024-07-14 10:41:53.194283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:23.744 [2024-07-14 10:41:53.195388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:39400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.744 [2024-07-14 10:41:53.195409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:23.744 [2024-07-14 10:41:53.195434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:39408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.744 [2024-07-14 10:41:53.195446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:23.744 [2024-07-14 10:41:53.195469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.744 [2024-07-14 10:41:53.195485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:23.744 [2024-07-14 10:41:53.195506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:39424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.744 [2024-07-14 10:41:53.195519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:23.744 [2024-07-14 10:41:53.195541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:39432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.744 [2024-07-14 10:41:53.195553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:23.744 [2024-07-14 10:41:53.195575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:39440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.744 [2024-07-14 10:41:53.195587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:23.744 [2024-07-14 10:41:53.195609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 
lba:39448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.744 [2024-07-14 10:41:53.195621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:23.744 [2024-07-14 10:41:53.195643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:39456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.744 [2024-07-14 10:41:53.195656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:23.744 [2024-07-14 10:41:53.195678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:39464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.744 [2024-07-14 10:41:53.195690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:23.744 [2024-07-14 10:41:53.195712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.744 [2024-07-14 10:41:53.195724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:23.744 [2024-07-14 10:41:53.195746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:39480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.744 [2024-07-14 10:41:53.195759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:23.744 [2024-07-14 10:41:53.195780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:39488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.744 [2024-07-14 10:41:53.195793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:23.744 [2024-07-14 10:41:53.195814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:39496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.744 [2024-07-14 10:41:53.195827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:23.744 [2024-07-14 10:41:53.195849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.744 [2024-07-14 10:41:53.195861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:23.744 [2024-07-14 10:41:53.195883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.744 [2024-07-14 10:41:53.195897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:23.744 [2024-07-14 10:41:53.195920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:39520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.744 [2024-07-14 10:41:53.195932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:23.744 [2024-07-14 10:41:53.195954] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:39528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.744 [2024-07-14 10:41:53.195966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:23.744 [2024-07-14 10:41:53.195988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:39536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.744 [2024-07-14 10:41:53.196000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:23.744 [2024-07-14 10:41:53.196022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:39544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.744 [2024-07-14 10:41:53.196034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:23.744 [2024-07-14 10:41:53.196056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:39552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.744 [2024-07-14 10:41:53.196068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:23.744 [2024-07-14 10:41:53.196090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.744 [2024-07-14 10:41:53.196102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:23.744 [2024-07-14 10:41:53.196124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:39568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.744 [2024-07-14 10:41:53.196136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:23.744 [2024-07-14 10:41:53.196158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:39576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.744 [2024-07-14 10:41:53.196169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:23.744 [2024-07-14 10:41:53.196191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:39584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.744 [2024-07-14 10:41:53.196204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:23.744 [2024-07-14 10:41:53.196232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:39592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.744 [2024-07-14 10:41:53.196245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:23.744 [2024-07-14 10:41:53.196267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:39600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.744 [2024-07-14 10:41:53.196279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 
00:33:23.744 [2024-07-14 10:41:53.196301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:39608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.744 [2024-07-14 10:41:53.196313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:23.744 [2024-07-14 10:41:53.196337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:39616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.744 [2024-07-14 10:41:53.196349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:23.744 [2024-07-14 10:41:53.196371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:39624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.744 [2024-07-14 10:41:53.196382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:23.745 [2024-07-14 10:41:53.196404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:39632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.745 [2024-07-14 10:41:53.196416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:23.745 [2024-07-14 10:41:53.196438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:39640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.745 [2024-07-14 10:41:53.196450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:23.745 [2024-07-14 10:41:53.196472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:39648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.745 [2024-07-14 10:41:53.196484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:23.745 [2024-07-14 10:41:53.196506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:39656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.745 [2024-07-14 10:41:53.196518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:23.745 [2024-07-14 10:41:53.196540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:39664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.745 [2024-07-14 10:41:53.196552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:23.745 [2024-07-14 10:41:53.196574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:39672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.745 [2024-07-14 10:41:53.196586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:23.745 [2024-07-14 10:41:53.196608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:39680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.745 [2024-07-14 10:41:53.196620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:82 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:23.745 [2024-07-14 10:41:53.196642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:39688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.745 [2024-07-14 10:41:53.196654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:23.745 [2024-07-14 10:41:53.196676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:39696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.745 [2024-07-14 10:41:53.196688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:23.745 [2024-07-14 10:41:53.196710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:39704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.745 [2024-07-14 10:41:53.196722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:23.745 [2024-07-14 10:41:53.196747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:39712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.745 [2024-07-14 10:41:53.196759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:23.745 [2024-07-14 10:41:53.196781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:39720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.745 [2024-07-14 10:41:53.196793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:23.745 [2024-07-14 10:41:53.196815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:39728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.745 [2024-07-14 10:41:53.196827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:23.745 [2024-07-14 10:41:53.196849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:39736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.745 [2024-07-14 10:41:53.196861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:23.745 [2024-07-14 10:41:53.196883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.745 [2024-07-14 10:41:53.196895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:23.745 [2024-07-14 10:41:53.196918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:39752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.745 [2024-07-14 10:41:53.196929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:23.745 [2024-07-14 10:41:53.196952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:39760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.745 [2024-07-14 10:41:53.196963] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:23.745 [2024-07-14 10:41:53.196985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:39768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.745 [2024-07-14 10:41:53.196998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:23.745 [2024-07-14 10:41:53.197019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:39776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.745 [2024-07-14 10:41:53.197031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:23.745 [2024-07-14 10:41:53.197054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:38784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.745 [2024-07-14 10:41:53.197065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:23.745 [2024-07-14 10:41:53.197088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:38792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.745 [2024-07-14 10:41:53.197100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:23.745 [2024-07-14 10:41:53.197122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:38800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.745 [2024-07-14 10:41:53.197134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:23.745 [2024-07-14 10:41:53.197156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:38808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.745 [2024-07-14 10:41:53.197170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:23.745 [2024-07-14 10:41:53.197193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:38816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.745 [2024-07-14 10:41:53.197205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:23.745 [2024-07-14 10:41:53.197231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:38824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.745 [2024-07-14 10:41:53.197244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:23.745 [2024-07-14 10:41:53.197266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:38832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.745 [2024-07-14 10:41:53.197279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:23.745 [2024-07-14 10:41:53.197301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:39784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:23.745 [2024-07-14 10:41:53.197313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:23.745 [2024-07-14 10:41:53.197335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:38840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.745 [2024-07-14 10:41:53.197347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:23.745 [2024-07-14 10:41:53.197369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.745 [2024-07-14 10:41:53.197381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:23.745 [2024-07-14 10:41:53.197403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:38856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.745 [2024-07-14 10:41:53.197416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:23.745 [2024-07-14 10:41:53.197438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:38864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.745 [2024-07-14 10:41:53.197450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:23.745 [2024-07-14 10:41:53.197472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:38872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.745 [2024-07-14 10:41:53.197484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:23.745 [2024-07-14 10:41:53.197506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:38880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.745 [2024-07-14 10:41:53.197518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:23.745 [2024-07-14 10:41:53.198481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:38888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.745 [2024-07-14 10:41:53.198504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:23.745 [2024-07-14 10:41:53.198528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:38896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.745 [2024-07-14 10:41:53.198546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:23.745 [2024-07-14 10:41:53.198569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:38904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.745 [2024-07-14 10:41:53.198581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:23.745 [2024-07-14 10:41:53.198603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 
lba:38912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.745 [2024-07-14 10:41:53.198615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:23.745 [2024-07-14 10:41:53.198638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:38920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.745 [2024-07-14 10:41:53.198649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:23.745 [2024-07-14 10:41:53.198672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:38928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.745 [2024-07-14 10:41:53.198684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:23.745 [2024-07-14 10:41:53.198706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:38936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.746 [2024-07-14 10:41:53.198718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:23.746 [2024-07-14 10:41:53.198740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:38944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.746 [2024-07-14 10:41:53.198751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:23.746 [2024-07-14 10:41:53.198773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:38952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.746 [2024-07-14 10:41:53.198785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:23.746 [2024-07-14 10:41:53.198807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:38960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.746 [2024-07-14 10:41:53.198819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:23.746 [2024-07-14 10:41:53.198841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:38968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.746 [2024-07-14 10:41:53.198853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:23.746 [2024-07-14 10:41:53.198875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:38976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.746 [2024-07-14 10:41:53.198886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:23.746 [2024-07-14 10:41:53.198908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.746 [2024-07-14 10:41:53.198920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:23.746 [2024-07-14 10:41:53.198942] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:38992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.746 [2024-07-14 10:41:53.198954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:23.746 [2024-07-14 10:41:53.198979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:39000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.746 [2024-07-14 10:41:53.198991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:23.746 [2024-07-14 10:41:53.199013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:39008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.746 [2024-07-14 10:41:53.199025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:23.746 [2024-07-14 10:41:53.199047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:39016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.746 [2024-07-14 10:41:53.199059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:23.746 [2024-07-14 10:41:53.199081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:39024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.746 [2024-07-14 10:41:53.199093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:23.746 [2024-07-14 10:41:53.199115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:39032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.746 [2024-07-14 10:41:53.199127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:23.746 [2024-07-14 10:41:53.199149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.746 [2024-07-14 10:41:53.199161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:23.746 [2024-07-14 10:41:53.199184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:39048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.746 [2024-07-14 10:41:53.199196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:23.746 [2024-07-14 10:41:53.199218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.746 [2024-07-14 10:41:53.199238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:23.746 [2024-07-14 10:41:53.199261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:39064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.746 [2024-07-14 10:41:53.199273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 
00:33:23.746 [2024-07-14 10:41:53.199295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:39072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.746 [2024-07-14 10:41:53.199307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:23.746 [2024-07-14 10:41:53.199329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.746 [2024-07-14 10:41:53.199341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:23.746 [2024-07-14 10:41:53.199363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:39088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.746 [2024-07-14 10:41:53.199375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:23.746 [2024-07-14 10:41:53.199400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:39096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.746 [2024-07-14 10:41:53.199413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:23.746 [2024-07-14 10:41:53.199435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:39104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.746 [2024-07-14 10:41:53.199447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:23.746 [2024-07-14 10:41:53.199469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:39112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.746 [2024-07-14 10:41:53.199481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:23.746 [2024-07-14 10:41:53.199503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:39120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.746 [2024-07-14 10:41:53.199515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:23.746 [2024-07-14 10:41:53.199537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:39128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.746 [2024-07-14 10:41:53.199549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:23.746 [2024-07-14 10:41:53.199572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:39136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.746 [2024-07-14 10:41:53.199584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:23.746 [2024-07-14 10:41:53.199606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:39144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.746 [2024-07-14 10:41:53.199618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:63 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:23.746 [2024-07-14 10:41:53.199639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:39152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.746 [2024-07-14 10:41:53.199652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:23.746 [2024-07-14 10:41:53.199674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:39160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.746 [2024-07-14 10:41:53.199686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:23.746 [2024-07-14 10:41:53.199708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:39168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.746 [2024-07-14 10:41:53.199720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:23.746 [2024-07-14 10:41:53.199742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.746 [2024-07-14 10:41:53.199754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:23.746 [2024-07-14 10:41:53.199776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.746 [2024-07-14 10:41:53.199788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:23.746 [2024-07-14 10:41:53.199810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:39192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.746 [2024-07-14 10:41:53.199824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:23.746 [2024-07-14 10:41:53.199846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:39200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.746 [2024-07-14 10:41:53.199858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:23.746 [2024-07-14 10:41:53.199880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:39208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.747 [2024-07-14 10:41:53.199892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:23.747 [2024-07-14 10:41:53.199914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:39216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.747 [2024-07-14 10:41:53.199927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:23.747 [2024-07-14 10:41:53.199948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:39224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.747 [2024-07-14 10:41:53.199960] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:23.747 [2024-07-14 10:41:53.199983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:39232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.747 [2024-07-14 10:41:53.199994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:23.747 [2024-07-14 10:41:53.200016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:39240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.747 [2024-07-14 10:41:53.200032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:23.747 [2024-07-14 10:41:53.200055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:39248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.747 [2024-07-14 10:41:53.200067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:23.747 [2024-07-14 10:41:53.200089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:39256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.747 [2024-07-14 10:41:53.200101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:23.747 [2024-07-14 10:41:53.200124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:38768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.747 [2024-07-14 10:41:53.200136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:23.747 [2024-07-14 10:41:53.200158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:38776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.747 [2024-07-14 10:41:53.200170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:23.747 [2024-07-14 10:41:53.200192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.747 [2024-07-14 10:41:53.200204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:23.747 [2024-07-14 10:41:53.200231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.747 [2024-07-14 10:41:53.200248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:23.747 [2024-07-14 10:41:53.200271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:39280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.747 [2024-07-14 10:41:53.200283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:23.747 [2024-07-14 10:41:53.200305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:39288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:23.747 [2024-07-14 10:41:53.200317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:23.747 [2024-07-14 10:41:53.200339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:39296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.747 [2024-07-14 10:41:53.200351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:23.747 [2024-07-14 10:41:53.200373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:39304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.747 [2024-07-14 10:41:53.200384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:23.747 [2024-07-14 10:41:53.200407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:39312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.747 [2024-07-14 10:41:53.200419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:23.747 [2024-07-14 10:41:53.200442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:39320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.747 [2024-07-14 10:41:53.200455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:23.747 [2024-07-14 10:41:53.200477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:39328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.747 [2024-07-14 10:41:53.200489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:23.747 [2024-07-14 10:41:53.200511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:39336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.747 [2024-07-14 10:41:53.200523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:23.747 [2024-07-14 10:41:53.200545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:39344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.747 [2024-07-14 10:41:53.200557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:23.747 [2024-07-14 10:41:53.200580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:39352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.747 [2024-07-14 10:41:53.200592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:23.747 [2024-07-14 10:41:53.200614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:39360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.747 [2024-07-14 10:41:53.200640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:23.747 [2024-07-14 10:41:53.200655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 
00:33:23.747 [2024-07-14 10:41:53.200663 - 10:41:53.207798] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: repeated for every outstanding I/O on qid:1 (cid 0-126, sqhd wrapping 0x0000-0x007f): WRITE commands (nsid:1, lba 38840-39784, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ commands (nsid:1, lba 38768-38832, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) each complete with status ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0
[2024-07-14 10:41:53.207813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:38992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.752 [2024-07-14 10:41:53.207822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:23.752 [2024-07-14 10:41:53.207836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:39000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.752 [2024-07-14 10:41:53.207844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:23.752 [2024-07-14 10:41:53.207860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:39008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.752 [2024-07-14 10:41:53.207868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:23.752 [2024-07-14 10:41:53.207882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:39016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.752 [2024-07-14 10:41:53.207890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:23.752 [2024-07-14 10:41:53.207905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.752 [2024-07-14 10:41:53.207912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:23.752 [2024-07-14 10:41:53.207926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:39032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.752 [2024-07-14 10:41:53.207934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:23.752 [2024-07-14 10:41:53.207948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.752 [2024-07-14 10:41:53.207956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:23.752 [2024-07-14 10:41:53.207970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:39048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.752 [2024-07-14 10:41:53.207978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:23.752 [2024-07-14 10:41:53.207992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:39056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.752 [2024-07-14 10:41:53.208000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:23.752 [2024-07-14 10:41:53.208014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.752 [2024-07-14 10:41:53.208022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:48 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:23.752 [2024-07-14 10:41:53.208035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:39072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.752 [2024-07-14 10:41:53.208044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:23.753 [2024-07-14 10:41:53.208059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:39080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.753 [2024-07-14 10:41:53.208068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:23.753 [2024-07-14 10:41:53.208082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:39088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.753 [2024-07-14 10:41:53.208089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:23.753 [2024-07-14 10:41:53.208103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:39096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.753 [2024-07-14 10:41:53.208111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:23.753 [2024-07-14 10:41:53.208125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:39104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.753 [2024-07-14 10:41:53.208135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:23.753 [2024-07-14 10:41:53.208150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:39112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.753 [2024-07-14 10:41:53.208158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:23.753 [2024-07-14 10:41:53.208172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:39120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.753 [2024-07-14 10:41:53.208179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:23.753 [2024-07-14 10:41:53.208193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:39128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.753 [2024-07-14 10:41:53.208201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:23.753 [2024-07-14 10:41:53.208215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:39136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.753 [2024-07-14 10:41:53.208223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:23.753 [2024-07-14 10:41:53.208242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:39144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.753 [2024-07-14 10:41:53.208249] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:23.753 [2024-07-14 10:41:53.208264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:39152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.753 [2024-07-14 10:41:53.208271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:23.753 [2024-07-14 10:41:53.208285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.753 [2024-07-14 10:41:53.208293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:23.753 [2024-07-14 10:41:53.208308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.753 [2024-07-14 10:41:53.208316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:23.753 [2024-07-14 10:41:53.208329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:39176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.753 [2024-07-14 10:41:53.208337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:23.753 [2024-07-14 10:41:53.208351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:39184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.753 [2024-07-14 10:41:53.208359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:23.753 [2024-07-14 10:41:53.208373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:39192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.753 [2024-07-14 10:41:53.208381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:23.753 [2024-07-14 10:41:53.208395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:39200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.753 [2024-07-14 10:41:53.208405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:23.753 [2024-07-14 10:41:53.208421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:39208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.753 [2024-07-14 10:41:53.208429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:23.753 [2024-07-14 10:41:53.208443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:39216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.753 [2024-07-14 10:41:53.208451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:23.753 [2024-07-14 10:41:53.208465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:39224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.753 [2024-07-14 10:41:53.208472] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:23.753 [2024-07-14 10:41:53.208486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:39232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.753 [2024-07-14 10:41:53.208494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:23.753 [2024-07-14 10:41:53.208508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:39240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.753 [2024-07-14 10:41:53.208516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:23.753 [2024-07-14 10:41:53.208531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:39248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.753 [2024-07-14 10:41:53.208539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:23.753 [2024-07-14 10:41:53.208554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:39256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.753 [2024-07-14 10:41:53.208561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:23.753 [2024-07-14 10:41:53.208575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:38768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.753 [2024-07-14 10:41:53.208584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:23.753 [2024-07-14 10:41:53.208600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:38776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.753 [2024-07-14 10:41:53.208608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:23.753 [2024-07-14 10:41:53.208623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:39264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.753 [2024-07-14 10:41:53.208631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:23.753 [2024-07-14 10:41:53.208645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:39272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.753 [2024-07-14 10:41:53.208652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:23.753 [2024-07-14 10:41:53.208666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:39280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.753 [2024-07-14 10:41:53.208674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:23.753 [2024-07-14 10:41:53.208690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:39288 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:33:23.753 [2024-07-14 10:41:53.208698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:23.753 [2024-07-14 10:41:53.208712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:39296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.753 [2024-07-14 10:41:53.208720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:23.753 [2024-07-14 10:41:53.208734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:39304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.753 [2024-07-14 10:41:53.208741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:23.753 [2024-07-14 10:41:53.208755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:39312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.753 [2024-07-14 10:41:53.208763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:23.753 [2024-07-14 10:41:53.208778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:39320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.753 [2024-07-14 10:41:53.208786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:23.753 [2024-07-14 10:41:53.208800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:39328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.753 [2024-07-14 10:41:53.208809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:23.753 [2024-07-14 10:41:53.208823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:39336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.753 [2024-07-14 10:41:53.208831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:23.753 [2024-07-14 10:41:53.208845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:39344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.753 [2024-07-14 10:41:53.208860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:23.753 [2024-07-14 10:41:53.208874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:39352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.753 [2024-07-14 10:41:53.208882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:23.753 [2024-07-14 10:41:53.208897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:39360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.753 [2024-07-14 10:41:53.208904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:23.753 [2024-07-14 10:41:53.208919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 
nsid:1 lba:39368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.753 [2024-07-14 10:41:53.208927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:23.753 [2024-07-14 10:41:53.209585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:39376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.753 [2024-07-14 10:41:53.209601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:23.753 [2024-07-14 10:41:53.209621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:39384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.753 [2024-07-14 10:41:53.209629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:23.753 [2024-07-14 10:41:53.209643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:39392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.753 [2024-07-14 10:41:53.209651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:23.754 [2024-07-14 10:41:53.209666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:39400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.754 [2024-07-14 10:41:53.209673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:23.754 [2024-07-14 10:41:53.209688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.754 [2024-07-14 10:41:53.209695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:23.754 [2024-07-14 10:41:53.209710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:39416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.754 [2024-07-14 10:41:53.209717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:23.754 [2024-07-14 10:41:53.209732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:39424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.754 [2024-07-14 10:41:53.209740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:23.754 [2024-07-14 10:41:53.209754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:39432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.754 [2024-07-14 10:41:53.209761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:23.754 [2024-07-14 10:41:53.209775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:39440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.754 [2024-07-14 10:41:53.209783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:23.754 [2024-07-14 10:41:53.209797] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.754 [2024-07-14 10:41:53.209805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:23.754 [2024-07-14 10:41:53.209819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:39456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.754 [2024-07-14 10:41:53.209827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:23.754 [2024-07-14 10:41:53.209841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.754 [2024-07-14 10:41:53.209849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:23.754 [2024-07-14 10:41:53.209864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:39472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.754 [2024-07-14 10:41:53.209872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:23.754 [2024-07-14 10:41:53.209887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:39480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.754 [2024-07-14 10:41:53.209897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:23.754 [2024-07-14 10:41:53.209911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:39488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.754 [2024-07-14 10:41:53.209921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:23.754 [2024-07-14 10:41:53.209935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:39496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.754 [2024-07-14 10:41:53.209942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:23.754 [2024-07-14 10:41:53.209957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:39504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.754 [2024-07-14 10:41:53.209964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:23.754 [2024-07-14 10:41:53.209979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:39512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.754 [2024-07-14 10:41:53.209986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:23.754 [2024-07-14 10:41:53.210001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.754 [2024-07-14 10:41:53.210008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000b p:0 m:0 dnr:0 
00:33:23.754 [2024-07-14 10:41:53.210022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:39528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.754 [2024-07-14 10:41:53.210030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:23.754 [2024-07-14 10:41:53.210045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:39536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.754 [2024-07-14 10:41:53.210052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:23.754 [2024-07-14 10:41:53.210067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:39544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.754 [2024-07-14 10:41:53.210074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:23.754 [2024-07-14 10:41:53.210088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:39552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.754 [2024-07-14 10:41:53.210096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:23.754 [2024-07-14 10:41:53.210110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:39560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.754 [2024-07-14 10:41:53.210118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:23.754 [2024-07-14 10:41:53.210132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:39568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.754 [2024-07-14 10:41:53.210140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:23.754 [2024-07-14 10:41:53.210155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:39576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.754 [2024-07-14 10:41:53.210164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:23.754 [2024-07-14 10:41:53.210178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:39584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.754 [2024-07-14 10:41:53.210186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:23.754 [2024-07-14 10:41:53.210200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:39592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.754 [2024-07-14 10:41:53.210208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:23.754 [2024-07-14 10:41:53.210222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:39600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.754 [2024-07-14 10:41:53.210235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:17 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:23.754 [2024-07-14 10:41:53.210249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:39608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.754 [2024-07-14 10:41:53.210256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:23.754 [2024-07-14 10:41:53.210271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:39616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.754 [2024-07-14 10:41:53.210278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:23.754 [2024-07-14 10:41:53.210293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:39624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.754 [2024-07-14 10:41:53.210300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:23.754 [2024-07-14 10:41:53.210315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:39632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.754 [2024-07-14 10:41:53.210322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:23.754 [2024-07-14 10:41:53.210336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:39640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.754 [2024-07-14 10:41:53.210344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:23.754 [2024-07-14 10:41:53.210359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:39648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.754 [2024-07-14 10:41:53.210367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:23.754 [2024-07-14 10:41:53.210381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:39656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.754 [2024-07-14 10:41:53.210389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:23.754 [2024-07-14 10:41:53.210404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:39664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.754 [2024-07-14 10:41:53.210411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:23.754 [2024-07-14 10:41:53.210427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:39672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.754 [2024-07-14 10:41:53.210436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:23.754 [2024-07-14 10:41:53.210453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:39680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.754 [2024-07-14 10:41:53.210461] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:23.754 [2024-07-14 10:41:53.210475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:39688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.754 [2024-07-14 10:41:53.210483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:23.754 [2024-07-14 10:41:53.210498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:39696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.754 [2024-07-14 10:41:53.210506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:23.754 [2024-07-14 10:41:53.210520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.754 [2024-07-14 10:41:53.210528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:23.754 [2024-07-14 10:41:53.210542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:39712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.754 [2024-07-14 10:41:53.210550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:23.754 [2024-07-14 10:41:53.210564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:39720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.755 [2024-07-14 10:41:53.210572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:23.755 [2024-07-14 10:41:53.210586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:39728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.755 [2024-07-14 10:41:53.210594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:23.755 [2024-07-14 10:41:53.210608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:39736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.755 [2024-07-14 10:41:53.210616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:23.755 [2024-07-14 10:41:53.210630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:39744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.755 [2024-07-14 10:41:53.210637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:23.755 [2024-07-14 10:41:53.210652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.755 [2024-07-14 10:41:53.210660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:23.755 [2024-07-14 10:41:53.210674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:23.755 [2024-07-14 10:41:53.210682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:23.755 [2024-07-14 10:41:53.210696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:39768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.755 [2024-07-14 10:41:53.210704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:23.755 [2024-07-14 10:41:53.210719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:39776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.755 [2024-07-14 10:41:53.210727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:23.755 [2024-07-14 10:41:53.210742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:38784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.755 [2024-07-14 10:41:53.210749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:23.755 [2024-07-14 10:41:53.210764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:38792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.755 [2024-07-14 10:41:53.210771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:23.755 [2024-07-14 10:41:53.210786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:38800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.755 [2024-07-14 10:41:53.210794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:23.755 [2024-07-14 10:41:53.210808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:38808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.755 [2024-07-14 10:41:53.210815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:23.755 [2024-07-14 10:41:53.210830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:38816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.755 [2024-07-14 10:41:53.210837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:23.755 [2024-07-14 10:41:53.210852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:38824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.755 [2024-07-14 10:41:53.210859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:23.755 [2024-07-14 10:41:53.210873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:38832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.755 [2024-07-14 10:41:53.210881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:23.755 [2024-07-14 10:41:53.210895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:9 nsid:1 lba:39784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.755 [2024-07-14 10:41:53.210903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:23.755 [2024-07-14 10:41:53.210917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:38840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.755 [2024-07-14 10:41:53.210925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:23.755 [2024-07-14 10:41:53.210939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:38848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.755 [2024-07-14 10:41:53.210947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:23.755 [2024-07-14 10:41:53.210962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:38856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.755 [2024-07-14 10:41:53.210969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:23.755 [2024-07-14 10:41:53.211569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:38864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.755 [2024-07-14 10:41:53.211584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:23.755 [2024-07-14 10:41:53.211599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:38872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.755 [2024-07-14 10:41:53.211608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:23.755 [2024-07-14 10:41:53.211620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:38880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.755 [2024-07-14 10:41:53.211626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:23.755 [2024-07-14 10:41:53.211638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.755 [2024-07-14 10:41:53.211645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:23.755 [2024-07-14 10:41:53.211658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:38896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.755 [2024-07-14 10:41:53.211666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:23.755 [2024-07-14 10:41:53.211678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:38904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.755 [2024-07-14 10:41:53.211685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:23.755 [2024-07-14 10:41:53.211697] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:38912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.755 [2024-07-14 10:41:53.211703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:23.755 [2024-07-14 10:41:53.211716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:38920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.755 [2024-07-14 10:41:53.211722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:23.755 [2024-07-14 10:41:53.211734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.755 [2024-07-14 10:41:53.211741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:23.755 [2024-07-14 10:41:53.211753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.755 [2024-07-14 10:41:53.211759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:23.755 [2024-07-14 10:41:53.211772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:38944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.755 [2024-07-14 10:41:53.211778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:23.755 [2024-07-14 10:41:53.211790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:38952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.755 [2024-07-14 10:41:53.211797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:23.755 [2024-07-14 10:41:53.211809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:38960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.755 [2024-07-14 10:41:53.211817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:23.755 [2024-07-14 10:41:53.211829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:38968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.755 [2024-07-14 10:41:53.211836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:23.755 [2024-07-14 10:41:53.211847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:38976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.755 [2024-07-14 10:41:53.211854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:23.755 [2024-07-14 10:41:53.211866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:38984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.756 [2024-07-14 10:41:53.211873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 
00:33:23.756 [2024-07-14 10:41:53.211886 - 10:41:53.217507] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: repeated command/completion pairs, condensed: WRITE and READ commands on sqid:1 (nsid:1, lba ~38768-39784, len:8, SGL DATA BLOCK OFFSET / SGL TRANSPORT DATA BLOCK), each completion reported as ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0
dnr:0 00:33:23.761 [2024-07-14 10:41:53.217519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:39528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.761 [2024-07-14 10:41:53.217526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:23.761 [2024-07-14 10:41:53.217538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:39536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.761 [2024-07-14 10:41:53.217545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:23.761 [2024-07-14 10:41:53.217557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:39544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.761 [2024-07-14 10:41:53.217564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:23.761 [2024-07-14 10:41:53.217576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:39552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.761 [2024-07-14 10:41:53.217583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:23.761 [2024-07-14 10:41:53.217594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:39560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.761 [2024-07-14 10:41:53.217601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:23.761 [2024-07-14 10:41:53.217613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:39568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.761 [2024-07-14 10:41:53.217620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:23.761 [2024-07-14 10:41:53.217632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:39576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.761 [2024-07-14 10:41:53.217638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:23.761 [2024-07-14 10:41:53.217650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:39584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.761 [2024-07-14 10:41:53.217657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:23.761 [2024-07-14 10:41:53.217673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:39592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.761 [2024-07-14 10:41:53.217680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:23.761 [2024-07-14 10:41:53.217692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:39600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.761 [2024-07-14 10:41:53.217698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:23.761 [2024-07-14 10:41:53.217711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:39608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.761 [2024-07-14 10:41:53.217717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:23.761 [2024-07-14 10:41:53.217729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:39616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.761 [2024-07-14 10:41:53.217736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:23.761 [2024-07-14 10:41:53.217748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:39624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.761 [2024-07-14 10:41:53.217755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:23.761 [2024-07-14 10:41:53.217767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:39632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.761 [2024-07-14 10:41:53.217774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:23.761 [2024-07-14 10:41:53.217786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:39640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.761 [2024-07-14 10:41:53.217792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:23.761 [2024-07-14 10:41:53.217804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:39648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.761 [2024-07-14 10:41:53.217811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:23.761 [2024-07-14 10:41:53.217823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:39656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.761 [2024-07-14 10:41:53.217830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:23.761 [2024-07-14 10:41:53.217842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:39664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.761 [2024-07-14 10:41:53.217849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:23.761 [2024-07-14 10:41:53.217861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:39672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.761 [2024-07-14 10:41:53.217867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:23.761 [2024-07-14 10:41:53.217879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:39680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.761 [2024-07-14 10:41:53.217886] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:23.761 [2024-07-14 10:41:53.217899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.761 [2024-07-14 10:41:53.217906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:23.761 [2024-07-14 10:41:53.217918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:39696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.761 [2024-07-14 10:41:53.217925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:23.761 [2024-07-14 10:41:53.217938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:39704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.761 [2024-07-14 10:41:53.217944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:23.761 [2024-07-14 10:41:53.217956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:39712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.761 [2024-07-14 10:41:53.217963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:23.761 [2024-07-14 10:41:53.217975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:39720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.761 [2024-07-14 10:41:53.217981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:23.761 [2024-07-14 10:41:53.217994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:39728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.761 [2024-07-14 10:41:53.218000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:23.761 [2024-07-14 10:41:53.218012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.761 [2024-07-14 10:41:53.218019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:23.761 [2024-07-14 10:41:53.218031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.761 [2024-07-14 10:41:53.218038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:23.761 [2024-07-14 10:41:53.218050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:39752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.761 [2024-07-14 10:41:53.218056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:23.761 [2024-07-14 10:41:53.218068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:39760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:23.761 [2024-07-14 10:41:53.218075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:23.761 [2024-07-14 10:41:53.218087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:39768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.761 [2024-07-14 10:41:53.218093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:23.761 [2024-07-14 10:41:53.218106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.761 [2024-07-14 10:41:53.218112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:23.761 [2024-07-14 10:41:53.218124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:38784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.761 [2024-07-14 10:41:53.218132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:23.761 [2024-07-14 10:41:53.218151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:38792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.761 [2024-07-14 10:41:53.218158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:23.761 [2024-07-14 10:41:53.218170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:38800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.761 [2024-07-14 10:41:53.218177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:23.761 [2024-07-14 10:41:53.218189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:38808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.761 [2024-07-14 10:41:53.218196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:23.761 [2024-07-14 10:41:53.218208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:38816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.761 [2024-07-14 10:41:53.218214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:23.762 [2024-07-14 10:41:53.218231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:38824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.762 [2024-07-14 10:41:53.218239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:23.762 [2024-07-14 10:41:53.218252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:38832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.762 [2024-07-14 10:41:53.218259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:23.762 [2024-07-14 10:41:53.218271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 
nsid:1 lba:39784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.762 [2024-07-14 10:41:53.218278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:23.762 [2024-07-14 10:41:53.218290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:38840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.762 [2024-07-14 10:41:53.218297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:23.762 [2024-07-14 10:41:53.218309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.762 [2024-07-14 10:41:53.218316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:23.762 [2024-07-14 10:41:53.218328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.762 [2024-07-14 10:41:53.218335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:23.762 [2024-07-14 10:41:53.218347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:38864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.762 [2024-07-14 10:41:53.218354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:23.762 [2024-07-14 10:41:53.218366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:38872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.762 [2024-07-14 10:41:53.218374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:23.762 [2024-07-14 10:41:53.218386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:38880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.762 [2024-07-14 10:41:53.218393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:23.762 [2024-07-14 10:41:53.218405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:38888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.762 [2024-07-14 10:41:53.218414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:23.762 [2024-07-14 10:41:53.218426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:38896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.762 [2024-07-14 10:41:53.218433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:23.762 [2024-07-14 10:41:53.218446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:38904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.762 [2024-07-14 10:41:53.218452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:23.762 [2024-07-14 10:41:53.218894] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:38912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.762 [2024-07-14 10:41:53.218905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:23.762 [2024-07-14 10:41:53.218918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:38920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.762 [2024-07-14 10:41:53.218925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:23.762 [2024-07-14 10:41:53.218938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:38928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.762 [2024-07-14 10:41:53.218944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:23.762 [2024-07-14 10:41:53.218957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:38936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.762 [2024-07-14 10:41:53.218964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:23.762 [2024-07-14 10:41:53.218976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:38944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.762 [2024-07-14 10:41:53.218984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:23.762 [2024-07-14 10:41:53.218996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.762 [2024-07-14 10:41:53.219002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:23.762 [2024-07-14 10:41:53.219015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:38960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.762 [2024-07-14 10:41:53.219022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:23.762 [2024-07-14 10:41:53.219034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:38968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.762 [2024-07-14 10:41:53.219041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:23.762 [2024-07-14 10:41:53.219056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:38976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.762 [2024-07-14 10:41:53.219063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:23.762 [2024-07-14 10:41:53.219075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:38984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.762 [2024-07-14 10:41:53.219082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:23.762 
[2024-07-14 10:41:53.219094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:38992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.762 [2024-07-14 10:41:53.219101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:23.762 [2024-07-14 10:41:53.219113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.762 [2024-07-14 10:41:53.219120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:23.762 [2024-07-14 10:41:53.219133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.762 [2024-07-14 10:41:53.219140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:23.762 [2024-07-14 10:41:53.219152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:39016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.762 [2024-07-14 10:41:53.219159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:23.762 [2024-07-14 10:41:53.219171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:39024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.762 [2024-07-14 10:41:53.219178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:23.762 [2024-07-14 10:41:53.219190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:39032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.762 [2024-07-14 10:41:53.219197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:23.762 [2024-07-14 10:41:53.219209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:39040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.762 [2024-07-14 10:41:53.219216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:23.762 [2024-07-14 10:41:53.219233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:39048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.762 [2024-07-14 10:41:53.219240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:23.762 [2024-07-14 10:41:53.219252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:39056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.762 [2024-07-14 10:41:53.219259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:23.762 [2024-07-14 10:41:53.219271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:39064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.762 [2024-07-14 10:41:53.219278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:21 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:23.762 [2024-07-14 10:41:53.219291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:39072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.762 [2024-07-14 10:41:53.219298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:23.762 [2024-07-14 10:41:53.219310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:39080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.762 [2024-07-14 10:41:53.219317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:23.762 [2024-07-14 10:41:53.219329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:39088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.762 [2024-07-14 10:41:53.219336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:23.762 [2024-07-14 10:41:53.219348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:39096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.762 [2024-07-14 10:41:53.219355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:23.762 [2024-07-14 10:41:53.219367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.762 [2024-07-14 10:41:53.219374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:23.762 [2024-07-14 10:41:53.219386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:39112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.762 [2024-07-14 10:41:53.219392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:23.762 [2024-07-14 10:41:53.219405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:39120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.762 [2024-07-14 10:41:53.219411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:23.762 [2024-07-14 10:41:53.219424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.762 [2024-07-14 10:41:53.219430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:23.762 [2024-07-14 10:41:53.219442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:39136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.762 [2024-07-14 10:41:53.219449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:23.762 [2024-07-14 10:41:53.219461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.763 [2024-07-14 10:41:53.219468] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:23.763 [2024-07-14 10:41:53.219480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:39152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.763 [2024-07-14 10:41:53.219487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:23.763 [2024-07-14 10:41:53.219499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:39160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.763 [2024-07-14 10:41:53.219506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:23.763 [2024-07-14 10:41:53.219518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:39168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.763 [2024-07-14 10:41:53.219526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:23.763 [2024-07-14 10:41:53.219538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:39176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.763 [2024-07-14 10:41:53.219545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:23.763 [2024-07-14 10:41:53.219557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:39184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.763 [2024-07-14 10:41:53.219564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:23.763 [2024-07-14 10:41:53.219576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:39192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.763 [2024-07-14 10:41:53.219583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:23.763 [2024-07-14 10:41:53.219595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.763 [2024-07-14 10:41:53.219601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:23.763 [2024-07-14 10:41:53.219613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:39208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.763 [2024-07-14 10:41:53.219620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:23.763 [2024-07-14 10:41:53.219633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:39216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.763 [2024-07-14 10:41:53.219640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:23.763 [2024-07-14 10:41:53.219652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:39224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.763 [2024-07-14 
10:41:53.219659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:23.763 [2024-07-14 10:41:53.219671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:39232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.763 [2024-07-14 10:41:53.219677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:23.763 [2024-07-14 10:41:53.219689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:39240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.763 [2024-07-14 10:41:53.219696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:23.763 [2024-07-14 10:41:53.219708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:39248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.763 [2024-07-14 10:41:53.219715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:23.763 [2024-07-14 10:41:53.219727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:39256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.763 [2024-07-14 10:41:53.219734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:23.763 [2024-07-14 10:41:53.219746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:38768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.763 [2024-07-14 10:41:53.219754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:23.763 [2024-07-14 10:41:53.219766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:38776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.763 [2024-07-14 10:41:53.219772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:23.763 [2024-07-14 10:41:53.219785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:39264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.763 [2024-07-14 10:41:53.219791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:23.763 [2024-07-14 10:41:53.219803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:39272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.763 [2024-07-14 10:41:53.219810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:23.763 [2024-07-14 10:41:53.219822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:39280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.763 [2024-07-14 10:41:53.219829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:23.763 [2024-07-14 10:41:53.219841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:39288 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:33:23.763 [2024-07-14 10:41:53.219848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:23.763 [2024-07-14 10:41:53.219995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:39296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.763 [2024-07-14 10:41:53.220005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:23.763 [2024-07-14 10:41:53.220029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:39304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.763 [2024-07-14 10:41:53.220036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:23.763 [2024-07-14 10:41:53.220052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:39312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.763 [2024-07-14 10:41:53.220058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:23.763 [2024-07-14 10:41:53.220074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:39320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.763 [2024-07-14 10:41:53.220081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:23.763 [2024-07-14 10:41:53.220096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:39328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.763 [2024-07-14 10:41:53.220102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:23.763 [2024-07-14 10:41:53.220118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:39336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.763 [2024-07-14 10:41:53.220124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:23.763 [2024-07-14 10:41:53.220140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:39344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.763 [2024-07-14 10:41:53.220146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:23.763 [2024-07-14 10:41:53.220163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:39352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.763 [2024-07-14 10:41:53.220170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:23.763 [2024-07-14 10:41:53.220185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:39360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.763 [2024-07-14 10:41:53.220192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:23.763 [2024-07-14 10:41:53.220207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:6 nsid:1 lba:39368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.763 [2024-07-14 10:41:53.220214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:23.763 [2024-07-14 10:41:53.220234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:39376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.763 [2024-07-14 10:41:53.220241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:23.763 [2024-07-14 10:41:53.220256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:39384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.763 [2024-07-14 10:41:53.220263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:23.763 [2024-07-14 10:41:53.220278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.763 [2024-07-14 10:41:53.220285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:23.763 [2024-07-14 10:41:53.220301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:39400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.763 [2024-07-14 10:41:53.220307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:23.763 [2024-07-14 10:41:53.220326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.763 [2024-07-14 10:41:53.220333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:23.764 [2024-07-14 10:41:53.220348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:39416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.764 [2024-07-14 10:41:53.220355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:23.764 [2024-07-14 10:41:53.220370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:39424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.764 [2024-07-14 10:41:53.220377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:23.764 [2024-07-14 10:41:53.220392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:39432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.764 [2024-07-14 10:41:53.220399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:23.764 [2024-07-14 10:41:53.220414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:39440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.764 [2024-07-14 10:41:53.220421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:23.764 [2024-07-14 10:41:53.220438] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.764 [2024-07-14 10:41:53.220445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:23.764 [2024-07-14 10:41:53.220461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:39456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.764 [2024-07-14 10:41:53.220467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:23.764 [2024-07-14 10:41:53.220483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:39464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.764 [2024-07-14 10:41:53.220489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:23.764 [2024-07-14 10:41:53.220504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:39472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.764 [2024-07-14 10:41:53.220511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:23.764 [2024-07-14 10:41:53.220526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:39480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.764 [2024-07-14 10:41:53.220533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:23.764 [2024-07-14 10:41:53.220548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.764 [2024-07-14 10:41:53.220555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:23.764 [2024-07-14 10:41:53.220570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.764 [2024-07-14 10:41:53.220577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:23.764 [2024-07-14 10:41:53.220592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:39504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.764 [2024-07-14 10:41:53.220599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:23.764 [2024-07-14 10:41:53.220614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:39512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.764 [2024-07-14 10:41:53.220621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:23.764 [2024-07-14 10:41:53.220636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:39520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.764 [2024-07-14 10:41:53.220643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000b p:0 m:0 dnr:0 
00:33:23.764 [2024-07-14 10:41:53.220658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:39528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.764 [2024-07-14 10:41:53.220665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:23.764 [2024-07-14 10:41:53.220680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:39536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.764 [2024-07-14 10:41:53.220687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:23.764 [2024-07-14 10:41:53.220703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:39544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.764 [2024-07-14 10:41:53.220711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:23.764 [2024-07-14 10:41:53.220799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:39552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.764 [2024-07-14 10:41:53.220808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:23.764 [2024-07-14 10:41:53.220826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:39560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.764 [2024-07-14 10:41:53.220833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:23.764 [2024-07-14 10:41:53.220850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.764 [2024-07-14 10:41:53.220857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:23.764 [2024-07-14 10:41:53.220874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:39576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.764 [2024-07-14 10:41:53.220880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:23.764 [2024-07-14 10:41:53.220898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:39584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.764 [2024-07-14 10:41:53.220904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:23.764 [2024-07-14 10:41:53.220921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:39592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.764 [2024-07-14 10:41:53.220928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:23.764 [2024-07-14 10:41:53.220945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:39600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.764 [2024-07-14 10:41:53.220951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:105 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:23.764 [2024-07-14 10:41:53.220969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:39608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.764 [2024-07-14 10:41:53.220975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:23.764 [2024-07-14 10:41:53.220992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:39616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.764 [2024-07-14 10:41:53.220999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:23.764 [2024-07-14 10:41:53.221016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.764 [2024-07-14 10:41:53.221022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:23.764 [2024-07-14 10:41:53.221039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:39632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.764 [2024-07-14 10:41:53.221045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:23.764 [2024-07-14 10:41:53.221062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:39640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.764 [2024-07-14 10:41:53.221070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:23.764 [2024-07-14 10:41:53.221088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:39648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.764 [2024-07-14 10:41:53.221094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:23.764 [2024-07-14 10:41:53.221111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.764 [2024-07-14 10:41:53.221118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:23.764 [2024-07-14 10:41:53.221135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.764 [2024-07-14 10:41:53.221141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:23.764 [2024-07-14 10:41:53.221159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:39672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.764 [2024-07-14 10:41:53.221165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:23.764 [2024-07-14 10:41:53.221182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:39680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.764 [2024-07-14 10:41:53.221189] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:23.764 [2024-07-14 10:41:53.221206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:39688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.764 [2024-07-14 10:41:53.221212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:23.764 [2024-07-14 10:41:53.221234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:39696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.764 [2024-07-14 10:41:53.221241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:23.764 [2024-07-14 10:41:53.221258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:39704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.764 [2024-07-14 10:41:53.221264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:23.764 [2024-07-14 10:41:53.221282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.764 [2024-07-14 10:41:53.221288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:23.764 [2024-07-14 10:41:53.221305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:39720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.764 [2024-07-14 10:41:53.221312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:23.764 [2024-07-14 10:41:53.221333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:39728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.764 [2024-07-14 10:41:53.221340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:23.764 [2024-07-14 10:41:53.221357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:39736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.764 [2024-07-14 10:41:53.221364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:23.765 [2024-07-14 10:41:53.221431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:39744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.765 [2024-07-14 10:41:53.221439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:23.765 [2024-07-14 10:41:53.221458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:39752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.765 [2024-07-14 10:41:53.221465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:23.765 [2024-07-14 10:41:53.221484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:39760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:23.765 [2024-07-14 10:41:53.221490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:23.765 [2024-07-14 10:41:53.221509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:39768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.765 [2024-07-14 10:41:53.221515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:23.765 [2024-07-14 10:41:53.221534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:39776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.765 [2024-07-14 10:41:53.221540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:23.765 [2024-07-14 10:41:53.221559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:38784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.765 [2024-07-14 10:41:53.221565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:23.765 [2024-07-14 10:41:53.221586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:38792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.765 [2024-07-14 10:41:53.221592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:23.765 [2024-07-14 10:41:53.221612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:38800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.765 [2024-07-14 10:41:53.221618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:23.765 [2024-07-14 10:41:53.221638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:38808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.765 [2024-07-14 10:41:53.221644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:23.765 [2024-07-14 10:41:53.221664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:38816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.765 [2024-07-14 10:41:53.221671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:23.765 [2024-07-14 10:41:53.221690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:38824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.765 [2024-07-14 10:41:53.221697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:23.765 [2024-07-14 10:41:53.221716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:38832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.765 [2024-07-14 10:41:53.221723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:23.765 [2024-07-14 10:41:53.221744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 
nsid:1 lba:39784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.765 [2024-07-14 10:41:53.221751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:23.765 [2024-07-14 10:41:53.221770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:38840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.765 [2024-07-14 10:41:53.221777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:23.765 [2024-07-14 10:41:53.221796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:38848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.765 [2024-07-14 10:41:53.221802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:23.765 [2024-07-14 10:41:53.221821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:38856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.765 [2024-07-14 10:41:53.221827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:23.765 [2024-07-14 10:41:53.221846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:38864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.765 [2024-07-14 10:41:53.221852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:23.765 [2024-07-14 10:41:53.225778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:38872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.765 [2024-07-14 10:41:53.225787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:23.765 [2024-07-14 10:41:53.225807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:38880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.765 [2024-07-14 10:41:53.225813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:23.765 [2024-07-14 10:41:53.225832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:38888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.765 [2024-07-14 10:41:53.225839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:23.765 [2024-07-14 10:41:53.225858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:38896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.765 [2024-07-14 10:41:53.225864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:23.765 [2024-07-14 10:41:53.225884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:38904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.765 [2024-07-14 10:41:53.225890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:23.765 [2024-07-14 10:42:06.167476] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:95472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.765 [2024-07-14 10:42:06.167518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:23.765 [2024-07-14 10:42:06.167550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:95480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.765 [2024-07-14 10:42:06.167559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:23.765 [2024-07-14 10:42:06.167853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:95496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.765 [2024-07-14 10:42:06.167874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:23.765 [2024-07-14 10:42:06.167889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:95512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.765 [2024-07-14 10:42:06.167896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:23.765 [2024-07-14 10:42:06.167909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:95528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.765 [2024-07-14 10:42:06.167916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:23.765 [2024-07-14 10:42:06.167928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:95544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.765 [2024-07-14 10:42:06.167934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:23.765 [2024-07-14 10:42:06.167947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.765 [2024-07-14 10:42:06.167954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:23.765 [2024-07-14 10:42:06.167966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:95576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.765 [2024-07-14 10:42:06.167973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:23.765 [2024-07-14 10:42:06.167985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:95592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.765 [2024-07-14 10:42:06.167992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:23.765 [2024-07-14 10:42:06.168004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:95608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.765 [2024-07-14 10:42:06.168010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 
00:33:23.765 [2024-07-14 10:42:06.168023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:95624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.765 [2024-07-14 10:42:06.168029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:23.765 [2024-07-14 10:42:06.168041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:95640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.765 [2024-07-14 10:42:06.168048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:23.765 [2024-07-14 10:42:06.168060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:95656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.765 [2024-07-14 10:42:06.168067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:23.765 [2024-07-14 10:42:06.170491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:95672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.765 [2024-07-14 10:42:06.170511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:23.765 [2024-07-14 10:42:06.170527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:95688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.765 [2024-07-14 10:42:06.170540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:23.765 [2024-07-14 10:42:06.170553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:95704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.765 [2024-07-14 10:42:06.170559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:23.765 [2024-07-14 10:42:06.170572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:95720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.766 [2024-07-14 10:42:06.170578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:23.766 [2024-07-14 10:42:06.170591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:95736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.766 [2024-07-14 10:42:06.170597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:23.766 [2024-07-14 10:42:06.170610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:95752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.766 [2024-07-14 10:42:06.170616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:23.766 [2024-07-14 10:42:06.170628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:95768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.766 [2024-07-14 10:42:06.170634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:27 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:23.766 [2024-07-14 10:42:06.170647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:95784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.766 [2024-07-14 10:42:06.170653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:23.766 [2024-07-14 10:42:06.170666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:95800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.766 [2024-07-14 10:42:06.170672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:23.766 [2024-07-14 10:42:06.170684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:95816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.766 [2024-07-14 10:42:06.170690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:23.766 [2024-07-14 10:42:06.170702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:95832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.766 [2024-07-14 10:42:06.170709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:23.766 [2024-07-14 10:42:06.170721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:95848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.766 [2024-07-14 10:42:06.170728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:23.766 [2024-07-14 10:42:06.170740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:95864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.766 [2024-07-14 10:42:06.170746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:23.766 [2024-07-14 10:42:06.170758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:95880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.766 [2024-07-14 10:42:06.170765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:23.766 [2024-07-14 10:42:06.170778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:95896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.766 [2024-07-14 10:42:06.170785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:23.766 [2024-07-14 10:42:06.170797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:95912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.766 [2024-07-14 10:42:06.170803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:23.766 [2024-07-14 10:42:06.170816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:95928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.766 [2024-07-14 10:42:06.170822] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:23.766 [2024-07-14 10:42:06.170834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:95944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.766 [2024-07-14 10:42:06.170841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:23.766 [2024-07-14 10:42:06.170854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:95960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.766 [2024-07-14 10:42:06.170860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:23.766 [2024-07-14 10:42:06.170873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:95976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.766 [2024-07-14 10:42:06.170880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:23.766 Received shutdown signal, test time was about 27.462699 seconds 00:33:23.766 00:33:23.766 Latency(us) 00:33:23.766 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:23.766 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:33:23.766 Verification LBA range: start 0x0 length 0x4000 00:33:23.766 Nvme0n1 : 27.46 10285.11 40.18 0.00 0.00 12425.14 527.14 3078254.41 00:33:23.766 =================================================================================================================== 00:33:23.766 Total : 10285.11 40.18 0.00 0.00 12425.14 527.14 3078254.41 00:33:23.766 10:42:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:24.025 10:42:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:33:24.025 10:42:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:24.025 10:42:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:33:24.025 10:42:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:24.025 10:42:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:33:24.025 10:42:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:24.025 10:42:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:33:24.025 10:42:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:24.025 10:42:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:24.025 rmmod nvme_tcp 00:33:24.025 rmmod nvme_fabrics 00:33:24.025 rmmod nvme_keyring 00:33:24.025 10:42:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:24.025 10:42:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:33:24.025 10:42:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:33:24.025 10:42:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 2580847 ']' 00:33:24.025 
10:42:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 2580847 00:33:24.025 10:42:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 2580847 ']' 00:33:24.025 10:42:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 2580847 00:33:24.025 10:42:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:33:24.025 10:42:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:24.025 10:42:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2580847 00:33:24.025 10:42:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:24.025 10:42:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:24.025 10:42:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2580847' 00:33:24.025 killing process with pid 2580847 00:33:24.025 10:42:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 2580847 00:33:24.025 10:42:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 2580847 00:33:24.283 10:42:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:24.283 10:42:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:24.283 10:42:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:24.283 10:42:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:24.283 10:42:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:24.283 10:42:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:24.283 10:42:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:24.283 10:42:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:26.818 10:42:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:26.818 00:33:26.818 real 0m38.836s 00:33:26.818 user 1m44.878s 00:33:26.818 sys 0m10.641s 00:33:26.818 10:42:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:26.818 10:42:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:26.818 ************************************ 00:33:26.818 END TEST nvmf_host_multipath_status 00:33:26.818 ************************************ 00:33:26.818 10:42:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:33:26.818 10:42:11 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:33:26.818 10:42:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:26.818 10:42:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:26.818 10:42:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:26.818 ************************************ 00:33:26.818 START TEST nvmf_discovery_remove_ifc 00:33:26.818 ************************************ 00:33:26.818 10:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:33:26.818 * Looking for test storage... 00:33:26.818 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:26.818 10:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:26.818 10:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:33:26.818 10:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:26.818 10:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:26.818 10:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:26.818 10:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:26.819 10:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:26.819 10:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:26.819 10:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:26.819 10:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:26.819 10:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:26.819 10:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:26.819 10:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:33:26.819 10:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:33:26.819 10:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:26.819 10:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:26.819 10:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:26.819 10:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:26.819 10:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:26.819 10:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:26.819 10:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:26.819 10:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:26.819 10:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:26.819 10:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:26.819 10:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:26.819 10:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:33:26.819 10:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:26.819 10:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:33:26.819 10:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:26.819 10:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:26.819 10:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:26.819 10:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:26.819 10:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:26.819 10:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:26.819 10:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:26.819 10:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:26.819 10:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:33:26.819 10:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:33:26.819 10:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:33:26.819 10:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:33:26.819 10:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:33:26.819 10:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:33:26.819 10:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:33:26.819 10:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:26.819 10:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:26.819 10:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:26.819 10:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:26.819 10:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:26.819 10:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:26.819 10:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:26.819 10:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:26.819 10:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:26.819 10:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:26.819 10:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:33:26.819 10:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:32.095 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:32.095 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:33:32.095 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:32.095 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:32.095 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:32.095 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:32.095 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:32.095 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:33:32.095 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:32.095 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:33:32.095 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:33:32.095 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:33:32.095 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:33:32.095 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:33:32.095 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:33:32.095 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:32.095 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:32.095 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:32.095 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:32.095 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:32.095 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:32.095 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:32.095 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:32.095 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:32.095 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:32.095 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:32.095 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:32.095 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:32.095 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:32.095 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:32.095 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:32.095 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:32.096 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:32.096 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:32.096 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:32.096 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:32.096 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:32.096 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:32.096 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:32.096 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:32.096 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:32.096 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:32.096 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:32.096 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:32.096 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:32.096 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:32.096 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:32.096 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:32.096 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:32.096 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:32.096 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:32.096 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:32.096 10:42:16 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:32.096 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:32.096 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:32.096 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:32.096 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:32.096 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:32.096 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:32.096 Found net devices under 0000:86:00.0: cvl_0_0 00:33:32.096 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:32.096 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:32.096 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:32.096 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:32.096 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:32.096 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:32.096 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:32.096 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:32.096 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:32.096 Found net devices under 0000:86:00.1: cvl_0_1 00:33:32.096 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:32.096 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:32.096 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:33:32.096 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:32.096 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:32.096 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:32.096 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:32.096 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:32.096 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:32.096 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:32.096 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:32.096 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:32.096 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:32.096 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:32.096 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:32.096 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:32.096 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:32.096 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:32.096 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:32.096 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:32.096 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:32.096 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:32.096 10:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:32.096 10:42:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:32.096 10:42:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:32.096 10:42:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:32.096 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:32.096 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:33:32.096 00:33:32.096 --- 10.0.0.2 ping statistics --- 00:33:32.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:32.096 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:33:32.096 10:42:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:32.096 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:32.096 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:33:32.096 00:33:32.096 --- 10.0.0.1 ping statistics --- 00:33:32.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:32.096 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:33:32.096 10:42:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:32.096 10:42:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:33:32.096 10:42:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:32.096 10:42:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:32.096 10:42:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:32.096 10:42:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:32.096 10:42:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:32.096 10:42:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:32.096 10:42:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:32.356 10:42:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:33:32.356 10:42:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:32.356 10:42:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:32.356 10:42:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:32.356 10:42:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=2589904 00:33:32.356 10:42:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 2589904 00:33:32.356 10:42:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:33:32.356 10:42:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 2589904 ']' 00:33:32.356 10:42:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:32.356 10:42:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:32.356 10:42:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:32.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:32.356 10:42:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:32.356 10:42:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:32.356 [2024-07-14 10:42:17.142937] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:33:32.356 [2024-07-14 10:42:17.142980] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:32.356 EAL: No free 2048 kB hugepages reported on node 1 00:33:32.356 [2024-07-14 10:42:17.211677] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:32.356 [2024-07-14 10:42:17.250126] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:32.356 [2024-07-14 10:42:17.250167] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:32.356 [2024-07-14 10:42:17.250173] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:32.356 [2024-07-14 10:42:17.250179] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:32.356 [2024-07-14 10:42:17.250184] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:32.356 [2024-07-14 10:42:17.250208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:33.294 10:42:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:33.294 10:42:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:33:33.294 10:42:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:33.294 10:42:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:33.294 10:42:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:33.294 10:42:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:33.294 10:42:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:33:33.294 10:42:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.294 10:42:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:33.294 [2024-07-14 10:42:17.992024] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:33.294 [2024-07-14 10:42:18.000164] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:33:33.294 null0 00:33:33.294 [2024-07-14 10:42:18.032160] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:33.294 10:42:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.294 10:42:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2589942 00:33:33.294 10:42:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:33:33.294 10:42:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2589942 /tmp/host.sock 00:33:33.294 10:42:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 2589942 ']' 00:33:33.294 10:42:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:33:33.294 10:42:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:33:33.294 10:42:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:33:33.294 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:33:33.294 10:42:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:33.294 10:42:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:33.294 [2024-07-14 10:42:18.101114] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:33:33.294 [2024-07-14 10:42:18.101156] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2589942 ] 00:33:33.294 EAL: No free 2048 kB hugepages reported on node 1 00:33:33.294 [2024-07-14 10:42:18.169737] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:33.294 [2024-07-14 10:42:18.210670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:33.294 10:42:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:33.294 10:42:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:33:33.294 10:42:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:33.294 10:42:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:33:33.294 10:42:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.294 10:42:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:33.294 10:42:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.294 10:42:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:33:33.294 10:42:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.294 10:42:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:33.553 10:42:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.553 10:42:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:33:33.553 10:42:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.553 10:42:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:34.491 [2024-07-14 10:42:19.368405] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:34.491 [2024-07-14 10:42:19.368426] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:34.491 [2024-07-14 10:42:19.368440] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:34.491 [2024-07-14 10:42:19.455701] 
bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:33:34.750 [2024-07-14 10:42:19.520433] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:33:34.750 [2024-07-14 10:42:19.520478] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:33:34.750 [2024-07-14 10:42:19.520499] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:33:34.750 [2024-07-14 10:42:19.520512] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:34.750 [2024-07-14 10:42:19.520530] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:34.750 10:42:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:34.750 10:42:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:33:34.750 10:42:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:34.750 10:42:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:34.750 10:42:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:34.750 10:42:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:34.750 10:42:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:34.751 10:42:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:34.751 10:42:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:34.751 [2024-07-14 10:42:19.527344] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xfb7780 was disconnected and freed. delete nvme_qpair. 
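The xtrace records above come from the test's own polling helpers, get_bdev_list and wait_for_bdev: rpc_cmd is the suite's wrapper around scripts/rpc.py, pointed at the host app's RPC socket (/tmp/host.sock), and the jq / sort / xargs pipeline flattens the bdev names into a single line that can be compared against an expected value. A minimal bash sketch of that pattern (the direct rpc.py call and the unbounded loop are illustrative choices, not copied from discovery_remove_ifc.sh):

    host_sock=/tmp/host.sock
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    get_bdev_list() {
        # List the bdevs the host app currently sees and normalise them to one
        # sorted, space-separated line ("nvme0n1" while the path is healthy).
        "$rpc" -s "$host_sock" bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # Re-check once per second until the list matches the expectation.
        # The sketch leaves the loop unbounded; a timeout would be prudent.
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }

    wait_for_bdev nvme0n1   # used right after bdev_nvme_start_discovery
    # wait_for_bdev ''      # used later, once the target-side interface is gone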
00:33:34.751 10:42:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:34.751 10:42:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:33:34.751 10:42:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:33:34.751 10:42:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:33:34.751 10:42:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:33:34.751 10:42:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:34.751 10:42:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:34.751 10:42:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:34.751 10:42:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:34.751 10:42:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:34.751 10:42:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:34.751 10:42:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:34.751 10:42:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:34.751 10:42:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:34.751 10:42:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:36.126 10:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:36.126 10:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:36.126 10:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:36.126 10:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:36.126 10:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:36.126 10:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:36.126 10:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:36.126 10:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:36.126 10:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:36.126 10:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:37.087 10:42:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:37.087 10:42:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:37.087 10:42:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:37.087 10:42:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:37.087 10:42:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:37.087 10:42:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # 
sort 00:33:37.087 10:42:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:37.087 10:42:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:37.087 10:42:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:37.087 10:42:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:38.022 10:42:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:38.022 10:42:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:38.022 10:42:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:38.022 10:42:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:38.022 10:42:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:38.022 10:42:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:38.022 10:42:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:38.022 10:42:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:38.022 10:42:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:38.022 10:42:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:38.955 10:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:38.955 10:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:38.955 10:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:38.955 10:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:38.955 10:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:38.955 10:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:38.955 10:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:38.955 10:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:38.955 10:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:38.955 10:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:40.328 10:42:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:40.328 10:42:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:40.328 10:42:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:40.328 10:42:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:40.328 10:42:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:40.328 10:42:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:40.328 10:42:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:40.328 10:42:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
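The loop traced above probes the SPDK application once per second over its JSON-RPC socket, flattens the bdev names into a single sorted string, and keeps sleeping while that list still differs from what it expects. A minimal stand-alone sketch of that polling pattern, assuming rpc.py is invoked directly in place of the trace's rpc_cmd wrapper and adding an illustrative 30-second cap that the trace does not show:

# Sketch of the get_bdev_list/wait_for_bdev polling seen in the trace above.
get_bdev_list() {
    # Flatten bdev names from the RPC socket into one sorted, space-separated line.
    scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {
    local expected=$1 waited=0
    # Re-check once per second until the bdev list matches the expected value.
    while [[ "$(get_bdev_list)" != "$expected" ]]; do
        sleep 1
        (( ++waited < 30 )) || return 1   # illustrative timeout, not part of the trace
    done
}

The same helper covers both directions seen in the trace: wait_for_bdev '' blocks until the nvme0n1 bdev disappears after the interface is pulled, and wait_for_bdev nvme1n1 blocks until discovery recreates one.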
00:33:40.328 [2024-07-14 10:42:24.961888] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:33:40.328 [2024-07-14 10:42:24.961928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:40.328 [2024-07-14 10:42:24.961939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:40.328 [2024-07-14 10:42:24.961949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:40.329 [2024-07-14 10:42:24.961956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:40.329 [2024-07-14 10:42:24.961963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:40.329 [2024-07-14 10:42:24.961974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:40.329 [2024-07-14 10:42:24.961981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:40.329 [2024-07-14 10:42:24.961988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:40.329 [2024-07-14 10:42:24.961995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:33:40.329 [2024-07-14 10:42:24.962001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:40.329 [2024-07-14 10:42:24.962008] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7e110 is same with the state(5) to be set 00:33:40.329 [2024-07-14 10:42:24.971908] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7e110 (9): Bad file descriptor 00:33:40.329 10:42:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:40.329 10:42:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:40.329 [2024-07-14 10:42:24.981947] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:41.265 10:42:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:41.266 10:42:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:41.266 10:42:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:41.266 10:42:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:41.266 10:42:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:41.266 10:42:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:41.266 10:42:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:41.266 [2024-07-14 10:42:26.018321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:33:41.266 [2024-07-14 
10:42:26.018398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7e110 with addr=10.0.0.2, port=4420 00:33:41.266 [2024-07-14 10:42:26.018427] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7e110 is same with the state(5) to be set 00:33:41.266 [2024-07-14 10:42:26.018475] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7e110 (9): Bad file descriptor 00:33:41.266 [2024-07-14 10:42:26.019415] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:41.266 [2024-07-14 10:42:26.019466] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:41.266 [2024-07-14 10:42:26.019488] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:41.266 [2024-07-14 10:42:26.019510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:41.266 [2024-07-14 10:42:26.019547] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:41.266 [2024-07-14 10:42:26.019569] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:41.266 10:42:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:41.266 10:42:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:41.266 10:42:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:42.200 [2024-07-14 10:42:27.022068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:42.200 [2024-07-14 10:42:27.022088] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:42.200 [2024-07-14 10:42:27.022100] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:42.201 [2024-07-14 10:42:27.022107] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:33:42.201 [2024-07-14 10:42:27.022118] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:42.201 [2024-07-14 10:42:27.022136] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:33:42.201 [2024-07-14 10:42:27.022155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:42.201 [2024-07-14 10:42:27.022163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.201 [2024-07-14 10:42:27.022172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:42.201 [2024-07-14 10:42:27.022179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.201 [2024-07-14 10:42:27.022186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:42.201 [2024-07-14 10:42:27.022192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.201 [2024-07-14 10:42:27.022199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:42.201 [2024-07-14 10:42:27.022206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.201 [2024-07-14 10:42:27.022213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:33:42.201 [2024-07-14 10:42:27.022221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.201 [2024-07-14 10:42:27.022233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:33:42.201 [2024-07-14 10:42:27.022838] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7d5d0 (9): Bad file descriptor 00:33:42.201 [2024-07-14 10:42:27.023850] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:33:42.201 [2024-07-14 10:42:27.023862] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:33:42.201 10:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:42.201 10:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:42.201 10:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:42.201 10:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:42.201 10:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:42.201 10:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:42.201 10:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:42.201 10:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:42.201 10:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:33:42.201 10:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:42.201 10:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:42.459 10:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:33:42.459 10:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:42.459 10:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:42.459 10:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:42.459 10:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:42.459 10:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:42.459 10:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:42.459 10:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:42.459 10:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:42.459 10:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:42.459 10:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:43.415 10:42:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:43.415 10:42:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:43.415 10:42:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:43.415 10:42:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:43.415 10:42:28 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # sort 00:33:43.415 10:42:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:43.415 10:42:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:43.415 10:42:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:43.415 10:42:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:43.415 10:42:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:44.352 [2024-07-14 10:42:29.073717] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:44.352 [2024-07-14 10:42:29.073734] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:44.352 [2024-07-14 10:42:29.073746] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:44.352 [2024-07-14 10:42:29.201139] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:33:44.352 10:42:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:44.352 10:42:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:44.352 10:42:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:44.352 10:42:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:44.352 10:42:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:44.352 10:42:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:44.352 10:42:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:44.352 10:42:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:44.610 10:42:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:44.610 10:42:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:44.610 [2024-07-14 10:42:29.344439] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:33:44.610 [2024-07-14 10:42:29.344481] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:33:44.610 [2024-07-14 10:42:29.344498] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:33:44.610 [2024-07-14 10:42:29.344511] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:33:44.610 [2024-07-14 10:42:29.344518] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:44.610 [2024-07-14 10:42:29.352378] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xf8b7b0 was disconnected and freed. delete nvme_qpair. 
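Between the failed-reset messages and the qpair-freed line above, the trace restores the target-side address, brings the link back up, and waits for discovery to attach a fresh nvme1n1 bdev. A condensed sketch of the whole remove/re-add cycle the test drives, reusing the wait_for_bdev helper sketched earlier and the namespace, interface, and address names visible in the trace:

# Condensed remove/re-add cycle; NS/IFC/ADDR are the values shown in the trace.
NS=cvl_0_0_ns_spdk IFC=cvl_0_0 ADDR=10.0.0.2/24

# Pull the target-side interface and wait for the attached bdev to disappear.
ip netns exec "$NS" ip addr del "$ADDR" dev "$IFC"
ip netns exec "$NS" ip link set "$IFC" down
wait_for_bdev ''

# Restore it and wait for the discovery service to attach a new controller.
ip netns exec "$NS" ip addr add "$ADDR" dev "$IFC"
ip netns exec "$NS" ip link set "$IFC" up
wait_for_bdev nvme1n1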
00:33:45.547 10:42:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:45.547 10:42:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:45.547 10:42:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:45.547 10:42:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:45.547 10:42:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:45.547 10:42:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:45.547 10:42:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:45.547 10:42:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:45.547 10:42:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:33:45.547 10:42:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:33:45.547 10:42:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2589942 00:33:45.547 10:42:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 2589942 ']' 00:33:45.547 10:42:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 2589942 00:33:45.547 10:42:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:33:45.547 10:42:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:45.547 10:42:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2589942 00:33:45.547 10:42:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:45.547 10:42:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:45.547 10:42:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2589942' 00:33:45.547 killing process with pid 2589942 00:33:45.547 10:42:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 2589942 00:33:45.547 10:42:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 2589942 00:33:45.806 10:42:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:33:45.806 10:42:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:45.806 10:42:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:33:45.806 10:42:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:45.806 10:42:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:33:45.806 10:42:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:45.806 10:42:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:45.806 rmmod nvme_tcp 00:33:45.806 rmmod nvme_fabrics 00:33:45.806 rmmod nvme_keyring 00:33:45.806 10:42:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:45.806 10:42:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:33:45.806 10:42:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
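The killprocess call traced above is the usual teardown idiom in these tests: confirm the pid is still alive, log which process is being killed, send the signal, and wait for it to be reaped. A rough equivalent, reduced from what the trace shows:

# Rough equivalent of the killprocess idiom in the trace; error handling trimmed.
killprocess() {
    local pid=$1
    [[ -n "$pid" ]] || return 1
    kill -0 "$pid" 2>/dev/null || return 1            # still running?
    echo "killing process with pid $pid ($(ps --no-headers -o comm= "$pid"))"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                   # reap it when it is a child of this shell
}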
00:33:45.806 10:42:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 2589904 ']' 00:33:45.806 10:42:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 2589904 00:33:45.806 10:42:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 2589904 ']' 00:33:45.806 10:42:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 2589904 00:33:45.806 10:42:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:33:45.806 10:42:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:45.806 10:42:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2589904 00:33:45.806 10:42:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:45.806 10:42:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:45.806 10:42:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2589904' 00:33:45.806 killing process with pid 2589904 00:33:45.806 10:42:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 2589904 00:33:45.806 10:42:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 2589904 00:33:46.066 10:42:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:46.066 10:42:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:46.066 10:42:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:46.066 10:42:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:46.066 10:42:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:46.066 10:42:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:46.066 10:42:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:46.066 10:42:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:48.598 10:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:48.599 00:33:48.599 real 0m21.706s 00:33:48.599 user 0m27.219s 00:33:48.599 sys 0m5.655s 00:33:48.599 10:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:48.599 10:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:48.599 ************************************ 00:33:48.599 END TEST nvmf_discovery_remove_ifc 00:33:48.599 ************************************ 00:33:48.599 10:42:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:33:48.599 10:42:32 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:33:48.599 10:42:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:48.599 10:42:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:48.599 10:42:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:48.599 ************************************ 00:33:48.599 START TEST nvmf_identify_kernel_target 00:33:48.599 ************************************ 
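The identify_kernel_target test that begins here stands up a kernel-mode NVMe-oF soft target through configfs and then points the SPDK identify tool at it. The configfs writes appear further down in the trace, but xtrace only records the echo commands, not their redirection targets, so the attribute file names in the sketch below are the standard nvmet configfs names and should be read as assumptions:

# Sketch of the kernel nvmet target setup traced below; redirection targets assumed.
modprobe nvmet
modprobe nvme-tcp
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=/sys/kernel/config/nvmet/ports/1
mkdir "$subsys" "$subsys/namespaces/1" "$port"

echo 1            > "$subsys/attr_allow_any_host"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"   # backing block device found by the test
echo 1            > "$subsys/namespaces/1/enable"
echo tcp          > "$port/addr_trtype"
echo ipv4         > "$port/addr_adrfam"
echo 10.0.0.1     > "$port/addr_traddr"
echo 4420         > "$port/addr_trsvcid"
ln -s "$subsys" "$port/subsystems/"

# Sanity check, as the trace does: list both discovery entries over TCP.
nvme discover -t tcp -a 10.0.0.1 -s 4420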
00:33:48.599 10:42:33 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:33:48.599 * Looking for test storage... 00:33:48.599 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:48.599 10:42:33 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:48.599 10:42:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:33:48.599 10:42:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:48.599 10:42:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:48.599 10:42:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:48.599 10:42:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:48.599 10:42:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:48.599 10:42:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:48.599 10:42:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:48.599 10:42:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:48.599 10:42:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:48.599 10:42:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:48.599 10:42:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:33:48.599 10:42:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:33:48.599 10:42:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:48.599 10:42:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:48.599 10:42:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:48.599 10:42:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:48.599 10:42:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:48.599 10:42:33 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:48.599 10:42:33 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:48.599 10:42:33 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:48.599 10:42:33 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:48.599 10:42:33 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:48.599 10:42:33 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:48.599 10:42:33 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:33:48.599 10:42:33 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:48.599 10:42:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:33:48.599 10:42:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:48.599 10:42:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:48.599 10:42:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:48.599 10:42:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:48.599 10:42:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:48.599 10:42:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:48.599 10:42:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:48.599 10:42:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:48.599 10:42:33 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:33:48.599 10:42:33 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:48.599 10:42:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:48.599 10:42:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:48.599 10:42:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:48.599 10:42:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:48.599 10:42:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:48.599 10:42:33 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:48.599 10:42:33 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:48.599 10:42:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:48.599 10:42:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:48.599 10:42:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:33:48.599 10:42:33 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:53.870 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:53.870 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:53.870 Found net devices under 0000:86:00.0: cvl_0_0 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:53.870 Found net devices under 0000:86:00.1: cvl_0_1 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:53.870 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:54.129 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:54.129 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:54.129 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:54.129 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:54.129 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:33:54.129 00:33:54.129 --- 10.0.0.2 ping statistics --- 00:33:54.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:54.129 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:33:54.129 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:54.129 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:54.129 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:33:54.129 00:33:54.129 --- 10.0.0.1 ping statistics --- 00:33:54.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:54.129 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:33:54.129 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:54.129 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:33:54.129 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:54.129 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:54.129 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:54.129 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:54.129 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:54.129 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:54.129 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:54.129 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:33:54.129 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:33:54.129 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:33:54.129 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:54.130 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:54.130 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:54.130 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:54.130 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:54.130 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:54.130 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:54.130 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:54.130 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:54.130 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:33:54.130 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:33:54.130 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:33:54.130 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:33:54.130 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:54.130 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:54.130 10:42:38 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:54.130 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:33:54.130 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:33:54.130 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:33:54.130 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:54.130 10:42:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:56.663 Waiting for block devices as requested 00:33:56.663 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:33:56.923 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:56.923 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:57.183 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:57.183 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:57.183 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:57.183 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:57.441 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:57.441 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:57.441 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:57.700 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:57.700 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:57.700 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:57.700 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:57.958 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:57.958 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:57.958 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:58.250 10:42:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:33:58.250 10:42:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:58.250 10:42:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:33:58.250 10:42:42 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:33:58.250 10:42:42 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:33:58.250 10:42:42 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:33:58.250 10:42:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:33:58.250 10:42:42 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:33:58.250 10:42:42 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:33:58.250 No valid GPT data, bailing 00:33:58.250 10:42:43 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:58.250 10:42:43 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:33:58.250 10:42:43 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:33:58.250 10:42:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:33:58.250 10:42:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:33:58.250 10:42:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:58.250 10:42:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:58.250 10:42:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:58.250 10:42:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:33:58.250 10:42:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:33:58.250 10:42:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:33:58.250 10:42:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:33:58.250 10:42:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:33:58.250 10:42:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:33:58.250 10:42:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:33:58.250 10:42:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:33:58.250 10:42:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:58.250 10:42:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:33:58.250 00:33:58.250 Discovery Log Number of Records 2, Generation counter 2 00:33:58.250 =====Discovery Log Entry 0====== 00:33:58.250 trtype: tcp 00:33:58.250 adrfam: ipv4 00:33:58.250 subtype: current discovery subsystem 00:33:58.250 treq: not specified, sq flow control disable supported 00:33:58.250 portid: 1 00:33:58.250 trsvcid: 4420 00:33:58.250 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:58.250 traddr: 10.0.0.1 00:33:58.250 eflags: none 00:33:58.250 sectype: none 00:33:58.250 =====Discovery Log Entry 1====== 00:33:58.250 trtype: tcp 00:33:58.250 adrfam: ipv4 00:33:58.250 subtype: nvme subsystem 00:33:58.250 treq: not specified, sq flow control disable supported 00:33:58.250 portid: 1 00:33:58.250 trsvcid: 4420 00:33:58.250 subnqn: nqn.2016-06.io.spdk:testnqn 00:33:58.250 traddr: 10.0.0.1 00:33:58.250 eflags: none 00:33:58.250 sectype: none 00:33:58.250 10:42:43 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:33:58.250 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:33:58.250 EAL: No free 2048 kB hugepages reported on node 1 00:33:58.508 ===================================================== 00:33:58.508 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:33:58.508 ===================================================== 00:33:58.508 Controller Capabilities/Features 00:33:58.508 ================================ 00:33:58.508 Vendor ID: 0000 00:33:58.508 Subsystem Vendor ID: 0000 00:33:58.508 Serial Number: 30756d3b6bc4ef9505e7 00:33:58.508 Model Number: Linux 00:33:58.508 Firmware Version: 6.7.0-68 00:33:58.508 Recommended Arb Burst: 0 00:33:58.508 IEEE OUI Identifier: 00 00 00 00:33:58.508 Multi-path I/O 00:33:58.508 May have multiple subsystem ports: No 00:33:58.509 May have multiple 
controllers: No 00:33:58.509 Associated with SR-IOV VF: No 00:33:58.509 Max Data Transfer Size: Unlimited 00:33:58.509 Max Number of Namespaces: 0 00:33:58.509 Max Number of I/O Queues: 1024 00:33:58.509 NVMe Specification Version (VS): 1.3 00:33:58.509 NVMe Specification Version (Identify): 1.3 00:33:58.509 Maximum Queue Entries: 1024 00:33:58.509 Contiguous Queues Required: No 00:33:58.509 Arbitration Mechanisms Supported 00:33:58.509 Weighted Round Robin: Not Supported 00:33:58.509 Vendor Specific: Not Supported 00:33:58.509 Reset Timeout: 7500 ms 00:33:58.509 Doorbell Stride: 4 bytes 00:33:58.509 NVM Subsystem Reset: Not Supported 00:33:58.509 Command Sets Supported 00:33:58.509 NVM Command Set: Supported 00:33:58.509 Boot Partition: Not Supported 00:33:58.509 Memory Page Size Minimum: 4096 bytes 00:33:58.509 Memory Page Size Maximum: 4096 bytes 00:33:58.509 Persistent Memory Region: Not Supported 00:33:58.509 Optional Asynchronous Events Supported 00:33:58.509 Namespace Attribute Notices: Not Supported 00:33:58.509 Firmware Activation Notices: Not Supported 00:33:58.509 ANA Change Notices: Not Supported 00:33:58.509 PLE Aggregate Log Change Notices: Not Supported 00:33:58.509 LBA Status Info Alert Notices: Not Supported 00:33:58.509 EGE Aggregate Log Change Notices: Not Supported 00:33:58.509 Normal NVM Subsystem Shutdown event: Not Supported 00:33:58.509 Zone Descriptor Change Notices: Not Supported 00:33:58.509 Discovery Log Change Notices: Supported 00:33:58.509 Controller Attributes 00:33:58.509 128-bit Host Identifier: Not Supported 00:33:58.509 Non-Operational Permissive Mode: Not Supported 00:33:58.509 NVM Sets: Not Supported 00:33:58.509 Read Recovery Levels: Not Supported 00:33:58.509 Endurance Groups: Not Supported 00:33:58.509 Predictable Latency Mode: Not Supported 00:33:58.509 Traffic Based Keep ALive: Not Supported 00:33:58.509 Namespace Granularity: Not Supported 00:33:58.509 SQ Associations: Not Supported 00:33:58.509 UUID List: Not Supported 00:33:58.509 Multi-Domain Subsystem: Not Supported 00:33:58.509 Fixed Capacity Management: Not Supported 00:33:58.509 Variable Capacity Management: Not Supported 00:33:58.509 Delete Endurance Group: Not Supported 00:33:58.509 Delete NVM Set: Not Supported 00:33:58.509 Extended LBA Formats Supported: Not Supported 00:33:58.509 Flexible Data Placement Supported: Not Supported 00:33:58.509 00:33:58.509 Controller Memory Buffer Support 00:33:58.509 ================================ 00:33:58.509 Supported: No 00:33:58.509 00:33:58.509 Persistent Memory Region Support 00:33:58.509 ================================ 00:33:58.509 Supported: No 00:33:58.509 00:33:58.509 Admin Command Set Attributes 00:33:58.509 ============================ 00:33:58.509 Security Send/Receive: Not Supported 00:33:58.509 Format NVM: Not Supported 00:33:58.509 Firmware Activate/Download: Not Supported 00:33:58.509 Namespace Management: Not Supported 00:33:58.509 Device Self-Test: Not Supported 00:33:58.509 Directives: Not Supported 00:33:58.509 NVMe-MI: Not Supported 00:33:58.509 Virtualization Management: Not Supported 00:33:58.509 Doorbell Buffer Config: Not Supported 00:33:58.509 Get LBA Status Capability: Not Supported 00:33:58.509 Command & Feature Lockdown Capability: Not Supported 00:33:58.509 Abort Command Limit: 1 00:33:58.509 Async Event Request Limit: 1 00:33:58.509 Number of Firmware Slots: N/A 00:33:58.509 Firmware Slot 1 Read-Only: N/A 00:33:58.509 Firmware Activation Without Reset: N/A 00:33:58.509 Multiple Update Detection Support: N/A 
00:33:58.509 Firmware Update Granularity: No Information Provided 00:33:58.509 Per-Namespace SMART Log: No 00:33:58.509 Asymmetric Namespace Access Log Page: Not Supported 00:33:58.509 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:33:58.509 Command Effects Log Page: Not Supported 00:33:58.509 Get Log Page Extended Data: Supported 00:33:58.509 Telemetry Log Pages: Not Supported 00:33:58.509 Persistent Event Log Pages: Not Supported 00:33:58.509 Supported Log Pages Log Page: May Support 00:33:58.509 Commands Supported & Effects Log Page: Not Supported 00:33:58.509 Feature Identifiers & Effects Log Page:May Support 00:33:58.509 NVMe-MI Commands & Effects Log Page: May Support 00:33:58.509 Data Area 4 for Telemetry Log: Not Supported 00:33:58.509 Error Log Page Entries Supported: 1 00:33:58.509 Keep Alive: Not Supported 00:33:58.509 00:33:58.509 NVM Command Set Attributes 00:33:58.509 ========================== 00:33:58.509 Submission Queue Entry Size 00:33:58.509 Max: 1 00:33:58.509 Min: 1 00:33:58.509 Completion Queue Entry Size 00:33:58.509 Max: 1 00:33:58.509 Min: 1 00:33:58.509 Number of Namespaces: 0 00:33:58.509 Compare Command: Not Supported 00:33:58.509 Write Uncorrectable Command: Not Supported 00:33:58.509 Dataset Management Command: Not Supported 00:33:58.509 Write Zeroes Command: Not Supported 00:33:58.509 Set Features Save Field: Not Supported 00:33:58.509 Reservations: Not Supported 00:33:58.509 Timestamp: Not Supported 00:33:58.509 Copy: Not Supported 00:33:58.509 Volatile Write Cache: Not Present 00:33:58.509 Atomic Write Unit (Normal): 1 00:33:58.509 Atomic Write Unit (PFail): 1 00:33:58.509 Atomic Compare & Write Unit: 1 00:33:58.509 Fused Compare & Write: Not Supported 00:33:58.509 Scatter-Gather List 00:33:58.509 SGL Command Set: Supported 00:33:58.509 SGL Keyed: Not Supported 00:33:58.509 SGL Bit Bucket Descriptor: Not Supported 00:33:58.509 SGL Metadata Pointer: Not Supported 00:33:58.509 Oversized SGL: Not Supported 00:33:58.509 SGL Metadata Address: Not Supported 00:33:58.509 SGL Offset: Supported 00:33:58.509 Transport SGL Data Block: Not Supported 00:33:58.509 Replay Protected Memory Block: Not Supported 00:33:58.509 00:33:58.509 Firmware Slot Information 00:33:58.509 ========================= 00:33:58.509 Active slot: 0 00:33:58.509 00:33:58.509 00:33:58.509 Error Log 00:33:58.509 ========= 00:33:58.509 00:33:58.509 Active Namespaces 00:33:58.509 ================= 00:33:58.509 Discovery Log Page 00:33:58.509 ================== 00:33:58.509 Generation Counter: 2 00:33:58.509 Number of Records: 2 00:33:58.509 Record Format: 0 00:33:58.509 00:33:58.509 Discovery Log Entry 0 00:33:58.509 ---------------------- 00:33:58.509 Transport Type: 3 (TCP) 00:33:58.509 Address Family: 1 (IPv4) 00:33:58.509 Subsystem Type: 3 (Current Discovery Subsystem) 00:33:58.509 Entry Flags: 00:33:58.509 Duplicate Returned Information: 0 00:33:58.509 Explicit Persistent Connection Support for Discovery: 0 00:33:58.509 Transport Requirements: 00:33:58.509 Secure Channel: Not Specified 00:33:58.509 Port ID: 1 (0x0001) 00:33:58.509 Controller ID: 65535 (0xffff) 00:33:58.509 Admin Max SQ Size: 32 00:33:58.509 Transport Service Identifier: 4420 00:33:58.509 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:33:58.509 Transport Address: 10.0.0.1 00:33:58.509 Discovery Log Entry 1 00:33:58.509 ---------------------- 00:33:58.509 Transport Type: 3 (TCP) 00:33:58.509 Address Family: 1 (IPv4) 00:33:58.509 Subsystem Type: 2 (NVM Subsystem) 00:33:58.509 Entry Flags: 
00:33:58.509 Duplicate Returned Information: 0 00:33:58.509 Explicit Persistent Connection Support for Discovery: 0 00:33:58.509 Transport Requirements: 00:33:58.509 Secure Channel: Not Specified 00:33:58.509 Port ID: 1 (0x0001) 00:33:58.509 Controller ID: 65535 (0xffff) 00:33:58.509 Admin Max SQ Size: 32 00:33:58.509 Transport Service Identifier: 4420 00:33:58.509 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:33:58.509 Transport Address: 10.0.0.1 00:33:58.509 10:42:43 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:58.509 EAL: No free 2048 kB hugepages reported on node 1 00:33:58.509 get_feature(0x01) failed 00:33:58.509 get_feature(0x02) failed 00:33:58.509 get_feature(0x04) failed 00:33:58.509 ===================================================== 00:33:58.509 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:58.509 ===================================================== 00:33:58.509 Controller Capabilities/Features 00:33:58.509 ================================ 00:33:58.509 Vendor ID: 0000 00:33:58.509 Subsystem Vendor ID: 0000 00:33:58.509 Serial Number: e0af7b2dd866e2a32568 00:33:58.509 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:33:58.509 Firmware Version: 6.7.0-68 00:33:58.509 Recommended Arb Burst: 6 00:33:58.509 IEEE OUI Identifier: 00 00 00 00:33:58.509 Multi-path I/O 00:33:58.509 May have multiple subsystem ports: Yes 00:33:58.509 May have multiple controllers: Yes 00:33:58.509 Associated with SR-IOV VF: No 00:33:58.509 Max Data Transfer Size: Unlimited 00:33:58.509 Max Number of Namespaces: 1024 00:33:58.509 Max Number of I/O Queues: 128 00:33:58.509 NVMe Specification Version (VS): 1.3 00:33:58.509 NVMe Specification Version (Identify): 1.3 00:33:58.509 Maximum Queue Entries: 1024 00:33:58.509 Contiguous Queues Required: No 00:33:58.509 Arbitration Mechanisms Supported 00:33:58.509 Weighted Round Robin: Not Supported 00:33:58.509 Vendor Specific: Not Supported 00:33:58.509 Reset Timeout: 7500 ms 00:33:58.509 Doorbell Stride: 4 bytes 00:33:58.509 NVM Subsystem Reset: Not Supported 00:33:58.509 Command Sets Supported 00:33:58.509 NVM Command Set: Supported 00:33:58.509 Boot Partition: Not Supported 00:33:58.509 Memory Page Size Minimum: 4096 bytes 00:33:58.509 Memory Page Size Maximum: 4096 bytes 00:33:58.509 Persistent Memory Region: Not Supported 00:33:58.509 Optional Asynchronous Events Supported 00:33:58.509 Namespace Attribute Notices: Supported 00:33:58.509 Firmware Activation Notices: Not Supported 00:33:58.509 ANA Change Notices: Supported 00:33:58.509 PLE Aggregate Log Change Notices: Not Supported 00:33:58.509 LBA Status Info Alert Notices: Not Supported 00:33:58.509 EGE Aggregate Log Change Notices: Not Supported 00:33:58.509 Normal NVM Subsystem Shutdown event: Not Supported 00:33:58.509 Zone Descriptor Change Notices: Not Supported 00:33:58.509 Discovery Log Change Notices: Not Supported 00:33:58.509 Controller Attributes 00:33:58.509 128-bit Host Identifier: Supported 00:33:58.509 Non-Operational Permissive Mode: Not Supported 00:33:58.509 NVM Sets: Not Supported 00:33:58.509 Read Recovery Levels: Not Supported 00:33:58.509 Endurance Groups: Not Supported 00:33:58.509 Predictable Latency Mode: Not Supported 00:33:58.509 Traffic Based Keep ALive: Supported 00:33:58.509 Namespace Granularity: Not Supported 
00:33:58.509 SQ Associations: Not Supported 00:33:58.509 UUID List: Not Supported 00:33:58.509 Multi-Domain Subsystem: Not Supported 00:33:58.509 Fixed Capacity Management: Not Supported 00:33:58.509 Variable Capacity Management: Not Supported 00:33:58.509 Delete Endurance Group: Not Supported 00:33:58.509 Delete NVM Set: Not Supported 00:33:58.509 Extended LBA Formats Supported: Not Supported 00:33:58.509 Flexible Data Placement Supported: Not Supported 00:33:58.509 00:33:58.509 Controller Memory Buffer Support 00:33:58.509 ================================ 00:33:58.509 Supported: No 00:33:58.509 00:33:58.509 Persistent Memory Region Support 00:33:58.509 ================================ 00:33:58.509 Supported: No 00:33:58.509 00:33:58.509 Admin Command Set Attributes 00:33:58.509 ============================ 00:33:58.509 Security Send/Receive: Not Supported 00:33:58.509 Format NVM: Not Supported 00:33:58.509 Firmware Activate/Download: Not Supported 00:33:58.509 Namespace Management: Not Supported 00:33:58.509 Device Self-Test: Not Supported 00:33:58.509 Directives: Not Supported 00:33:58.509 NVMe-MI: Not Supported 00:33:58.509 Virtualization Management: Not Supported 00:33:58.509 Doorbell Buffer Config: Not Supported 00:33:58.509 Get LBA Status Capability: Not Supported 00:33:58.509 Command & Feature Lockdown Capability: Not Supported 00:33:58.509 Abort Command Limit: 4 00:33:58.509 Async Event Request Limit: 4 00:33:58.509 Number of Firmware Slots: N/A 00:33:58.509 Firmware Slot 1 Read-Only: N/A 00:33:58.509 Firmware Activation Without Reset: N/A 00:33:58.509 Multiple Update Detection Support: N/A 00:33:58.509 Firmware Update Granularity: No Information Provided 00:33:58.509 Per-Namespace SMART Log: Yes 00:33:58.509 Asymmetric Namespace Access Log Page: Supported 00:33:58.509 ANA Transition Time : 10 sec 00:33:58.509 00:33:58.509 Asymmetric Namespace Access Capabilities 00:33:58.509 ANA Optimized State : Supported 00:33:58.509 ANA Non-Optimized State : Supported 00:33:58.509 ANA Inaccessible State : Supported 00:33:58.509 ANA Persistent Loss State : Supported 00:33:58.509 ANA Change State : Supported 00:33:58.509 ANAGRPID is not changed : No 00:33:58.509 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:33:58.509 00:33:58.509 ANA Group Identifier Maximum : 128 00:33:58.509 Number of ANA Group Identifiers : 128 00:33:58.509 Max Number of Allowed Namespaces : 1024 00:33:58.509 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:33:58.509 Command Effects Log Page: Supported 00:33:58.509 Get Log Page Extended Data: Supported 00:33:58.509 Telemetry Log Pages: Not Supported 00:33:58.509 Persistent Event Log Pages: Not Supported 00:33:58.509 Supported Log Pages Log Page: May Support 00:33:58.509 Commands Supported & Effects Log Page: Not Supported 00:33:58.509 Feature Identifiers & Effects Log Page:May Support 00:33:58.509 NVMe-MI Commands & Effects Log Page: May Support 00:33:58.509 Data Area 4 for Telemetry Log: Not Supported 00:33:58.509 Error Log Page Entries Supported: 128 00:33:58.509 Keep Alive: Supported 00:33:58.509 Keep Alive Granularity: 1000 ms 00:33:58.509 00:33:58.509 NVM Command Set Attributes 00:33:58.509 ========================== 00:33:58.509 Submission Queue Entry Size 00:33:58.509 Max: 64 00:33:58.509 Min: 64 00:33:58.509 Completion Queue Entry Size 00:33:58.509 Max: 16 00:33:58.509 Min: 16 00:33:58.509 Number of Namespaces: 1024 00:33:58.509 Compare Command: Not Supported 00:33:58.509 Write Uncorrectable Command: Not Supported 00:33:58.509 Dataset Management Command: Supported 
00:33:58.509 Write Zeroes Command: Supported 00:33:58.509 Set Features Save Field: Not Supported 00:33:58.509 Reservations: Not Supported 00:33:58.509 Timestamp: Not Supported 00:33:58.509 Copy: Not Supported 00:33:58.509 Volatile Write Cache: Present 00:33:58.509 Atomic Write Unit (Normal): 1 00:33:58.509 Atomic Write Unit (PFail): 1 00:33:58.509 Atomic Compare & Write Unit: 1 00:33:58.509 Fused Compare & Write: Not Supported 00:33:58.509 Scatter-Gather List 00:33:58.509 SGL Command Set: Supported 00:33:58.509 SGL Keyed: Not Supported 00:33:58.509 SGL Bit Bucket Descriptor: Not Supported 00:33:58.509 SGL Metadata Pointer: Not Supported 00:33:58.509 Oversized SGL: Not Supported 00:33:58.509 SGL Metadata Address: Not Supported 00:33:58.509 SGL Offset: Supported 00:33:58.509 Transport SGL Data Block: Not Supported 00:33:58.509 Replay Protected Memory Block: Not Supported 00:33:58.509 00:33:58.509 Firmware Slot Information 00:33:58.509 ========================= 00:33:58.509 Active slot: 0 00:33:58.509 00:33:58.509 Asymmetric Namespace Access 00:33:58.509 =========================== 00:33:58.509 Change Count : 0 00:33:58.509 Number of ANA Group Descriptors : 1 00:33:58.509 ANA Group Descriptor : 0 00:33:58.509 ANA Group ID : 1 00:33:58.509 Number of NSID Values : 1 00:33:58.509 Change Count : 0 00:33:58.509 ANA State : 1 00:33:58.509 Namespace Identifier : 1 00:33:58.509 00:33:58.509 Commands Supported and Effects 00:33:58.509 ============================== 00:33:58.509 Admin Commands 00:33:58.509 -------------- 00:33:58.509 Get Log Page (02h): Supported 00:33:58.509 Identify (06h): Supported 00:33:58.509 Abort (08h): Supported 00:33:58.509 Set Features (09h): Supported 00:33:58.509 Get Features (0Ah): Supported 00:33:58.509 Asynchronous Event Request (0Ch): Supported 00:33:58.509 Keep Alive (18h): Supported 00:33:58.509 I/O Commands 00:33:58.509 ------------ 00:33:58.509 Flush (00h): Supported 00:33:58.509 Write (01h): Supported LBA-Change 00:33:58.509 Read (02h): Supported 00:33:58.509 Write Zeroes (08h): Supported LBA-Change 00:33:58.509 Dataset Management (09h): Supported 00:33:58.509 00:33:58.509 Error Log 00:33:58.509 ========= 00:33:58.509 Entry: 0 00:33:58.509 Error Count: 0x3 00:33:58.509 Submission Queue Id: 0x0 00:33:58.509 Command Id: 0x5 00:33:58.509 Phase Bit: 0 00:33:58.509 Status Code: 0x2 00:33:58.509 Status Code Type: 0x0 00:33:58.509 Do Not Retry: 1 00:33:58.509 Error Location: 0x28 00:33:58.509 LBA: 0x0 00:33:58.509 Namespace: 0x0 00:33:58.509 Vendor Log Page: 0x0 00:33:58.509 ----------- 00:33:58.509 Entry: 1 00:33:58.509 Error Count: 0x2 00:33:58.509 Submission Queue Id: 0x0 00:33:58.509 Command Id: 0x5 00:33:58.509 Phase Bit: 0 00:33:58.509 Status Code: 0x2 00:33:58.509 Status Code Type: 0x0 00:33:58.509 Do Not Retry: 1 00:33:58.509 Error Location: 0x28 00:33:58.509 LBA: 0x0 00:33:58.509 Namespace: 0x0 00:33:58.509 Vendor Log Page: 0x0 00:33:58.509 ----------- 00:33:58.509 Entry: 2 00:33:58.509 Error Count: 0x1 00:33:58.509 Submission Queue Id: 0x0 00:33:58.509 Command Id: 0x4 00:33:58.509 Phase Bit: 0 00:33:58.509 Status Code: 0x2 00:33:58.509 Status Code Type: 0x0 00:33:58.509 Do Not Retry: 1 00:33:58.509 Error Location: 0x28 00:33:58.509 LBA: 0x0 00:33:58.509 Namespace: 0x0 00:33:58.509 Vendor Log Page: 0x0 00:33:58.509 00:33:58.509 Number of Queues 00:33:58.509 ================ 00:33:58.509 Number of I/O Submission Queues: 128 00:33:58.510 Number of I/O Completion Queues: 128 00:33:58.510 00:33:58.510 ZNS Specific Controller Data 00:33:58.510 
============================ 00:33:58.510 Zone Append Size Limit: 0 00:33:58.510 00:33:58.510 00:33:58.510 Active Namespaces 00:33:58.510 ================= 00:33:58.510 get_feature(0x05) failed 00:33:58.510 Namespace ID:1 00:33:58.510 Command Set Identifier: NVM (00h) 00:33:58.510 Deallocate: Supported 00:33:58.510 Deallocated/Unwritten Error: Not Supported 00:33:58.510 Deallocated Read Value: Unknown 00:33:58.510 Deallocate in Write Zeroes: Not Supported 00:33:58.510 Deallocated Guard Field: 0xFFFF 00:33:58.510 Flush: Supported 00:33:58.510 Reservation: Not Supported 00:33:58.510 Namespace Sharing Capabilities: Multiple Controllers 00:33:58.510 Size (in LBAs): 1953525168 (931GiB) 00:33:58.510 Capacity (in LBAs): 1953525168 (931GiB) 00:33:58.510 Utilization (in LBAs): 1953525168 (931GiB) 00:33:58.510 UUID: 44a5f417-41f1-40db-98d8-9625b65c2acc 00:33:58.510 Thin Provisioning: Not Supported 00:33:58.510 Per-NS Atomic Units: Yes 00:33:58.510 Atomic Boundary Size (Normal): 0 00:33:58.510 Atomic Boundary Size (PFail): 0 00:33:58.510 Atomic Boundary Offset: 0 00:33:58.510 NGUID/EUI64 Never Reused: No 00:33:58.510 ANA group ID: 1 00:33:58.510 Namespace Write Protected: No 00:33:58.510 Number of LBA Formats: 1 00:33:58.510 Current LBA Format: LBA Format #00 00:33:58.510 LBA Format #00: Data Size: 512 Metadata Size: 0 00:33:58.510 00:33:58.510 10:42:43 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:33:58.510 10:42:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:58.510 10:42:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:33:58.510 10:42:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:58.510 10:42:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:33:58.510 10:42:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:58.510 10:42:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:58.510 rmmod nvme_tcp 00:33:58.510 rmmod nvme_fabrics 00:33:58.510 10:42:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:58.510 10:42:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:33:58.510 10:42:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:33:58.510 10:42:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:33:58.510 10:42:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:58.510 10:42:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:58.510 10:42:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:58.510 10:42:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:58.510 10:42:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:58.510 10:42:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:58.510 10:42:43 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:58.510 10:42:43 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:01.046 10:42:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:01.046 
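For reference, the kernel NVMe/TCP target that the two identify runs above queried was assembled through nvmet configfs at the start of this test (the nvmf/common.sh@659-677 trace at the top of this excerpt). The xtrace output shows the echo commands but not the files they redirect into, so the attribute file names in the sketch below are an assumption based on the standard /sys/kernel/config/nvmet layout; the NQN, namespace device, address, transport and port values are taken from the log.

# Condensed sketch of the kernel target setup traced above (attribute paths assumed, values from the log).
nqn=nqn.2016-06.io.spdk:testnqn
sub=/sys/kernel/config/nvmet/subsystems/$nqn
port=/sys/kernel/config/nvmet/ports/1
modprobe nvmet nvmet_tcp                        # assumed prerequisite, not shown in this excerpt
mkdir -p "$sub/namespaces/1" "$port"
echo "SPDK-$nqn"  > "$sub/attr_model"           # assumed target of the 'echo SPDK-nqn...' above
echo 1            > "$sub/attr_allow_any_host"  # assumed target of the first 'echo 1'
echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"
echo 1            > "$sub/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$sub" "$port/subsystems/"                # publish the subsystem on the TCP port
nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420

With the symlink in place the discovery service reports the two entries shown above (the discovery subsystem plus nqn.2016-06.io.spdk:testnqn), and spdk_nvme_identify can connect to either NQN over trtype:tcp traddr:10.0.0.1 trsvcid:4420.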
10:42:45 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:34:01.046 10:42:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:34:01.046 10:42:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:34:01.046 10:42:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:01.046 10:42:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:01.046 10:42:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:01.046 10:42:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:01.046 10:42:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:34:01.046 10:42:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:34:01.046 10:42:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:03.584 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:03.584 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:03.584 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:03.584 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:03.584 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:03.584 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:03.584 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:03.584 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:03.584 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:03.584 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:03.584 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:03.584 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:03.584 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:03.584 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:03.584 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:03.584 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:04.522 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:34:04.522 00:34:04.522 real 0m16.240s 00:34:04.522 user 0m4.023s 00:34:04.522 sys 0m8.562s 00:34:04.522 10:42:49 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:04.522 10:42:49 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:34:04.522 ************************************ 00:34:04.522 END TEST nvmf_identify_kernel_target 00:34:04.522 ************************************ 00:34:04.522 10:42:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:34:04.522 10:42:49 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:34:04.523 10:42:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:04.523 10:42:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:04.523 10:42:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:04.523 ************************************ 00:34:04.523 START TEST nvmf_auth_host 00:34:04.523 ************************************ 00:34:04.523 10:42:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:34:04.523 * Looking for test storage... 00:34:04.523 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:04.523 10:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:04.523 10:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:34:04.523 10:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:04.523 10:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:04.523 10:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:04.523 10:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:04.523 10:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:04.523 10:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:04.523 10:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:04.523 10:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:04.523 10:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:04.523 10:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:04.523 10:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:34:04.523 10:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:34:04.523 10:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:04.523 10:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:04.523 10:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:04.523 10:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:04.523 10:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:04.523 10:42:49 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:04.523 10:42:49 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:04.523 10:42:49 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:04.523 10:42:49 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:04.523 10:42:49 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:04.523 10:42:49 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:04.523 10:42:49 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:34:04.523 10:42:49 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:04.523 10:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:34:04.523 10:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:04.523 10:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:04.523 10:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:04.523 10:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:04.523 10:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:04.523 10:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:04.523 10:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:04.523 10:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:04.523 10:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:34:04.523 10:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:34:04.523 10:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:34:04.523 10:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:34:04.523 10:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:04.523 10:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:04.523 10:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:34:04.523 10:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:34:04.523 10:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:34:04.523 10:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:04.523 10:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:04.523 10:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:04.523 10:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:04.523 10:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:04.523 10:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:04.523 10:42:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:04.523 10:42:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:04.523 10:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:04.523 10:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:04.523 10:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:34:04.523 10:42:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:11.118 
10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:34:11.118 Found 0000:86:00.0 (0x8086 - 0x159b) 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:34:11.118 Found 0000:86:00.1 (0x8086 - 0x159b) 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:34:11.118 Found net devices under 0000:86:00.0: 
cvl_0_0 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:34:11.118 Found net devices under 0000:86:00.1: cvl_0_1 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:11.118 10:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:11.118 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:11.118 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:11.118 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:11.118 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:11.118 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:11.118 10:42:55 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:11.118 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:11.118 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:11.118 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:34:11.118 00:34:11.118 --- 10.0.0.2 ping statistics --- 00:34:11.118 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:11.118 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:34:11.118 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:11.118 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:11.118 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:34:11.118 00:34:11.118 --- 10.0.0.1 ping statistics --- 00:34:11.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:11.119 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=2601769 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 2601769 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 2601769 ']' 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
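The nvmf_tcp_init plumbing traced above splits the two e810 ports between the host and a private network namespace so that the initiator (10.0.0.1 on cvl_0_1) and the target (10.0.0.2 on cvl_0_0 inside cvl_0_0_ns_spdk) talk over a real link. Condensed from the trace, with interface names and addresses as they appear in the log:

# Target port moves into its own namespace; initiator side stays in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                                             # initiator -> target check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator check
modprobe nvme-tcp                                              # host-side NVMe/TCP initiator

nvmfappstart then launches nvmf_tgt inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth, pid 2601769 here), which is the process the surrounding waitforlisten 2601769 trace polls for.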
00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=58321234500c5fb2c8033dd77a2a4f45 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.LiA 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 58321234500c5fb2c8033dd77a2a4f45 0 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 58321234500c5fb2c8033dd77a2a4f45 0 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=58321234500c5fb2c8033dd77a2a4f45 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.LiA 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.LiA 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.LiA 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:34:11.119 
10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=14a004f7bc03b18005eedc554a826a2a8e7016fbf48a696a76c2159e4dd15d08 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.pd9 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 14a004f7bc03b18005eedc554a826a2a8e7016fbf48a696a76c2159e4dd15d08 3 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 14a004f7bc03b18005eedc554a826a2a8e7016fbf48a696a76c2159e4dd15d08 3 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=14a004f7bc03b18005eedc554a826a2a8e7016fbf48a696a76c2159e4dd15d08 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.pd9 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.pd9 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.pd9 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=356f45b33cc27a122fb875e8e50c12ec135192d16a7e5cd7 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.ixf 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 356f45b33cc27a122fb875e8e50c12ec135192d16a7e5cd7 0 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 356f45b33cc27a122fb875e8e50c12ec135192d16a7e5cd7 0 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=356f45b33cc27a122fb875e8e50c12ec135192d16a7e5cd7 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.ixf 00:34:11.119 10:42:55 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.ixf 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.ixf 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8d0a03c02b99b5717af42767e4a35cfb31bae89d3a74b5b9 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.etd 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8d0a03c02b99b5717af42767e4a35cfb31bae89d3a74b5b9 2 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8d0a03c02b99b5717af42767e4a35cfb31bae89d3a74b5b9 2 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8d0a03c02b99b5717af42767e4a35cfb31bae89d3a74b5b9 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.etd 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.etd 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.etd 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=48daf38e103493d7c3c420ebc79b4b04 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.lS8 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 48daf38e103493d7c3c420ebc79b4b04 1 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 48daf38e103493d7c3c420ebc79b4b04 1 
00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=48daf38e103493d7c3c420ebc79b4b04 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:34:11.119 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.lS8 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.lS8 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.lS8 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1679cedcfccda1f7b285941f43e685e6 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.8xL 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1679cedcfccda1f7b285941f43e685e6 1 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1679cedcfccda1f7b285941f43e685e6 1 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1679cedcfccda1f7b285941f43e685e6 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.8xL 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.8xL 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.8xL 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=aa0b2048dcf93ee24f4aaa8251cfbcd63996748875367af4 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.uVg 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key aa0b2048dcf93ee24f4aaa8251cfbcd63996748875367af4 2 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 aa0b2048dcf93ee24f4aaa8251cfbcd63996748875367af4 2 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=aa0b2048dcf93ee24f4aaa8251cfbcd63996748875367af4 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.uVg 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.uVg 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.uVg 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b295d4f2dc20f9047843db69e5626fcd 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.5mC 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b295d4f2dc20f9047843db69e5626fcd 0 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b295d4f2dc20f9047843db69e5626fcd 0 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b295d4f2dc20f9047843db69e5626fcd 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.5mC 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.5mC 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.5mC 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=afc10f31585aa5526c196965224299039151494cfbfd8484c136c78c5180ad4c 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.4KB 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key afc10f31585aa5526c196965224299039151494cfbfd8484c136c78c5180ad4c 3 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 afc10f31585aa5526c196965224299039151494cfbfd8484c136c78c5180ad4c 3 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=afc10f31585aa5526c196965224299039151494cfbfd8484c136c78c5180ad4c 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.4KB 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.4KB 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.4KB 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2601769 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 2601769 ']' 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:11.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
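The gen_dhchap_key calls traced above draw len/2 random bytes with xxd -p and hand the hex string to an inline "python -" helper whose body is not captured by the xtrace. A minimal standalone sketch of that step, assuming the DHHC-1 wrapping that the secrets later in this log exhibit (base64 of the ASCII hex string plus its CRC-32; hash id 1 = sha256 per the digests map above; CRC byte order is an assumption):

  # sketch of one gen_dhchap_key sha256 32 pass, not the harness helper itself
  key=$(xxd -p -c0 -l 16 /dev/urandom)        # 16 random bytes -> 32 hex chars ("len=32")
  file=$(mktemp -t spdk.key-sha256.XXX)
  python3 -c 'import sys, base64, zlib
  s = sys.argv[1].encode()                                   # secret = the ASCII hex string
  crc = zlib.crc32(s).to_bytes(4, "little")                  # endianness assumed
  print("DHHC-1:01:" + base64.b64encode(s + crc).decode() + ":")' "$key" > "$file"
  chmod 0600 "$file"
  echo "$file"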
00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:11.120 10:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.379 10:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:11.379 10:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:34:11.379 10:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:11.379 10:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.LiA 00:34:11.379 10:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.379 10:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.379 10:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.379 10:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.pd9 ]] 00:34:11.379 10:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.pd9 00:34:11.379 10:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.379 10:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.379 10:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.379 10:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:11.379 10:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.ixf 00:34:11.379 10:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.379 10:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.379 10:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.379 10:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.etd ]] 00:34:11.379 10:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.etd 00:34:11.379 10:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.379 10:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.379 10:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.379 10:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:11.379 10:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.lS8 00:34:11.379 10:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.379 10:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.379 10:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.379 10:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.8xL ]] 00:34:11.379 10:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.8xL 00:34:11.379 10:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.379 10:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.379 10:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.379 10:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:34:11.379 10:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.uVg 00:34:11.379 10:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.379 10:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.379 10:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.379 10:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.5mC ]] 00:34:11.379 10:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.5mC 00:34:11.379 10:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.379 10:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.379 10:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.379 10:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:11.379 10:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.4KB 00:34:11.379 10:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.379 10:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.379 10:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.379 10:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:34:11.379 10:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:34:11.379 10:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:34:11.379 10:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:11.379 10:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:11.379 10:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:11.379 10:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:11.379 10:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:11.379 10:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:11.379 10:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:11.379 10:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:11.379 10:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:11.379 10:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:11.379 10:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:34:11.379 10:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:34:11.379 10:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:34:11.379 10:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:11.379 10:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:11.379 10:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:11.379 10:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
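Outside the harness, the key-registration loop above maps onto plain rpc.py calls against the app listening on /var/tmp/spdk.sock (rpc.py's default socket); the method name, argument order and key file names below are exactly the ones shown in the rpc_cmd trace:

  # register host-side (keyN) and controller-side (ckeyN) secrets with the running SPDK app
  ./scripts/rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.LiA
  ./scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.pd9
  ./scripts/rpc.py keyring_file_add_key key1  /tmp/spdk.key-null.ixf
  ./scripts/rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.etd
  # ...and so on for key2/ckey2, key3/ckey3 and key4, as in the loop above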
00:34:11.379 10:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:34:11.379 10:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:34:11.380 10:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:11.380 10:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:13.912 Waiting for block devices as requested 00:34:14.171 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:14.171 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:14.171 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:14.430 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:14.430 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:14.430 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:14.430 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:14.689 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:14.689 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:14.689 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:14.948 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:14.948 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:14.948 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:14.948 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:15.207 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:15.207 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:15.207 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:15.774 10:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:34:15.774 10:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:15.774 10:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:34:15.774 10:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:34:15.774 10:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:15.774 10:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:34:15.774 10:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:34:15.774 10:43:00 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:34:15.774 10:43:00 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:16.033 No valid GPT data, bailing 00:34:16.033 10:43:00 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:16.033 10:43:00 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:34:16.033 10:43:00 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:34:16.033 10:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:34:16.033 10:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:34:16.033 10:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:16.033 10:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:16.033 10:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:16.033 10:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:34:16.033 10:43:00 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:34:16.033 10:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:34:16.033 10:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:34:16.033 10:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:34:16.033 10:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:34:16.033 10:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:34:16.033 10:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:34:16.033 10:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:16.033 10:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:34:16.033 00:34:16.033 Discovery Log Number of Records 2, Generation counter 2 00:34:16.033 =====Discovery Log Entry 0====== 00:34:16.033 trtype: tcp 00:34:16.033 adrfam: ipv4 00:34:16.033 subtype: current discovery subsystem 00:34:16.033 treq: not specified, sq flow control disable supported 00:34:16.033 portid: 1 00:34:16.033 trsvcid: 4420 00:34:16.033 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:16.033 traddr: 10.0.0.1 00:34:16.033 eflags: none 00:34:16.033 sectype: none 00:34:16.033 =====Discovery Log Entry 1====== 00:34:16.033 trtype: tcp 00:34:16.033 adrfam: ipv4 00:34:16.033 subtype: nvme subsystem 00:34:16.033 treq: not specified, sq flow control disable supported 00:34:16.033 portid: 1 00:34:16.033 trsvcid: 4420 00:34:16.033 subnqn: nqn.2024-02.io.spdk:cnode0 00:34:16.033 traddr: 10.0.0.1 00:34:16.033 eflags: none 00:34:16.033 sectype: none 00:34:16.033 10:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:16.033 10:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:34:16.033 10:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:34:16.033 10:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:16.033 10:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:16.033 10:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:16.033 10:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:16.033 10:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:16.033 10:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzU2ZjQ1YjMzY2MyN2ExMjJmYjg3NWU4ZTUwYzEyZWMxMzUxOTJkMTZhN2U1Y2Q3Td4DpQ==: 00:34:16.033 10:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGQwYTAzYzAyYjk5YjU3MTdhZjQyNzY3ZTRhMzVjZmIzMWJhZTg5ZDNhNzRiNWI5oV/GiA==: 00:34:16.033 10:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:16.033 10:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:16.033 10:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzU2ZjQ1YjMzY2MyN2ExMjJmYjg3NWU4ZTUwYzEyZWMxMzUxOTJkMTZhN2U1Y2Q3Td4DpQ==: 00:34:16.033 10:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGQwYTAzYzAyYjk5YjU3MTdhZjQyNzY3ZTRhMzVjZmIzMWJhZTg5ZDNhNzRiNWI5oV/GiA==: 
]] 00:34:16.033 10:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGQwYTAzYzAyYjk5YjU3MTdhZjQyNzY3ZTRhMzVjZmIzMWJhZTg5ZDNhNzRiNWI5oV/GiA==: 00:34:16.033 10:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:34:16.033 10:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:34:16.033 10:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:34:16.033 10:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:16.033 10:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:34:16.033 10:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:16.033 10:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:34:16.033 10:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:16.033 10:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:16.033 10:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:16.033 10:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:16.033 10:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.033 10:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.033 10:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.033 10:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:16.033 10:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:16.033 10:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:16.033 10:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:16.033 10:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:16.033 10:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:16.033 10:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:16.033 10:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:16.033 10:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:16.033 10:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:16.033 10:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:16.033 10:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:16.033 10:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.033 10:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.292 nvme0n1 00:34:16.292 10:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.292 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:16.292 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:16.292 10:43:01 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.292 10:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.292 10:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.292 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:16.292 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:16.292 10:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.292 10:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.292 10:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.292 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:16.292 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:16.292 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:16.292 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:34:16.292 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:16.292 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:16.292 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:16.292 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:16.292 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTgzMjEyMzQ1MDBjNWZiMmM4MDMzZGQ3N2EyYTRmNDVGfqpN: 00:34:16.292 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTRhMDA0ZjdiYzAzYjE4MDA1ZWVkYzU1NGE4MjZhMmE4ZTcwMTZmYmY0OGE2OTZhNzZjMjE1OWU0ZGQxNWQwOIij5kg=: 00:34:16.292 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:16.292 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:16.292 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTgzMjEyMzQ1MDBjNWZiMmM4MDMzZGQ3N2EyYTRmNDVGfqpN: 00:34:16.292 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTRhMDA0ZjdiYzAzYjE4MDA1ZWVkYzU1NGE4MjZhMmE4ZTcwMTZmYmY0OGE2OTZhNzZjMjE1OWU0ZGQxNWQwOIij5kg=: ]] 00:34:16.292 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTRhMDA0ZjdiYzAzYjE4MDA1ZWVkYzU1NGE4MjZhMmE4ZTcwMTZmYmY0OGE2OTZhNzZjMjE1OWU0ZGQxNWQwOIij5kg=: 00:34:16.292 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:34:16.292 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:16.292 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:16.292 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:16.292 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:16.292 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:16.292 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:16.292 10:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.293 10:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.293 10:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.293 
10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:16.293 10:43:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:16.293 10:43:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:16.293 10:43:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:16.293 10:43:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:16.293 10:43:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:16.293 10:43:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:16.293 10:43:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:16.293 10:43:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:16.293 10:43:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:16.293 10:43:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:16.293 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:16.293 10:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.293 10:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.551 nvme0n1 00:34:16.551 10:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.551 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:16.551 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:16.551 10:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.551 10:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.551 10:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.551 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:16.551 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:16.551 10:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.551 10:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.551 10:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.551 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:16.551 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:16.551 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:16.551 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:16.552 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:16.552 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:16.552 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzU2ZjQ1YjMzY2MyN2ExMjJmYjg3NWU4ZTUwYzEyZWMxMzUxOTJkMTZhN2U1Y2Q3Td4DpQ==: 00:34:16.552 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGQwYTAzYzAyYjk5YjU3MTdhZjQyNzY3ZTRhMzVjZmIzMWJhZTg5ZDNhNzRiNWI5oV/GiA==: 00:34:16.552 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:16.552 10:43:01 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:16.552 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzU2ZjQ1YjMzY2MyN2ExMjJmYjg3NWU4ZTUwYzEyZWMxMzUxOTJkMTZhN2U1Y2Q3Td4DpQ==: 00:34:16.552 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGQwYTAzYzAyYjk5YjU3MTdhZjQyNzY3ZTRhMzVjZmIzMWJhZTg5ZDNhNzRiNWI5oV/GiA==: ]] 00:34:16.552 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGQwYTAzYzAyYjk5YjU3MTdhZjQyNzY3ZTRhMzVjZmIzMWJhZTg5ZDNhNzRiNWI5oV/GiA==: 00:34:16.552 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:34:16.552 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:16.552 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:16.552 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:16.552 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:16.552 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:16.552 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:16.552 10:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.552 10:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.552 10:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.552 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:16.552 10:43:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:16.552 10:43:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:16.552 10:43:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:16.552 10:43:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:16.552 10:43:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:16.552 10:43:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:16.552 10:43:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:16.552 10:43:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:16.552 10:43:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:16.552 10:43:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:16.552 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:16.552 10:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.552 10:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.552 nvme0n1 00:34:16.552 10:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.552 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:16.552 10:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.552 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:16.552 10:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
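Each connect_authenticate pass in this loop reduces to four RPCs; one pass of the sha256/ffdhe2048 case just traced, written out directly with the same addresses, NQNs and key names:

  ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  ./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect "nvme0"
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0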
00:34:16.811 10:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.811 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:16.811 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:16.811 10:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.811 10:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.811 10:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.811 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:16.811 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:16.811 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:16.811 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:16.811 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:16.811 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:16.811 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDhkYWYzOGUxMDM0OTNkN2MzYzQyMGViYzc5YjRiMDScupjS: 00:34:16.811 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTY3OWNlZGNmY2NkYTFmN2IyODU5NDFmNDNlNjg1ZTYz1r0q: 00:34:16.811 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:16.811 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:16.811 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDhkYWYzOGUxMDM0OTNkN2MzYzQyMGViYzc5YjRiMDScupjS: 00:34:16.811 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTY3OWNlZGNmY2NkYTFmN2IyODU5NDFmNDNlNjg1ZTYz1r0q: ]] 00:34:16.811 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTY3OWNlZGNmY2NkYTFmN2IyODU5NDFmNDNlNjg1ZTYz1r0q: 00:34:16.811 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:34:16.811 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:16.811 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:16.811 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:16.811 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:16.811 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:16.811 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:16.811 10:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.811 10:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.811 10:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.811 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:16.811 10:43:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:16.811 10:43:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:16.811 10:43:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:16.811 10:43:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:16.811 10:43:01 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:16.811 10:43:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:16.811 10:43:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:16.811 10:43:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:16.811 10:43:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:16.811 10:43:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:16.811 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:16.811 10:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.811 10:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.811 nvme0n1 00:34:16.811 10:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.811 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:16.811 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:16.811 10:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.811 10:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.811 10:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.811 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:16.811 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:16.811 10:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.811 10:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.070 10:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.070 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:17.070 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:34:17.070 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:17.070 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:17.070 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:17.070 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:17.070 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWEwYjIwNDhkY2Y5M2VlMjRmNGFhYTgyNTFjZmJjZDYzOTk2NzQ4ODc1MzY3YWY0qpEjUg==: 00:34:17.070 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjI5NWQ0ZjJkYzIwZjkwNDc4NDNkYjY5ZTU2MjZmY2RTq37m: 00:34:17.070 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:17.070 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:17.070 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWEwYjIwNDhkY2Y5M2VlMjRmNGFhYTgyNTFjZmJjZDYzOTk2NzQ4ODc1MzY3YWY0qpEjUg==: 00:34:17.070 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjI5NWQ0ZjJkYzIwZjkwNDc4NDNkYjY5ZTU2MjZmY2RTq37m: ]] 00:34:17.070 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjI5NWQ0ZjJkYzIwZjkwNDc4NDNkYjY5ZTU2MjZmY2RTq37m: 00:34:17.070 10:43:01 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:34:17.070 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:17.070 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:17.070 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:17.070 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:17.070 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:17.070 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:17.070 10:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.070 10:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.070 10:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.070 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:17.070 10:43:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:17.070 10:43:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:17.070 10:43:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:17.070 10:43:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:17.070 10:43:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:17.070 10:43:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:17.070 10:43:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:17.070 10:43:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:17.070 10:43:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:17.070 10:43:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:17.070 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:17.070 10:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.070 10:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.070 nvme0n1 00:34:17.070 10:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.070 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:17.070 10:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:17.070 10:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.070 10:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.070 10:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.070 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:17.070 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:17.071 10:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.071 10:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.071 10:43:02 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.071 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:17.071 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:34:17.071 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:17.071 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:17.071 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:17.071 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:17.071 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWZjMTBmMzE1ODVhYTU1MjZjMTk2OTY1MjI0Mjk5MDM5MTUxNDk0Y2ZiZmQ4NDg0YzEzNmM3OGM1MTgwYWQ0Y3DwZ3E=: 00:34:17.071 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:17.071 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:17.071 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:17.071 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWZjMTBmMzE1ODVhYTU1MjZjMTk2OTY1MjI0Mjk5MDM5MTUxNDk0Y2ZiZmQ4NDg0YzEzNmM3OGM1MTgwYWQ0Y3DwZ3E=: 00:34:17.071 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:17.071 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:34:17.071 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:17.071 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:17.071 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:17.071 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:17.071 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:17.071 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:17.071 10:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.071 10:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.071 10:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.071 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:17.071 10:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:17.071 10:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:17.071 10:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:17.071 10:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:17.071 10:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:17.330 10:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:17.330 10:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:17.330 10:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:17.330 10:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:17.330 10:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:17.330 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:17.330 10:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.330 10:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.330 nvme0n1 00:34:17.330 10:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.330 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:17.330 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:17.330 10:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.330 10:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.330 10:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.330 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:17.330 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:17.330 10:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.330 10:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.330 10:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.330 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:17.330 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:17.330 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:34:17.330 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:17.330 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:17.330 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:17.330 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:17.330 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTgzMjEyMzQ1MDBjNWZiMmM4MDMzZGQ3N2EyYTRmNDVGfqpN: 00:34:17.330 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTRhMDA0ZjdiYzAzYjE4MDA1ZWVkYzU1NGE4MjZhMmE4ZTcwMTZmYmY0OGE2OTZhNzZjMjE1OWU0ZGQxNWQwOIij5kg=: 00:34:17.330 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:17.330 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:17.330 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTgzMjEyMzQ1MDBjNWZiMmM4MDMzZGQ3N2EyYTRmNDVGfqpN: 00:34:17.330 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTRhMDA0ZjdiYzAzYjE4MDA1ZWVkYzU1NGE4MjZhMmE4ZTcwMTZmYmY0OGE2OTZhNzZjMjE1OWU0ZGQxNWQwOIij5kg=: ]] 00:34:17.330 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTRhMDA0ZjdiYzAzYjE4MDA1ZWVkYzU1NGE4MjZhMmE4ZTcwMTZmYmY0OGE2OTZhNzZjMjE1OWU0ZGQxNWQwOIij5kg=: 00:34:17.330 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:34:17.330 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:17.330 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:17.330 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:17.330 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:17.330 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:34:17.330 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:17.330 10:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.330 10:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.330 10:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.330 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:17.330 10:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:17.331 10:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:17.331 10:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:17.331 10:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:17.331 10:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:17.331 10:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:17.331 10:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:17.331 10:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:17.331 10:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:17.331 10:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:17.331 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:17.331 10:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.331 10:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.590 nvme0n1 00:34:17.590 10:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.590 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:17.590 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:17.590 10:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.590 10:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.590 10:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.590 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:17.590 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:17.590 10:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.590 10:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.590 10:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.590 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:17.590 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:34:17.590 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:17.590 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:17.590 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:17.590 10:43:02 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:34:17.590 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzU2ZjQ1YjMzY2MyN2ExMjJmYjg3NWU4ZTUwYzEyZWMxMzUxOTJkMTZhN2U1Y2Q3Td4DpQ==: 00:34:17.590 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGQwYTAzYzAyYjk5YjU3MTdhZjQyNzY3ZTRhMzVjZmIzMWJhZTg5ZDNhNzRiNWI5oV/GiA==: 00:34:17.590 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:17.590 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:17.590 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzU2ZjQ1YjMzY2MyN2ExMjJmYjg3NWU4ZTUwYzEyZWMxMzUxOTJkMTZhN2U1Y2Q3Td4DpQ==: 00:34:17.590 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGQwYTAzYzAyYjk5YjU3MTdhZjQyNzY3ZTRhMzVjZmIzMWJhZTg5ZDNhNzRiNWI5oV/GiA==: ]] 00:34:17.590 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGQwYTAzYzAyYjk5YjU3MTdhZjQyNzY3ZTRhMzVjZmIzMWJhZTg5ZDNhNzRiNWI5oV/GiA==: 00:34:17.590 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:34:17.590 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:17.590 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:17.590 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:17.590 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:17.590 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:17.590 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:17.590 10:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.590 10:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.590 10:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.590 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:17.590 10:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:17.590 10:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:17.590 10:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:17.590 10:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:17.590 10:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:17.590 10:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:17.590 10:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:17.590 10:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:17.590 10:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:17.590 10:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:17.590 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:17.590 10:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.590 10:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.849 nvme0n1 00:34:17.849 
10:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.849 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:17.849 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:17.849 10:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.849 10:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.849 10:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.849 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:17.849 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:17.849 10:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.849 10:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.849 10:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.849 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:17.849 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:34:17.849 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:17.849 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:17.849 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:17.849 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:17.849 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDhkYWYzOGUxMDM0OTNkN2MzYzQyMGViYzc5YjRiMDScupjS: 00:34:17.849 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTY3OWNlZGNmY2NkYTFmN2IyODU5NDFmNDNlNjg1ZTYz1r0q: 00:34:17.849 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:17.849 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:17.849 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDhkYWYzOGUxMDM0OTNkN2MzYzQyMGViYzc5YjRiMDScupjS: 00:34:17.849 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTY3OWNlZGNmY2NkYTFmN2IyODU5NDFmNDNlNjg1ZTYz1r0q: ]] 00:34:17.849 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTY3OWNlZGNmY2NkYTFmN2IyODU5NDFmNDNlNjg1ZTYz1r0q: 00:34:17.849 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:34:17.849 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:17.849 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:17.849 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:17.849 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:17.849 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:17.849 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:17.849 10:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.849 10:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.849 10:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.849 10:43:02 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:34:17.849 10:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:17.849 10:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:17.849 10:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:17.849 10:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:17.849 10:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:17.849 10:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:17.849 10:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:17.849 10:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:17.849 10:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:17.849 10:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:17.849 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:17.849 10:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.849 10:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.108 nvme0n1 00:34:18.108 10:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:18.108 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:18.108 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:18.108 10:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:18.108 10:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.108 10:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:18.108 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:18.108 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:18.108 10:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:18.108 10:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.108 10:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:18.108 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:18.108 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:34:18.108 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:18.108 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:18.108 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:18.108 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:18.108 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWEwYjIwNDhkY2Y5M2VlMjRmNGFhYTgyNTFjZmJjZDYzOTk2NzQ4ODc1MzY3YWY0qpEjUg==: 00:34:18.108 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjI5NWQ0ZjJkYzIwZjkwNDc4NDNkYjY5ZTU2MjZmY2RTq37m: 00:34:18.108 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:18.108 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:34:18.108 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWEwYjIwNDhkY2Y5M2VlMjRmNGFhYTgyNTFjZmJjZDYzOTk2NzQ4ODc1MzY3YWY0qpEjUg==: 00:34:18.108 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjI5NWQ0ZjJkYzIwZjkwNDc4NDNkYjY5ZTU2MjZmY2RTq37m: ]] 00:34:18.108 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjI5NWQ0ZjJkYzIwZjkwNDc4NDNkYjY5ZTU2MjZmY2RTq37m: 00:34:18.109 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:34:18.109 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:18.109 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:18.109 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:18.109 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:18.109 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:18.109 10:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:18.109 10:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:18.109 10:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.109 10:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:18.109 10:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:18.109 10:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:18.109 10:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:18.109 10:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:18.109 10:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:18.109 10:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:18.109 10:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:18.109 10:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:18.109 10:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:18.109 10:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:18.109 10:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:18.109 10:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:18.109 10:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:18.109 10:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.375 nvme0n1 00:34:18.375 10:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:18.376 10:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:18.376 10:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:18.376 10:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:18.376 10:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.376 10:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:18.376 
10:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:18.376 10:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:18.376 10:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:18.376 10:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.376 10:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:18.376 10:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:18.376 10:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:34:18.376 10:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:18.376 10:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:18.376 10:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:18.376 10:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:18.376 10:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWZjMTBmMzE1ODVhYTU1MjZjMTk2OTY1MjI0Mjk5MDM5MTUxNDk0Y2ZiZmQ4NDg0YzEzNmM3OGM1MTgwYWQ0Y3DwZ3E=: 00:34:18.376 10:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:18.376 10:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:18.376 10:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:18.376 10:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWZjMTBmMzE1ODVhYTU1MjZjMTk2OTY1MjI0Mjk5MDM5MTUxNDk0Y2ZiZmQ4NDg0YzEzNmM3OGM1MTgwYWQ0Y3DwZ3E=: 00:34:18.376 10:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:18.377 10:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:34:18.377 10:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:18.377 10:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:18.377 10:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:18.377 10:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:18.377 10:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:18.377 10:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:18.377 10:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:18.377 10:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.377 10:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:18.377 10:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:18.377 10:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:18.377 10:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:18.377 10:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:18.377 10:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:18.377 10:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:18.377 10:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:18.377 10:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:18.377 10:43:03 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:18.377 10:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:18.378 10:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:18.378 10:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:18.378 10:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:18.378 10:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.694 nvme0n1 00:34:18.694 10:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:18.694 10:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:18.694 10:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:18.694 10:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:18.694 10:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.694 10:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:18.694 10:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:18.694 10:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:18.694 10:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:18.694 10:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.694 10:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:18.694 10:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:18.694 10:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:18.694 10:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:34:18.694 10:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:18.694 10:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:18.694 10:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:18.694 10:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:18.694 10:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTgzMjEyMzQ1MDBjNWZiMmM4MDMzZGQ3N2EyYTRmNDVGfqpN: 00:34:18.694 10:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTRhMDA0ZjdiYzAzYjE4MDA1ZWVkYzU1NGE4MjZhMmE4ZTcwMTZmYmY0OGE2OTZhNzZjMjE1OWU0ZGQxNWQwOIij5kg=: 00:34:18.694 10:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:18.694 10:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:18.694 10:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTgzMjEyMzQ1MDBjNWZiMmM4MDMzZGQ3N2EyYTRmNDVGfqpN: 00:34:18.694 10:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTRhMDA0ZjdiYzAzYjE4MDA1ZWVkYzU1NGE4MjZhMmE4ZTcwMTZmYmY0OGE2OTZhNzZjMjE1OWU0ZGQxNWQwOIij5kg=: ]] 00:34:18.694 10:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTRhMDA0ZjdiYzAzYjE4MDA1ZWVkYzU1NGE4MjZhMmE4ZTcwMTZmYmY0OGE2OTZhNzZjMjE1OWU0ZGQxNWQwOIij5kg=: 00:34:18.694 10:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:34:18.694 10:43:03 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:18.694 10:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:18.694 10:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:18.694 10:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:18.694 10:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:18.694 10:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:18.694 10:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:18.694 10:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.694 10:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:18.694 10:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:18.694 10:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:18.694 10:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:18.694 10:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:18.694 10:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:18.694 10:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:18.694 10:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:18.694 10:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:18.694 10:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:18.694 10:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:18.694 10:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:18.694 10:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:18.694 10:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:18.694 10:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.973 nvme0n1 00:34:18.973 10:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:18.973 10:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:18.973 10:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:18.973 10:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:18.973 10:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.973 10:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:18.973 10:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:18.973 10:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:18.973 10:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:18.973 10:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.973 10:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:18.973 10:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:34:18.973 10:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:34:18.973 10:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:18.973 10:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:18.973 10:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:18.973 10:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:18.973 10:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzU2ZjQ1YjMzY2MyN2ExMjJmYjg3NWU4ZTUwYzEyZWMxMzUxOTJkMTZhN2U1Y2Q3Td4DpQ==: 00:34:18.973 10:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGQwYTAzYzAyYjk5YjU3MTdhZjQyNzY3ZTRhMzVjZmIzMWJhZTg5ZDNhNzRiNWI5oV/GiA==: 00:34:18.973 10:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:18.973 10:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:18.973 10:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzU2ZjQ1YjMzY2MyN2ExMjJmYjg3NWU4ZTUwYzEyZWMxMzUxOTJkMTZhN2U1Y2Q3Td4DpQ==: 00:34:18.973 10:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGQwYTAzYzAyYjk5YjU3MTdhZjQyNzY3ZTRhMzVjZmIzMWJhZTg5ZDNhNzRiNWI5oV/GiA==: ]] 00:34:18.973 10:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGQwYTAzYzAyYjk5YjU3MTdhZjQyNzY3ZTRhMzVjZmIzMWJhZTg5ZDNhNzRiNWI5oV/GiA==: 00:34:18.973 10:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:34:18.973 10:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:18.973 10:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:18.973 10:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:18.973 10:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:18.973 10:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:18.973 10:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:18.973 10:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:18.973 10:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.973 10:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:18.973 10:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:18.973 10:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:18.973 10:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:18.973 10:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:18.973 10:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:18.973 10:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:18.973 10:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:18.973 10:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:18.973 10:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:18.973 10:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:18.973 10:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:18.973 10:43:03 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:18.973 10:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:18.973 10:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.232 nvme0n1 00:34:19.232 10:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:19.232 10:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:19.232 10:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:19.232 10:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:19.232 10:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.232 10:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:19.232 10:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:19.232 10:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:19.232 10:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:19.232 10:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.232 10:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:19.232 10:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:19.233 10:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:34:19.233 10:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:19.233 10:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:19.233 10:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:19.233 10:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:19.233 10:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDhkYWYzOGUxMDM0OTNkN2MzYzQyMGViYzc5YjRiMDScupjS: 00:34:19.233 10:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTY3OWNlZGNmY2NkYTFmN2IyODU5NDFmNDNlNjg1ZTYz1r0q: 00:34:19.233 10:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:19.233 10:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:19.233 10:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDhkYWYzOGUxMDM0OTNkN2MzYzQyMGViYzc5YjRiMDScupjS: 00:34:19.233 10:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTY3OWNlZGNmY2NkYTFmN2IyODU5NDFmNDNlNjg1ZTYz1r0q: ]] 00:34:19.233 10:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTY3OWNlZGNmY2NkYTFmN2IyODU5NDFmNDNlNjg1ZTYz1r0q: 00:34:19.233 10:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:34:19.233 10:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:19.233 10:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:19.233 10:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:19.233 10:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:19.233 10:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:19.233 10:43:04 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:19.233 10:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:19.233 10:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.233 10:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:19.233 10:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:19.233 10:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:19.233 10:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:19.233 10:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:19.233 10:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:19.233 10:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:19.233 10:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:19.233 10:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:19.233 10:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:19.233 10:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:19.233 10:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:19.233 10:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:19.233 10:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:19.233 10:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.492 nvme0n1 00:34:19.492 10:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:19.492 10:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:19.492 10:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:19.492 10:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:19.492 10:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.492 10:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:19.492 10:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:19.492 10:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:19.751 10:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:19.751 10:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.751 10:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:19.751 10:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:19.751 10:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:34:19.751 10:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:19.751 10:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:19.751 10:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:19.751 10:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
00:34:19.751 10:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWEwYjIwNDhkY2Y5M2VlMjRmNGFhYTgyNTFjZmJjZDYzOTk2NzQ4ODc1MzY3YWY0qpEjUg==: 00:34:19.751 10:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjI5NWQ0ZjJkYzIwZjkwNDc4NDNkYjY5ZTU2MjZmY2RTq37m: 00:34:19.751 10:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:19.751 10:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:19.751 10:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWEwYjIwNDhkY2Y5M2VlMjRmNGFhYTgyNTFjZmJjZDYzOTk2NzQ4ODc1MzY3YWY0qpEjUg==: 00:34:19.751 10:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjI5NWQ0ZjJkYzIwZjkwNDc4NDNkYjY5ZTU2MjZmY2RTq37m: ]] 00:34:19.751 10:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjI5NWQ0ZjJkYzIwZjkwNDc4NDNkYjY5ZTU2MjZmY2RTq37m: 00:34:19.751 10:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:34:19.751 10:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:19.751 10:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:19.751 10:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:19.751 10:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:19.751 10:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:19.751 10:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:19.751 10:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:19.751 10:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.751 10:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:19.751 10:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:19.751 10:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:19.751 10:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:19.751 10:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:19.751 10:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:19.751 10:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:19.751 10:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:19.751 10:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:19.751 10:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:19.751 10:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:19.751 10:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:19.751 10:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:19.751 10:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:19.751 10:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.011 nvme0n1 00:34:20.011 10:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.011 10:43:04 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:20.011 10:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:20.011 10:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.011 10:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.011 10:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.011 10:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:20.011 10:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:20.011 10:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.011 10:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.011 10:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.011 10:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:20.011 10:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:34:20.011 10:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:20.011 10:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:20.011 10:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:20.011 10:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:20.011 10:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWZjMTBmMzE1ODVhYTU1MjZjMTk2OTY1MjI0Mjk5MDM5MTUxNDk0Y2ZiZmQ4NDg0YzEzNmM3OGM1MTgwYWQ0Y3DwZ3E=: 00:34:20.011 10:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:20.011 10:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:20.011 10:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:20.011 10:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWZjMTBmMzE1ODVhYTU1MjZjMTk2OTY1MjI0Mjk5MDM5MTUxNDk0Y2ZiZmQ4NDg0YzEzNmM3OGM1MTgwYWQ0Y3DwZ3E=: 00:34:20.011 10:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:20.011 10:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:34:20.011 10:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:20.011 10:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:20.011 10:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:20.011 10:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:20.011 10:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:20.011 10:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:20.011 10:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.011 10:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.011 10:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.011 10:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:20.011 10:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:20.011 10:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:20.011 10:43:04 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:34:20.011 10:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:20.011 10:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:20.011 10:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:20.011 10:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:20.011 10:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:20.011 10:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:20.011 10:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:20.011 10:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:20.011 10:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.011 10:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.270 nvme0n1 00:34:20.270 10:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.270 10:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:20.270 10:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:20.270 10:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.270 10:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.270 10:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.270 10:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:20.270 10:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:20.270 10:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.270 10:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.270 10:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.270 10:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:20.270 10:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:20.270 10:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:34:20.270 10:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:20.270 10:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:20.270 10:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:20.270 10:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:20.270 10:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTgzMjEyMzQ1MDBjNWZiMmM4MDMzZGQ3N2EyYTRmNDVGfqpN: 00:34:20.270 10:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTRhMDA0ZjdiYzAzYjE4MDA1ZWVkYzU1NGE4MjZhMmE4ZTcwMTZmYmY0OGE2OTZhNzZjMjE1OWU0ZGQxNWQwOIij5kg=: 00:34:20.270 10:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:20.270 10:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:20.270 10:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTgzMjEyMzQ1MDBjNWZiMmM4MDMzZGQ3N2EyYTRmNDVGfqpN: 00:34:20.270 10:43:05 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTRhMDA0ZjdiYzAzYjE4MDA1ZWVkYzU1NGE4MjZhMmE4ZTcwMTZmYmY0OGE2OTZhNzZjMjE1OWU0ZGQxNWQwOIij5kg=: ]] 00:34:20.270 10:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTRhMDA0ZjdiYzAzYjE4MDA1ZWVkYzU1NGE4MjZhMmE4ZTcwMTZmYmY0OGE2OTZhNzZjMjE1OWU0ZGQxNWQwOIij5kg=: 00:34:20.270 10:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:34:20.270 10:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:20.270 10:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:20.270 10:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:20.270 10:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:20.270 10:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:20.270 10:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:20.270 10:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.270 10:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.270 10:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.270 10:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:20.270 10:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:20.270 10:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:20.270 10:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:20.270 10:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:20.270 10:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:20.270 10:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:20.270 10:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:20.270 10:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:20.270 10:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:20.270 10:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:20.270 10:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:20.270 10:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.270 10:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.839 nvme0n1 00:34:20.839 10:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.839 10:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:20.839 10:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:20.839 10:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.839 10:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.839 10:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.839 10:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:20.839 
10:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:20.839 10:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.839 10:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.839 10:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.839 10:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:20.839 10:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:34:20.839 10:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:20.839 10:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:20.839 10:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:20.839 10:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:20.839 10:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzU2ZjQ1YjMzY2MyN2ExMjJmYjg3NWU4ZTUwYzEyZWMxMzUxOTJkMTZhN2U1Y2Q3Td4DpQ==: 00:34:20.839 10:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGQwYTAzYzAyYjk5YjU3MTdhZjQyNzY3ZTRhMzVjZmIzMWJhZTg5ZDNhNzRiNWI5oV/GiA==: 00:34:20.839 10:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:20.839 10:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:20.839 10:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzU2ZjQ1YjMzY2MyN2ExMjJmYjg3NWU4ZTUwYzEyZWMxMzUxOTJkMTZhN2U1Y2Q3Td4DpQ==: 00:34:20.839 10:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGQwYTAzYzAyYjk5YjU3MTdhZjQyNzY3ZTRhMzVjZmIzMWJhZTg5ZDNhNzRiNWI5oV/GiA==: ]] 00:34:20.839 10:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGQwYTAzYzAyYjk5YjU3MTdhZjQyNzY3ZTRhMzVjZmIzMWJhZTg5ZDNhNzRiNWI5oV/GiA==: 00:34:20.839 10:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:34:20.839 10:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:20.839 10:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:20.839 10:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:20.839 10:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:20.839 10:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:20.839 10:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:20.839 10:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.839 10:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.839 10:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.839 10:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:20.839 10:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:20.839 10:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:20.839 10:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:20.839 10:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:20.839 10:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:20.839 10:43:05 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:20.839 10:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:20.839 10:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:20.839 10:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:20.839 10:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:20.839 10:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:20.839 10:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.839 10:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.099 nvme0n1 00:34:21.099 10:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.099 10:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:21.099 10:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:21.099 10:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.099 10:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.099 10:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.099 10:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:21.099 10:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:21.099 10:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.099 10:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.099 10:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.099 10:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:21.099 10:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:34:21.099 10:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:21.099 10:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:21.099 10:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:21.099 10:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:21.099 10:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDhkYWYzOGUxMDM0OTNkN2MzYzQyMGViYzc5YjRiMDScupjS: 00:34:21.099 10:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTY3OWNlZGNmY2NkYTFmN2IyODU5NDFmNDNlNjg1ZTYz1r0q: 00:34:21.099 10:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:21.099 10:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:21.099 10:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDhkYWYzOGUxMDM0OTNkN2MzYzQyMGViYzc5YjRiMDScupjS: 00:34:21.099 10:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTY3OWNlZGNmY2NkYTFmN2IyODU5NDFmNDNlNjg1ZTYz1r0q: ]] 00:34:21.099 10:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTY3OWNlZGNmY2NkYTFmN2IyODU5NDFmNDNlNjg1ZTYz1r0q: 00:34:21.099 10:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:34:21.099 10:43:06 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:21.099 10:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:21.099 10:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:21.099 10:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:21.099 10:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:21.099 10:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:21.099 10:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.099 10:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.099 10:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.099 10:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:21.099 10:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:21.099 10:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:21.099 10:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:21.099 10:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:21.099 10:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:21.099 10:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:21.099 10:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:21.099 10:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:21.099 10:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:21.099 10:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:21.099 10:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:21.099 10:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.099 10:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.668 nvme0n1 00:34:21.668 10:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.668 10:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:21.668 10:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:21.668 10:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.668 10:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.668 10:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.668 10:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:21.668 10:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:21.669 10:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.669 10:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.669 10:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.669 10:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:21.669 
10:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:34:21.669 10:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:21.669 10:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:21.669 10:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:21.669 10:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:21.669 10:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWEwYjIwNDhkY2Y5M2VlMjRmNGFhYTgyNTFjZmJjZDYzOTk2NzQ4ODc1MzY3YWY0qpEjUg==: 00:34:21.669 10:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjI5NWQ0ZjJkYzIwZjkwNDc4NDNkYjY5ZTU2MjZmY2RTq37m: 00:34:21.669 10:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:21.669 10:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:21.669 10:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWEwYjIwNDhkY2Y5M2VlMjRmNGFhYTgyNTFjZmJjZDYzOTk2NzQ4ODc1MzY3YWY0qpEjUg==: 00:34:21.669 10:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjI5NWQ0ZjJkYzIwZjkwNDc4NDNkYjY5ZTU2MjZmY2RTq37m: ]] 00:34:21.669 10:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjI5NWQ0ZjJkYzIwZjkwNDc4NDNkYjY5ZTU2MjZmY2RTq37m: 00:34:21.669 10:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:34:21.669 10:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:21.669 10:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:21.669 10:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:21.669 10:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:21.669 10:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:21.669 10:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:21.669 10:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.669 10:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.669 10:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.669 10:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:21.669 10:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:21.669 10:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:21.669 10:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:21.669 10:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:21.669 10:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:21.669 10:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:21.669 10:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:21.669 10:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:21.669 10:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:21.669 10:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:21.669 10:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:21.669 10:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.669 10:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.928 nvme0n1 00:34:21.928 10:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.928 10:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:21.928 10:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:21.928 10:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.928 10:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.187 10:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.187 10:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:22.187 10:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:22.187 10:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.187 10:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.187 10:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.187 10:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:22.187 10:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:34:22.187 10:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:22.187 10:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:22.187 10:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:22.187 10:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:22.187 10:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWZjMTBmMzE1ODVhYTU1MjZjMTk2OTY1MjI0Mjk5MDM5MTUxNDk0Y2ZiZmQ4NDg0YzEzNmM3OGM1MTgwYWQ0Y3DwZ3E=: 00:34:22.187 10:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:22.187 10:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:22.187 10:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:22.187 10:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWZjMTBmMzE1ODVhYTU1MjZjMTk2OTY1MjI0Mjk5MDM5MTUxNDk0Y2ZiZmQ4NDg0YzEzNmM3OGM1MTgwYWQ0Y3DwZ3E=: 00:34:22.187 10:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:22.187 10:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:34:22.187 10:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:22.187 10:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:22.187 10:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:22.187 10:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:22.187 10:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:22.187 10:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:22.187 10:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.187 10:43:06 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:22.187 10:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.187 10:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:22.187 10:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:22.187 10:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:22.187 10:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:22.187 10:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:22.187 10:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:22.187 10:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:22.187 10:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:22.187 10:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:22.187 10:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:22.187 10:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:22.187 10:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:22.187 10:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.187 10:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.446 nvme0n1 00:34:22.446 10:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.446 10:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:22.446 10:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:22.446 10:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.446 10:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.446 10:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.446 10:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:22.446 10:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:22.446 10:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.446 10:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.705 10:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.705 10:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:22.705 10:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:22.705 10:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:34:22.705 10:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:22.705 10:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:22.705 10:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:22.705 10:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:22.705 10:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTgzMjEyMzQ1MDBjNWZiMmM4MDMzZGQ3N2EyYTRmNDVGfqpN: 00:34:22.705 10:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MTRhMDA0ZjdiYzAzYjE4MDA1ZWVkYzU1NGE4MjZhMmE4ZTcwMTZmYmY0OGE2OTZhNzZjMjE1OWU0ZGQxNWQwOIij5kg=: 00:34:22.705 10:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:22.705 10:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:22.705 10:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTgzMjEyMzQ1MDBjNWZiMmM4MDMzZGQ3N2EyYTRmNDVGfqpN: 00:34:22.705 10:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTRhMDA0ZjdiYzAzYjE4MDA1ZWVkYzU1NGE4MjZhMmE4ZTcwMTZmYmY0OGE2OTZhNzZjMjE1OWU0ZGQxNWQwOIij5kg=: ]] 00:34:22.705 10:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTRhMDA0ZjdiYzAzYjE4MDA1ZWVkYzU1NGE4MjZhMmE4ZTcwMTZmYmY0OGE2OTZhNzZjMjE1OWU0ZGQxNWQwOIij5kg=: 00:34:22.705 10:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:34:22.705 10:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:22.705 10:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:22.705 10:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:22.705 10:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:22.705 10:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:22.705 10:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:22.705 10:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.705 10:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.705 10:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.705 10:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:22.705 10:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:22.705 10:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:22.705 10:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:22.705 10:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:22.705 10:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:22.705 10:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:22.705 10:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:22.705 10:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:22.705 10:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:22.705 10:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:22.705 10:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:22.705 10:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.705 10:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.273 nvme0n1 00:34:23.273 10:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:23.273 10:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:23.273 10:43:08 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:23.273 10:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:23.273 10:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.273 10:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:23.273 10:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:23.273 10:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:23.273 10:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:23.273 10:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.273 10:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:23.273 10:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:23.273 10:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:34:23.273 10:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:23.273 10:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:23.273 10:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:23.273 10:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:23.273 10:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzU2ZjQ1YjMzY2MyN2ExMjJmYjg3NWU4ZTUwYzEyZWMxMzUxOTJkMTZhN2U1Y2Q3Td4DpQ==: 00:34:23.273 10:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGQwYTAzYzAyYjk5YjU3MTdhZjQyNzY3ZTRhMzVjZmIzMWJhZTg5ZDNhNzRiNWI5oV/GiA==: 00:34:23.273 10:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:23.273 10:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:23.273 10:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzU2ZjQ1YjMzY2MyN2ExMjJmYjg3NWU4ZTUwYzEyZWMxMzUxOTJkMTZhN2U1Y2Q3Td4DpQ==: 00:34:23.273 10:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGQwYTAzYzAyYjk5YjU3MTdhZjQyNzY3ZTRhMzVjZmIzMWJhZTg5ZDNhNzRiNWI5oV/GiA==: ]] 00:34:23.273 10:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGQwYTAzYzAyYjk5YjU3MTdhZjQyNzY3ZTRhMzVjZmIzMWJhZTg5ZDNhNzRiNWI5oV/GiA==: 00:34:23.273 10:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:34:23.273 10:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:23.273 10:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:23.273 10:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:23.273 10:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:23.273 10:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:23.273 10:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:23.273 10:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:23.273 10:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.273 10:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:23.273 10:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:23.273 10:43:08 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:34:23.273 10:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:23.273 10:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:23.273 10:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:23.273 10:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:23.274 10:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:23.274 10:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:23.274 10:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:23.274 10:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:23.274 10:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:23.274 10:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:23.274 10:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:23.274 10:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.841 nvme0n1 00:34:23.841 10:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:23.841 10:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:23.841 10:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:23.841 10:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:23.841 10:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.841 10:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:23.841 10:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:23.842 10:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:23.842 10:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:23.842 10:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.842 10:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:23.842 10:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:23.842 10:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:34:23.842 10:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:23.842 10:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:23.842 10:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:23.842 10:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:23.842 10:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDhkYWYzOGUxMDM0OTNkN2MzYzQyMGViYzc5YjRiMDScupjS: 00:34:23.842 10:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTY3OWNlZGNmY2NkYTFmN2IyODU5NDFmNDNlNjg1ZTYz1r0q: 00:34:23.842 10:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:23.842 10:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:23.842 10:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:NDhkYWYzOGUxMDM0OTNkN2MzYzQyMGViYzc5YjRiMDScupjS: 00:34:23.842 10:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTY3OWNlZGNmY2NkYTFmN2IyODU5NDFmNDNlNjg1ZTYz1r0q: ]] 00:34:23.842 10:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTY3OWNlZGNmY2NkYTFmN2IyODU5NDFmNDNlNjg1ZTYz1r0q: 00:34:23.842 10:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:34:23.842 10:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:23.842 10:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:23.842 10:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:23.842 10:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:23.842 10:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:23.842 10:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:23.842 10:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:23.842 10:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.842 10:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:23.842 10:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:23.842 10:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:23.842 10:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:23.842 10:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:23.842 10:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:23.842 10:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:23.842 10:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:23.842 10:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:23.842 10:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:23.842 10:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:23.842 10:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:23.842 10:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:23.842 10:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:23.842 10:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.409 nvme0n1 00:34:24.409 10:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:24.409 10:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:24.409 10:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:24.409 10:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:24.409 10:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.409 10:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:24.667 10:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:24.667 
10:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:24.667 10:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:24.667 10:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.667 10:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:24.667 10:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:24.667 10:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:34:24.667 10:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:24.667 10:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:24.667 10:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:24.667 10:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:24.667 10:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWEwYjIwNDhkY2Y5M2VlMjRmNGFhYTgyNTFjZmJjZDYzOTk2NzQ4ODc1MzY3YWY0qpEjUg==: 00:34:24.667 10:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjI5NWQ0ZjJkYzIwZjkwNDc4NDNkYjY5ZTU2MjZmY2RTq37m: 00:34:24.667 10:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:24.667 10:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:24.667 10:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWEwYjIwNDhkY2Y5M2VlMjRmNGFhYTgyNTFjZmJjZDYzOTk2NzQ4ODc1MzY3YWY0qpEjUg==: 00:34:24.667 10:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjI5NWQ0ZjJkYzIwZjkwNDc4NDNkYjY5ZTU2MjZmY2RTq37m: ]] 00:34:24.667 10:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjI5NWQ0ZjJkYzIwZjkwNDc4NDNkYjY5ZTU2MjZmY2RTq37m: 00:34:24.668 10:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:34:24.668 10:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:24.668 10:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:24.668 10:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:24.668 10:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:24.668 10:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:24.668 10:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:24.668 10:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:24.668 10:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.668 10:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:24.668 10:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:24.668 10:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:24.668 10:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:24.668 10:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:24.668 10:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:24.668 10:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:24.668 10:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:34:24.668 10:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:24.668 10:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:24.668 10:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:24.668 10:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:24.668 10:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:24.668 10:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:24.668 10:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.235 nvme0n1 00:34:25.235 10:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:25.235 10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:25.235 10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:25.235 10:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:25.235 10:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.235 10:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:25.235 10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:25.235 10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:25.235 10:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:25.235 10:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.235 10:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:25.235 10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:25.236 10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:34:25.236 10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:25.236 10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:25.236 10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:25.236 10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:25.236 10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWZjMTBmMzE1ODVhYTU1MjZjMTk2OTY1MjI0Mjk5MDM5MTUxNDk0Y2ZiZmQ4NDg0YzEzNmM3OGM1MTgwYWQ0Y3DwZ3E=: 00:34:25.236 10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:25.236 10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:25.236 10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:25.236 10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWZjMTBmMzE1ODVhYTU1MjZjMTk2OTY1MjI0Mjk5MDM5MTUxNDk0Y2ZiZmQ4NDg0YzEzNmM3OGM1MTgwYWQ0Y3DwZ3E=: 00:34:25.236 10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:25.236 10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:34:25.236 10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:25.236 10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:25.236 10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:25.236 
10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:25.236 10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:25.236 10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:25.236 10:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:25.236 10:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.236 10:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:25.236 10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:25.236 10:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:25.236 10:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:25.236 10:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:25.236 10:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:25.236 10:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:25.236 10:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:25.236 10:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:25.236 10:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:25.236 10:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:25.236 10:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:25.236 10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:25.236 10:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:25.236 10:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.803 nvme0n1 00:34:25.803 10:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:25.803 10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:25.803 10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:25.803 10:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:25.803 10:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.803 10:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:25.803 10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:25.803 10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:25.803 10:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:25.803 10:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.803 10:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:25.803 10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:25.803 10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:25.803 10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:25.803 10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:34:25.803 10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:25.803 10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:25.803 10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:25.803 10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:25.803 10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTgzMjEyMzQ1MDBjNWZiMmM4MDMzZGQ3N2EyYTRmNDVGfqpN: 00:34:25.803 10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTRhMDA0ZjdiYzAzYjE4MDA1ZWVkYzU1NGE4MjZhMmE4ZTcwMTZmYmY0OGE2OTZhNzZjMjE1OWU0ZGQxNWQwOIij5kg=: 00:34:25.803 10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:25.803 10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:25.803 10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTgzMjEyMzQ1MDBjNWZiMmM4MDMzZGQ3N2EyYTRmNDVGfqpN: 00:34:25.803 10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTRhMDA0ZjdiYzAzYjE4MDA1ZWVkYzU1NGE4MjZhMmE4ZTcwMTZmYmY0OGE2OTZhNzZjMjE1OWU0ZGQxNWQwOIij5kg=: ]] 00:34:25.803 10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTRhMDA0ZjdiYzAzYjE4MDA1ZWVkYzU1NGE4MjZhMmE4ZTcwMTZmYmY0OGE2OTZhNzZjMjE1OWU0ZGQxNWQwOIij5kg=: 00:34:25.803 10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:34:25.803 10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:25.803 10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:25.803 10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:25.803 10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:25.803 10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:25.803 10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:25.803 10:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:25.803 10:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.803 10:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:25.803 10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:25.803 10:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:25.803 10:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:25.803 10:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:25.803 10:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:25.803 10:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:25.803 10:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:25.803 10:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:25.803 10:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:25.803 10:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:25.803 10:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:25.803 10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:25.803 10:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:25.803 10:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.061 nvme0n1 00:34:26.061 10:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:26.061 10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:26.061 10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:26.061 10:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:26.061 10:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.061 10:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:26.061 10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:26.061 10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:26.061 10:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:26.061 10:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.061 10:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:26.061 10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:26.061 10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:34:26.061 10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:26.061 10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:26.061 10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:26.061 10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:26.061 10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzU2ZjQ1YjMzY2MyN2ExMjJmYjg3NWU4ZTUwYzEyZWMxMzUxOTJkMTZhN2U1Y2Q3Td4DpQ==: 00:34:26.061 10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGQwYTAzYzAyYjk5YjU3MTdhZjQyNzY3ZTRhMzVjZmIzMWJhZTg5ZDNhNzRiNWI5oV/GiA==: 00:34:26.061 10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:26.061 10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:26.061 10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzU2ZjQ1YjMzY2MyN2ExMjJmYjg3NWU4ZTUwYzEyZWMxMzUxOTJkMTZhN2U1Y2Q3Td4DpQ==: 00:34:26.061 10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGQwYTAzYzAyYjk5YjU3MTdhZjQyNzY3ZTRhMzVjZmIzMWJhZTg5ZDNhNzRiNWI5oV/GiA==: ]] 00:34:26.061 10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGQwYTAzYzAyYjk5YjU3MTdhZjQyNzY3ZTRhMzVjZmIzMWJhZTg5ZDNhNzRiNWI5oV/GiA==: 00:34:26.061 10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:34:26.061 10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:26.061 10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:26.061 10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:26.061 10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:26.061 10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
00:34:26.061 10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:26.061 10:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:26.061 10:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.061 10:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:26.061 10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:26.061 10:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:26.061 10:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:26.061 10:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:26.061 10:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:26.061 10:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:26.061 10:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:26.061 10:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:26.061 10:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:26.061 10:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:26.061 10:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:26.061 10:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:26.061 10:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:26.061 10:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.320 nvme0n1 00:34:26.320 10:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:26.320 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:26.320 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:26.320 10:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:26.320 10:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.320 10:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:26.320 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:26.320 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:26.320 10:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:26.320 10:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.320 10:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:26.320 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:26.320 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:34:26.320 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:26.320 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:26.320 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:26.320 10:43:11 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:34:26.320 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDhkYWYzOGUxMDM0OTNkN2MzYzQyMGViYzc5YjRiMDScupjS: 00:34:26.320 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTY3OWNlZGNmY2NkYTFmN2IyODU5NDFmNDNlNjg1ZTYz1r0q: 00:34:26.320 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:26.320 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:26.320 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDhkYWYzOGUxMDM0OTNkN2MzYzQyMGViYzc5YjRiMDScupjS: 00:34:26.320 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTY3OWNlZGNmY2NkYTFmN2IyODU5NDFmNDNlNjg1ZTYz1r0q: ]] 00:34:26.320 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTY3OWNlZGNmY2NkYTFmN2IyODU5NDFmNDNlNjg1ZTYz1r0q: 00:34:26.320 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:34:26.320 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:26.320 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:26.320 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:26.320 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:26.320 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:26.320 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:26.320 10:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:26.320 10:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.320 10:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:26.320 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:26.320 10:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:26.320 10:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:26.320 10:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:26.320 10:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:26.320 10:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:26.320 10:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:26.320 10:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:26.320 10:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:26.320 10:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:26.320 10:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:26.320 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:26.320 10:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:26.320 10:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.579 nvme0n1 00:34:26.579 10:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:26.579 10:43:11 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:26.579 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:26.579 10:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:26.579 10:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.579 10:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:26.579 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:26.579 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:26.579 10:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:26.579 10:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.579 10:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:26.579 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:26.579 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:34:26.579 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:26.579 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:26.579 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:26.579 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:26.579 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWEwYjIwNDhkY2Y5M2VlMjRmNGFhYTgyNTFjZmJjZDYzOTk2NzQ4ODc1MzY3YWY0qpEjUg==: 00:34:26.579 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjI5NWQ0ZjJkYzIwZjkwNDc4NDNkYjY5ZTU2MjZmY2RTq37m: 00:34:26.579 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:26.579 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:26.579 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWEwYjIwNDhkY2Y5M2VlMjRmNGFhYTgyNTFjZmJjZDYzOTk2NzQ4ODc1MzY3YWY0qpEjUg==: 00:34:26.580 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjI5NWQ0ZjJkYzIwZjkwNDc4NDNkYjY5ZTU2MjZmY2RTq37m: ]] 00:34:26.580 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjI5NWQ0ZjJkYzIwZjkwNDc4NDNkYjY5ZTU2MjZmY2RTq37m: 00:34:26.580 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:34:26.580 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:26.580 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:26.580 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:26.580 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:26.580 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:26.580 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:26.580 10:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:26.580 10:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.580 10:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:26.580 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:26.580 10:43:11 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:34:26.580 10:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:26.580 10:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:26.580 10:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:26.580 10:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:26.580 10:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:26.580 10:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:26.580 10:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:26.580 10:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:26.580 10:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:26.580 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:26.580 10:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:26.580 10:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.839 nvme0n1 00:34:26.839 10:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:26.839 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:26.839 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:26.839 10:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:26.839 10:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.839 10:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:26.839 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:26.839 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:26.839 10:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:26.839 10:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.839 10:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:26.839 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:26.839 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:34:26.839 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:26.839 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:26.839 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:26.839 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:26.839 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWZjMTBmMzE1ODVhYTU1MjZjMTk2OTY1MjI0Mjk5MDM5MTUxNDk0Y2ZiZmQ4NDg0YzEzNmM3OGM1MTgwYWQ0Y3DwZ3E=: 00:34:26.839 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:26.839 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:26.839 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:26.839 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YWZjMTBmMzE1ODVhYTU1MjZjMTk2OTY1MjI0Mjk5MDM5MTUxNDk0Y2ZiZmQ4NDg0YzEzNmM3OGM1MTgwYWQ0Y3DwZ3E=: 00:34:26.839 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:26.839 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:34:26.839 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:26.839 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:26.839 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:26.839 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:26.839 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:26.839 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:26.839 10:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:26.839 10:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.839 10:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:26.839 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:26.839 10:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:26.839 10:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:26.839 10:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:26.839 10:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:26.839 10:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:26.839 10:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:26.839 10:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:26.839 10:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:26.839 10:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:26.839 10:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:26.839 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:26.839 10:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:26.839 10:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.839 nvme0n1 00:34:26.839 10:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:26.839 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:26.839 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:26.839 10:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:26.839 10:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.839 10:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:27.098 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:27.098 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:27.098 10:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:34:27.098 10:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.098 10:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:27.098 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:27.098 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:27.098 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:34:27.098 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:27.098 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:27.098 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:27.098 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:27.098 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTgzMjEyMzQ1MDBjNWZiMmM4MDMzZGQ3N2EyYTRmNDVGfqpN: 00:34:27.098 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTRhMDA0ZjdiYzAzYjE4MDA1ZWVkYzU1NGE4MjZhMmE4ZTcwMTZmYmY0OGE2OTZhNzZjMjE1OWU0ZGQxNWQwOIij5kg=: 00:34:27.098 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:27.098 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:27.098 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTgzMjEyMzQ1MDBjNWZiMmM4MDMzZGQ3N2EyYTRmNDVGfqpN: 00:34:27.098 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTRhMDA0ZjdiYzAzYjE4MDA1ZWVkYzU1NGE4MjZhMmE4ZTcwMTZmYmY0OGE2OTZhNzZjMjE1OWU0ZGQxNWQwOIij5kg=: ]] 00:34:27.098 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTRhMDA0ZjdiYzAzYjE4MDA1ZWVkYzU1NGE4MjZhMmE4ZTcwMTZmYmY0OGE2OTZhNzZjMjE1OWU0ZGQxNWQwOIij5kg=: 00:34:27.098 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:34:27.098 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:27.098 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:27.098 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:27.098 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:27.098 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:27.098 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:27.098 10:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:27.098 10:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.098 10:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:27.098 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:27.098 10:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:27.098 10:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:27.098 10:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:27.098 10:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:27.098 10:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:27.098 10:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:34:27.098 10:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:27.098 10:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:27.098 10:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:27.098 10:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:27.098 10:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:27.098 10:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:27.098 10:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.098 nvme0n1 00:34:27.098 10:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:27.098 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:27.098 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:27.098 10:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:27.098 10:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.098 10:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:27.358 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:27.358 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:27.358 10:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:27.358 10:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.358 10:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:27.358 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:27.358 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:34:27.358 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:27.358 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:27.358 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:27.358 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:27.358 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzU2ZjQ1YjMzY2MyN2ExMjJmYjg3NWU4ZTUwYzEyZWMxMzUxOTJkMTZhN2U1Y2Q3Td4DpQ==: 00:34:27.358 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGQwYTAzYzAyYjk5YjU3MTdhZjQyNzY3ZTRhMzVjZmIzMWJhZTg5ZDNhNzRiNWI5oV/GiA==: 00:34:27.358 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:27.358 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:27.358 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzU2ZjQ1YjMzY2MyN2ExMjJmYjg3NWU4ZTUwYzEyZWMxMzUxOTJkMTZhN2U1Y2Q3Td4DpQ==: 00:34:27.358 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGQwYTAzYzAyYjk5YjU3MTdhZjQyNzY3ZTRhMzVjZmIzMWJhZTg5ZDNhNzRiNWI5oV/GiA==: ]] 00:34:27.358 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGQwYTAzYzAyYjk5YjU3MTdhZjQyNzY3ZTRhMzVjZmIzMWJhZTg5ZDNhNzRiNWI5oV/GiA==: 00:34:27.358 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
00:34:27.358 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:27.358 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:27.358 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:27.358 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:27.358 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:27.358 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:27.358 10:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:27.358 10:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.358 10:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:27.358 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:27.358 10:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:27.358 10:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:27.358 10:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:27.358 10:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:27.358 10:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:27.358 10:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:27.358 10:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:27.358 10:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:27.358 10:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:27.358 10:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:27.358 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:27.358 10:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:27.358 10:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.358 nvme0n1 00:34:27.358 10:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:27.358 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:27.358 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:27.358 10:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:27.358 10:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.358 10:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:27.617 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:27.617 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:27.617 10:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:27.617 10:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.617 10:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:27.617 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:34:27.617 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:34:27.617 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:27.617 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:27.617 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:27.617 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:27.617 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDhkYWYzOGUxMDM0OTNkN2MzYzQyMGViYzc5YjRiMDScupjS: 00:34:27.617 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTY3OWNlZGNmY2NkYTFmN2IyODU5NDFmNDNlNjg1ZTYz1r0q: 00:34:27.617 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:27.617 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:27.617 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDhkYWYzOGUxMDM0OTNkN2MzYzQyMGViYzc5YjRiMDScupjS: 00:34:27.617 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTY3OWNlZGNmY2NkYTFmN2IyODU5NDFmNDNlNjg1ZTYz1r0q: ]] 00:34:27.617 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTY3OWNlZGNmY2NkYTFmN2IyODU5NDFmNDNlNjg1ZTYz1r0q: 00:34:27.617 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:34:27.617 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:27.617 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:27.617 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:27.617 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:27.617 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:27.617 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:27.617 10:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:27.617 10:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.617 10:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:27.617 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:27.617 10:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:27.617 10:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:27.617 10:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:27.617 10:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:27.617 10:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:27.617 10:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:27.617 10:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:27.617 10:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:27.617 10:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:27.617 10:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:27.617 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:27.617 10:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:27.617 10:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.617 nvme0n1 00:34:27.617 10:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:27.617 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:27.617 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:27.617 10:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:27.617 10:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.617 10:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:27.617 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:27.617 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:27.617 10:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:27.617 10:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.875 10:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:27.875 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:27.875 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:34:27.875 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:27.875 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:27.875 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:27.875 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:27.875 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWEwYjIwNDhkY2Y5M2VlMjRmNGFhYTgyNTFjZmJjZDYzOTk2NzQ4ODc1MzY3YWY0qpEjUg==: 00:34:27.875 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjI5NWQ0ZjJkYzIwZjkwNDc4NDNkYjY5ZTU2MjZmY2RTq37m: 00:34:27.875 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:27.875 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:27.875 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWEwYjIwNDhkY2Y5M2VlMjRmNGFhYTgyNTFjZmJjZDYzOTk2NzQ4ODc1MzY3YWY0qpEjUg==: 00:34:27.875 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjI5NWQ0ZjJkYzIwZjkwNDc4NDNkYjY5ZTU2MjZmY2RTq37m: ]] 00:34:27.875 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjI5NWQ0ZjJkYzIwZjkwNDc4NDNkYjY5ZTU2MjZmY2RTq37m: 00:34:27.875 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:34:27.875 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:27.875 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:27.875 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:27.875 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:27.875 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:27.875 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:27.875 10:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:27.875 10:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.875 10:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:27.875 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:27.875 10:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:27.875 10:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:27.875 10:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:27.875 10:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:27.875 10:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:27.875 10:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:27.875 10:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:27.875 10:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:27.875 10:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:27.875 10:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:27.875 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:27.875 10:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:27.875 10:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.875 nvme0n1 00:34:27.875 10:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:27.875 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:27.875 10:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:27.875 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:27.875 10:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.875 10:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:27.875 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:27.875 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:27.875 10:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:27.875 10:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.134 10:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:28.134 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:28.134 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:34:28.134 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:28.134 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:28.134 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:28.134 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:28.134 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YWZjMTBmMzE1ODVhYTU1MjZjMTk2OTY1MjI0Mjk5MDM5MTUxNDk0Y2ZiZmQ4NDg0YzEzNmM3OGM1MTgwYWQ0Y3DwZ3E=: 00:34:28.134 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:28.134 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:28.134 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:28.134 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWZjMTBmMzE1ODVhYTU1MjZjMTk2OTY1MjI0Mjk5MDM5MTUxNDk0Y2ZiZmQ4NDg0YzEzNmM3OGM1MTgwYWQ0Y3DwZ3E=: 00:34:28.134 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:28.134 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:34:28.134 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:28.134 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:28.134 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:28.134 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:28.134 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:28.134 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:28.134 10:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:28.134 10:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.134 10:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:28.134 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:28.134 10:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:28.134 10:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:28.134 10:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:28.134 10:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:28.134 10:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:28.134 10:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:28.134 10:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:28.134 10:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:28.134 10:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:28.134 10:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:28.134 10:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:28.134 10:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:28.134 10:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.134 nvme0n1 00:34:28.134 10:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:28.134 10:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:28.134 10:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:28.134 10:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:28.134 10:43:13 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.134 10:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:28.134 10:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:28.134 10:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:28.134 10:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:28.134 10:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.411 10:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:28.411 10:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:28.411 10:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:28.411 10:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:34:28.411 10:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:28.411 10:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:28.411 10:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:28.411 10:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:28.411 10:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTgzMjEyMzQ1MDBjNWZiMmM4MDMzZGQ3N2EyYTRmNDVGfqpN: 00:34:28.411 10:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTRhMDA0ZjdiYzAzYjE4MDA1ZWVkYzU1NGE4MjZhMmE4ZTcwMTZmYmY0OGE2OTZhNzZjMjE1OWU0ZGQxNWQwOIij5kg=: 00:34:28.411 10:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:28.411 10:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:28.411 10:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTgzMjEyMzQ1MDBjNWZiMmM4MDMzZGQ3N2EyYTRmNDVGfqpN: 00:34:28.411 10:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTRhMDA0ZjdiYzAzYjE4MDA1ZWVkYzU1NGE4MjZhMmE4ZTcwMTZmYmY0OGE2OTZhNzZjMjE1OWU0ZGQxNWQwOIij5kg=: ]] 00:34:28.411 10:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTRhMDA0ZjdiYzAzYjE4MDA1ZWVkYzU1NGE4MjZhMmE4ZTcwMTZmYmY0OGE2OTZhNzZjMjE1OWU0ZGQxNWQwOIij5kg=: 00:34:28.411 10:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:34:28.411 10:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:28.411 10:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:28.411 10:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:28.411 10:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:28.411 10:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:28.411 10:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:28.411 10:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:28.411 10:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.412 10:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:28.412 10:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:28.412 10:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:28.412 10:43:13 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:34:28.412 10:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:28.412 10:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:28.412 10:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:28.412 10:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:28.412 10:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:28.412 10:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:28.412 10:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:28.412 10:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:28.412 10:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:28.412 10:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:28.412 10:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.675 nvme0n1 00:34:28.675 10:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:28.675 10:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:28.675 10:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:28.675 10:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:28.675 10:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.675 10:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:28.675 10:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:28.675 10:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:28.675 10:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:28.675 10:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.675 10:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:28.675 10:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:28.675 10:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:34:28.675 10:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:28.675 10:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:28.675 10:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:28.675 10:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:28.675 10:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzU2ZjQ1YjMzY2MyN2ExMjJmYjg3NWU4ZTUwYzEyZWMxMzUxOTJkMTZhN2U1Y2Q3Td4DpQ==: 00:34:28.675 10:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGQwYTAzYzAyYjk5YjU3MTdhZjQyNzY3ZTRhMzVjZmIzMWJhZTg5ZDNhNzRiNWI5oV/GiA==: 00:34:28.675 10:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:28.675 10:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:28.675 10:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MzU2ZjQ1YjMzY2MyN2ExMjJmYjg3NWU4ZTUwYzEyZWMxMzUxOTJkMTZhN2U1Y2Q3Td4DpQ==: 00:34:28.675 10:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGQwYTAzYzAyYjk5YjU3MTdhZjQyNzY3ZTRhMzVjZmIzMWJhZTg5ZDNhNzRiNWI5oV/GiA==: ]] 00:34:28.675 10:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGQwYTAzYzAyYjk5YjU3MTdhZjQyNzY3ZTRhMzVjZmIzMWJhZTg5ZDNhNzRiNWI5oV/GiA==: 00:34:28.675 10:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:34:28.675 10:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:28.675 10:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:28.675 10:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:28.675 10:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:28.675 10:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:28.675 10:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:28.675 10:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:28.675 10:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.675 10:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:28.675 10:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:28.675 10:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:28.675 10:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:28.675 10:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:28.675 10:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:28.675 10:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:28.675 10:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:28.675 10:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:28.675 10:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:28.675 10:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:28.675 10:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:28.675 10:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:28.675 10:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:28.675 10:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.934 nvme0n1 00:34:28.934 10:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:28.934 10:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:28.934 10:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:28.934 10:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:28.934 10:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.934 10:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:28.934 10:43:13 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:28.934 10:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:28.934 10:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:28.934 10:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.934 10:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:28.934 10:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:28.934 10:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:34:28.934 10:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:28.934 10:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:28.934 10:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:28.934 10:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:28.934 10:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDhkYWYzOGUxMDM0OTNkN2MzYzQyMGViYzc5YjRiMDScupjS: 00:34:28.934 10:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTY3OWNlZGNmY2NkYTFmN2IyODU5NDFmNDNlNjg1ZTYz1r0q: 00:34:28.934 10:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:28.934 10:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:28.934 10:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDhkYWYzOGUxMDM0OTNkN2MzYzQyMGViYzc5YjRiMDScupjS: 00:34:28.934 10:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTY3OWNlZGNmY2NkYTFmN2IyODU5NDFmNDNlNjg1ZTYz1r0q: ]] 00:34:28.934 10:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTY3OWNlZGNmY2NkYTFmN2IyODU5NDFmNDNlNjg1ZTYz1r0q: 00:34:28.934 10:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:34:28.934 10:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:28.934 10:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:28.934 10:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:28.934 10:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:28.934 10:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:28.934 10:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:28.934 10:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:28.934 10:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.934 10:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:28.934 10:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:28.934 10:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:28.934 10:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:28.934 10:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:28.934 10:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:28.934 10:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:28.934 10:43:13 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:28.934 10:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:28.934 10:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:28.934 10:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:28.934 10:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:28.934 10:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:28.934 10:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:28.934 10:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.193 nvme0n1 00:34:29.193 10:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:29.193 10:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:29.193 10:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:29.193 10:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:29.193 10:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.193 10:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:29.193 10:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:29.193 10:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:29.193 10:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:29.193 10:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.193 10:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:29.193 10:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:29.193 10:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:34:29.193 10:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:29.193 10:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:29.193 10:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:29.193 10:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:29.193 10:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWEwYjIwNDhkY2Y5M2VlMjRmNGFhYTgyNTFjZmJjZDYzOTk2NzQ4ODc1MzY3YWY0qpEjUg==: 00:34:29.193 10:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjI5NWQ0ZjJkYzIwZjkwNDc4NDNkYjY5ZTU2MjZmY2RTq37m: 00:34:29.193 10:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:29.193 10:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:29.193 10:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWEwYjIwNDhkY2Y5M2VlMjRmNGFhYTgyNTFjZmJjZDYzOTk2NzQ4ODc1MzY3YWY0qpEjUg==: 00:34:29.193 10:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjI5NWQ0ZjJkYzIwZjkwNDc4NDNkYjY5ZTU2MjZmY2RTq37m: ]] 00:34:29.193 10:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjI5NWQ0ZjJkYzIwZjkwNDc4NDNkYjY5ZTU2MjZmY2RTq37m: 00:34:29.193 10:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:34:29.193 10:43:14 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:29.193 10:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:29.193 10:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:29.193 10:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:29.193 10:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:29.193 10:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:29.193 10:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:29.193 10:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.193 10:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:29.193 10:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:29.193 10:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:29.193 10:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:29.193 10:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:29.193 10:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:29.193 10:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:29.193 10:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:29.193 10:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:29.193 10:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:29.193 10:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:29.193 10:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:29.193 10:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:29.193 10:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:29.193 10:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.452 nvme0n1 00:34:29.452 10:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:29.452 10:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:29.452 10:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:29.452 10:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:29.452 10:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.452 10:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:29.452 10:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:29.452 10:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:29.452 10:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:29.452 10:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.711 10:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:29.711 10:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:34:29.711 10:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:34:29.711 10:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:29.711 10:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:29.711 10:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:29.711 10:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:29.711 10:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWZjMTBmMzE1ODVhYTU1MjZjMTk2OTY1MjI0Mjk5MDM5MTUxNDk0Y2ZiZmQ4NDg0YzEzNmM3OGM1MTgwYWQ0Y3DwZ3E=: 00:34:29.711 10:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:29.711 10:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:29.711 10:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:29.711 10:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWZjMTBmMzE1ODVhYTU1MjZjMTk2OTY1MjI0Mjk5MDM5MTUxNDk0Y2ZiZmQ4NDg0YzEzNmM3OGM1MTgwYWQ0Y3DwZ3E=: 00:34:29.711 10:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:29.711 10:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:34:29.711 10:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:29.711 10:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:29.711 10:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:29.711 10:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:29.711 10:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:29.711 10:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:29.711 10:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:29.711 10:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.711 10:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:29.711 10:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:29.711 10:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:29.711 10:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:29.711 10:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:29.711 10:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:29.711 10:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:29.711 10:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:29.711 10:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:29.711 10:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:29.711 10:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:29.711 10:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:29.711 10:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:29.711 10:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:34:29.711 10:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.970 nvme0n1 00:34:29.970 10:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:29.970 10:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:29.970 10:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:29.970 10:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:29.970 10:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.970 10:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:29.970 10:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:29.970 10:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:29.970 10:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:29.970 10:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.970 10:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:29.970 10:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:29.970 10:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:29.970 10:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:34:29.970 10:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:29.970 10:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:29.970 10:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:29.970 10:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:29.970 10:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTgzMjEyMzQ1MDBjNWZiMmM4MDMzZGQ3N2EyYTRmNDVGfqpN: 00:34:29.970 10:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTRhMDA0ZjdiYzAzYjE4MDA1ZWVkYzU1NGE4MjZhMmE4ZTcwMTZmYmY0OGE2OTZhNzZjMjE1OWU0ZGQxNWQwOIij5kg=: 00:34:29.970 10:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:29.970 10:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:29.970 10:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTgzMjEyMzQ1MDBjNWZiMmM4MDMzZGQ3N2EyYTRmNDVGfqpN: 00:34:29.970 10:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTRhMDA0ZjdiYzAzYjE4MDA1ZWVkYzU1NGE4MjZhMmE4ZTcwMTZmYmY0OGE2OTZhNzZjMjE1OWU0ZGQxNWQwOIij5kg=: ]] 00:34:29.970 10:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTRhMDA0ZjdiYzAzYjE4MDA1ZWVkYzU1NGE4MjZhMmE4ZTcwMTZmYmY0OGE2OTZhNzZjMjE1OWU0ZGQxNWQwOIij5kg=: 00:34:29.970 10:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:34:29.970 10:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:29.970 10:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:29.970 10:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:29.970 10:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:29.970 10:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:29.970 10:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:34:29.970 10:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:29.970 10:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.970 10:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:29.970 10:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:29.970 10:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:29.970 10:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:29.970 10:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:29.970 10:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:29.970 10:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:29.970 10:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:29.970 10:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:29.970 10:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:29.970 10:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:29.970 10:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:29.970 10:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:29.970 10:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:29.970 10:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.229 nvme0n1 00:34:30.229 10:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:30.229 10:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:30.229 10:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:30.229 10:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:30.229 10:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.229 10:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:30.229 10:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:30.229 10:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:30.229 10:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:30.229 10:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.488 10:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:30.488 10:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:30.488 10:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:34:30.488 10:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:30.488 10:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:30.488 10:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:30.488 10:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:30.488 10:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MzU2ZjQ1YjMzY2MyN2ExMjJmYjg3NWU4ZTUwYzEyZWMxMzUxOTJkMTZhN2U1Y2Q3Td4DpQ==: 00:34:30.488 10:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGQwYTAzYzAyYjk5YjU3MTdhZjQyNzY3ZTRhMzVjZmIzMWJhZTg5ZDNhNzRiNWI5oV/GiA==: 00:34:30.488 10:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:30.488 10:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:30.488 10:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzU2ZjQ1YjMzY2MyN2ExMjJmYjg3NWU4ZTUwYzEyZWMxMzUxOTJkMTZhN2U1Y2Q3Td4DpQ==: 00:34:30.488 10:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGQwYTAzYzAyYjk5YjU3MTdhZjQyNzY3ZTRhMzVjZmIzMWJhZTg5ZDNhNzRiNWI5oV/GiA==: ]] 00:34:30.488 10:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGQwYTAzYzAyYjk5YjU3MTdhZjQyNzY3ZTRhMzVjZmIzMWJhZTg5ZDNhNzRiNWI5oV/GiA==: 00:34:30.488 10:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:34:30.488 10:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:30.488 10:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:30.488 10:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:30.488 10:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:30.488 10:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:30.488 10:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:30.488 10:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:30.488 10:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.488 10:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:30.488 10:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:30.488 10:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:30.488 10:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:30.488 10:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:30.488 10:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:30.489 10:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:30.489 10:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:30.489 10:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:30.489 10:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:30.489 10:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:30.489 10:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:30.489 10:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:30.489 10:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:30.489 10:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.748 nvme0n1 00:34:30.748 10:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:30.748 10:43:15 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:30.748 10:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:30.748 10:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:30.748 10:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.748 10:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:30.748 10:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:30.748 10:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:30.748 10:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:30.748 10:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.748 10:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:30.748 10:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:30.748 10:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:34:30.748 10:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:30.748 10:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:30.748 10:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:30.748 10:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:30.748 10:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDhkYWYzOGUxMDM0OTNkN2MzYzQyMGViYzc5YjRiMDScupjS: 00:34:30.748 10:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTY3OWNlZGNmY2NkYTFmN2IyODU5NDFmNDNlNjg1ZTYz1r0q: 00:34:30.748 10:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:30.748 10:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:30.748 10:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDhkYWYzOGUxMDM0OTNkN2MzYzQyMGViYzc5YjRiMDScupjS: 00:34:30.748 10:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTY3OWNlZGNmY2NkYTFmN2IyODU5NDFmNDNlNjg1ZTYz1r0q: ]] 00:34:30.748 10:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTY3OWNlZGNmY2NkYTFmN2IyODU5NDFmNDNlNjg1ZTYz1r0q: 00:34:30.748 10:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:34:30.748 10:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:30.748 10:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:30.748 10:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:30.748 10:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:30.748 10:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:30.748 10:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:30.748 10:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:30.748 10:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.748 10:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:30.748 10:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:30.748 10:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:34:30.748 10:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:30.749 10:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:30.749 10:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:30.749 10:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:30.749 10:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:30.749 10:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:30.749 10:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:30.749 10:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:30.749 10:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:30.749 10:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:30.749 10:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:30.749 10:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.316 nvme0n1 00:34:31.316 10:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:31.316 10:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:31.316 10:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:31.316 10:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:31.316 10:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.316 10:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:31.316 10:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:31.316 10:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:31.316 10:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:31.316 10:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.316 10:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:31.316 10:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:31.316 10:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:34:31.316 10:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:31.316 10:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:31.316 10:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:31.316 10:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:31.316 10:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWEwYjIwNDhkY2Y5M2VlMjRmNGFhYTgyNTFjZmJjZDYzOTk2NzQ4ODc1MzY3YWY0qpEjUg==: 00:34:31.316 10:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjI5NWQ0ZjJkYzIwZjkwNDc4NDNkYjY5ZTU2MjZmY2RTq37m: 00:34:31.316 10:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:31.316 10:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:31.316 10:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:YWEwYjIwNDhkY2Y5M2VlMjRmNGFhYTgyNTFjZmJjZDYzOTk2NzQ4ODc1MzY3YWY0qpEjUg==: 00:34:31.316 10:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjI5NWQ0ZjJkYzIwZjkwNDc4NDNkYjY5ZTU2MjZmY2RTq37m: ]] 00:34:31.316 10:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjI5NWQ0ZjJkYzIwZjkwNDc4NDNkYjY5ZTU2MjZmY2RTq37m: 00:34:31.316 10:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:34:31.316 10:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:31.316 10:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:31.316 10:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:31.316 10:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:31.316 10:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:31.316 10:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:31.316 10:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:31.316 10:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.316 10:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:31.316 10:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:31.317 10:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:31.317 10:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:31.317 10:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:31.317 10:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:31.317 10:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:31.317 10:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:31.317 10:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:31.317 10:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:31.317 10:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:31.317 10:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:31.317 10:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:31.317 10:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:31.317 10:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.575 nvme0n1 00:34:31.575 10:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:31.575 10:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:31.575 10:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:31.575 10:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:31.575 10:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.834 10:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:31.834 10:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:34:31.834 10:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:31.834 10:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:31.834 10:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.834 10:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:31.834 10:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:31.834 10:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:34:31.834 10:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:31.834 10:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:31.834 10:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:31.834 10:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:31.834 10:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWZjMTBmMzE1ODVhYTU1MjZjMTk2OTY1MjI0Mjk5MDM5MTUxNDk0Y2ZiZmQ4NDg0YzEzNmM3OGM1MTgwYWQ0Y3DwZ3E=: 00:34:31.834 10:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:31.834 10:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:31.834 10:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:31.834 10:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWZjMTBmMzE1ODVhYTU1MjZjMTk2OTY1MjI0Mjk5MDM5MTUxNDk0Y2ZiZmQ4NDg0YzEzNmM3OGM1MTgwYWQ0Y3DwZ3E=: 00:34:31.834 10:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:31.834 10:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:34:31.834 10:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:31.834 10:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:31.834 10:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:31.834 10:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:31.834 10:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:31.834 10:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:31.834 10:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:31.834 10:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.834 10:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:31.834 10:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:31.834 10:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:31.834 10:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:31.834 10:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:31.834 10:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:31.834 10:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:31.834 10:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:31.834 10:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:31.834 10:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
00:34:31.834 10:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:31.834 10:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:31.834 10:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:31.834 10:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:31.834 10:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.094 nvme0n1 00:34:32.094 10:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:32.094 10:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:32.094 10:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:32.094 10:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:32.094 10:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.094 10:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:32.094 10:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:32.094 10:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:32.094 10:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:32.094 10:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.094 10:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:32.094 10:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:32.094 10:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:32.094 10:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:34:32.094 10:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:32.094 10:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:32.094 10:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:32.094 10:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:32.094 10:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTgzMjEyMzQ1MDBjNWZiMmM4MDMzZGQ3N2EyYTRmNDVGfqpN: 00:34:32.094 10:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTRhMDA0ZjdiYzAzYjE4MDA1ZWVkYzU1NGE4MjZhMmE4ZTcwMTZmYmY0OGE2OTZhNzZjMjE1OWU0ZGQxNWQwOIij5kg=: 00:34:32.094 10:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:32.094 10:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:32.094 10:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTgzMjEyMzQ1MDBjNWZiMmM4MDMzZGQ3N2EyYTRmNDVGfqpN: 00:34:32.094 10:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTRhMDA0ZjdiYzAzYjE4MDA1ZWVkYzU1NGE4MjZhMmE4ZTcwMTZmYmY0OGE2OTZhNzZjMjE1OWU0ZGQxNWQwOIij5kg=: ]] 00:34:32.094 10:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTRhMDA0ZjdiYzAzYjE4MDA1ZWVkYzU1NGE4MjZhMmE4ZTcwMTZmYmY0OGE2OTZhNzZjMjE1OWU0ZGQxNWQwOIij5kg=: 00:34:32.094 10:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:34:32.094 10:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
00:34:32.094 10:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:32.094 10:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:32.094 10:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:32.094 10:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:32.094 10:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:32.094 10:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:32.094 10:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.094 10:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:32.094 10:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:32.094 10:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:32.094 10:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:32.094 10:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:32.353 10:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:32.353 10:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:32.353 10:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:32.353 10:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:32.353 10:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:32.353 10:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:32.353 10:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:32.353 10:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:32.353 10:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:32.353 10:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.983 nvme0n1 00:34:32.983 10:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:32.983 10:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:32.983 10:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:32.983 10:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:32.983 10:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.983 10:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:32.983 10:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:32.983 10:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:32.983 10:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:32.983 10:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.983 10:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:32.983 10:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:32.983 10:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:34:32.983 10:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:32.983 10:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:32.983 10:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:32.983 10:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:32.983 10:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzU2ZjQ1YjMzY2MyN2ExMjJmYjg3NWU4ZTUwYzEyZWMxMzUxOTJkMTZhN2U1Y2Q3Td4DpQ==: 00:34:32.983 10:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGQwYTAzYzAyYjk5YjU3MTdhZjQyNzY3ZTRhMzVjZmIzMWJhZTg5ZDNhNzRiNWI5oV/GiA==: 00:34:32.983 10:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:32.983 10:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:32.983 10:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzU2ZjQ1YjMzY2MyN2ExMjJmYjg3NWU4ZTUwYzEyZWMxMzUxOTJkMTZhN2U1Y2Q3Td4DpQ==: 00:34:32.983 10:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGQwYTAzYzAyYjk5YjU3MTdhZjQyNzY3ZTRhMzVjZmIzMWJhZTg5ZDNhNzRiNWI5oV/GiA==: ]] 00:34:32.983 10:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGQwYTAzYzAyYjk5YjU3MTdhZjQyNzY3ZTRhMzVjZmIzMWJhZTg5ZDNhNzRiNWI5oV/GiA==: 00:34:32.983 10:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:34:32.983 10:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:32.983 10:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:32.983 10:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:32.983 10:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:32.983 10:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:32.983 10:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:32.983 10:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:32.983 10:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.983 10:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:32.983 10:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:32.983 10:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:32.983 10:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:32.983 10:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:32.983 10:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:32.983 10:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:32.983 10:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:32.983 10:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:32.983 10:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:32.983 10:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:32.983 10:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:32.983 10:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:32.983 10:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:32.983 10:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.551 nvme0n1 00:34:33.551 10:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:33.551 10:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:33.551 10:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:33.551 10:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:33.551 10:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.551 10:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:33.551 10:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:33.551 10:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:33.551 10:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:33.551 10:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.551 10:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:33.551 10:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:33.551 10:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:34:33.551 10:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:33.551 10:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:33.551 10:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:33.551 10:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:33.551 10:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDhkYWYzOGUxMDM0OTNkN2MzYzQyMGViYzc5YjRiMDScupjS: 00:34:33.551 10:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTY3OWNlZGNmY2NkYTFmN2IyODU5NDFmNDNlNjg1ZTYz1r0q: 00:34:33.551 10:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:33.551 10:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:33.551 10:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDhkYWYzOGUxMDM0OTNkN2MzYzQyMGViYzc5YjRiMDScupjS: 00:34:33.551 10:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTY3OWNlZGNmY2NkYTFmN2IyODU5NDFmNDNlNjg1ZTYz1r0q: ]] 00:34:33.551 10:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTY3OWNlZGNmY2NkYTFmN2IyODU5NDFmNDNlNjg1ZTYz1r0q: 00:34:33.551 10:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:34:33.551 10:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:33.551 10:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:33.551 10:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:33.551 10:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:33.551 10:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:33.551 10:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:34:33.551 10:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:33.551 10:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.551 10:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:33.551 10:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:33.551 10:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:33.551 10:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:33.551 10:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:33.551 10:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:33.551 10:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:33.551 10:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:33.551 10:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:33.551 10:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:33.551 10:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:33.551 10:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:33.551 10:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:33.551 10:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:33.551 10:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.119 nvme0n1 00:34:34.119 10:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:34.119 10:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:34.119 10:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:34.119 10:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:34.119 10:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.119 10:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:34.119 10:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:34.119 10:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:34.119 10:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:34.119 10:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.119 10:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:34.119 10:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:34.119 10:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:34:34.119 10:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:34.119 10:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:34.119 10:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:34.119 10:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:34.119 10:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YWEwYjIwNDhkY2Y5M2VlMjRmNGFhYTgyNTFjZmJjZDYzOTk2NzQ4ODc1MzY3YWY0qpEjUg==: 00:34:34.119 10:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjI5NWQ0ZjJkYzIwZjkwNDc4NDNkYjY5ZTU2MjZmY2RTq37m: 00:34:34.119 10:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:34.119 10:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:34.119 10:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWEwYjIwNDhkY2Y5M2VlMjRmNGFhYTgyNTFjZmJjZDYzOTk2NzQ4ODc1MzY3YWY0qpEjUg==: 00:34:34.119 10:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjI5NWQ0ZjJkYzIwZjkwNDc4NDNkYjY5ZTU2MjZmY2RTq37m: ]] 00:34:34.119 10:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjI5NWQ0ZjJkYzIwZjkwNDc4NDNkYjY5ZTU2MjZmY2RTq37m: 00:34:34.119 10:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:34:34.119 10:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:34.119 10:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:34.119 10:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:34.119 10:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:34.119 10:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:34.119 10:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:34.119 10:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:34.119 10:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.119 10:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:34.119 10:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:34.119 10:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:34.119 10:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:34.119 10:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:34.119 10:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:34.119 10:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:34.119 10:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:34.119 10:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:34.119 10:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:34.119 10:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:34.119 10:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:34.119 10:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:34.119 10:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:34.119 10:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.687 nvme0n1 00:34:34.687 10:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:34.687 10:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:34:34.688 10:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:34.688 10:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:34.688 10:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.688 10:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:34.688 10:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:34.688 10:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:34.688 10:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:34.688 10:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.946 10:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:34.946 10:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:34.946 10:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:34:34.946 10:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:34.946 10:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:34.946 10:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:34.946 10:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:34.946 10:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWZjMTBmMzE1ODVhYTU1MjZjMTk2OTY1MjI0Mjk5MDM5MTUxNDk0Y2ZiZmQ4NDg0YzEzNmM3OGM1MTgwYWQ0Y3DwZ3E=: 00:34:34.946 10:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:34.946 10:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:34.946 10:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:34.946 10:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWZjMTBmMzE1ODVhYTU1MjZjMTk2OTY1MjI0Mjk5MDM5MTUxNDk0Y2ZiZmQ4NDg0YzEzNmM3OGM1MTgwYWQ0Y3DwZ3E=: 00:34:34.946 10:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:34.946 10:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:34:34.946 10:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:34.946 10:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:34.946 10:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:34.946 10:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:34.946 10:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:34.946 10:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:34.946 10:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:34.946 10:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.946 10:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:34.946 10:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:34.946 10:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:34.946 10:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:34.946 10:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:34.946 10:43:19 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:34.946 10:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:34.946 10:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:34.946 10:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:34.946 10:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:34.946 10:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:34.946 10:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:34.946 10:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:34.946 10:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:34.946 10:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.514 nvme0n1 00:34:35.514 10:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:35.514 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:35.514 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:35.514 10:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:35.514 10:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.514 10:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:35.514 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:35.514 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:35.514 10:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:35.514 10:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.514 10:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:35.514 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:35.514 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:35.514 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:35.514 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:34:35.514 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:35.514 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:35.514 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:35.514 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:35.514 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTgzMjEyMzQ1MDBjNWZiMmM4MDMzZGQ3N2EyYTRmNDVGfqpN: 00:34:35.514 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTRhMDA0ZjdiYzAzYjE4MDA1ZWVkYzU1NGE4MjZhMmE4ZTcwMTZmYmY0OGE2OTZhNzZjMjE1OWU0ZGQxNWQwOIij5kg=: 00:34:35.514 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:35.514 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:35.514 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NTgzMjEyMzQ1MDBjNWZiMmM4MDMzZGQ3N2EyYTRmNDVGfqpN: 00:34:35.514 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTRhMDA0ZjdiYzAzYjE4MDA1ZWVkYzU1NGE4MjZhMmE4ZTcwMTZmYmY0OGE2OTZhNzZjMjE1OWU0ZGQxNWQwOIij5kg=: ]] 00:34:35.514 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTRhMDA0ZjdiYzAzYjE4MDA1ZWVkYzU1NGE4MjZhMmE4ZTcwMTZmYmY0OGE2OTZhNzZjMjE1OWU0ZGQxNWQwOIij5kg=: 00:34:35.514 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:34:35.514 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:35.514 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:35.514 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:35.515 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:35.515 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:35.515 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:35.515 10:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:35.515 10:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.515 10:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:35.515 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:35.515 10:43:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:35.515 10:43:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:35.515 10:43:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:35.515 10:43:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:35.515 10:43:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:35.515 10:43:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:35.515 10:43:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:35.515 10:43:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:35.515 10:43:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:35.515 10:43:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:35.515 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:35.515 10:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:35.515 10:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.773 nvme0n1 00:34:35.773 10:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:35.773 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:35.773 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:35.773 10:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:35.773 10:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.773 10:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:35.773 10:43:20 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:35.773 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:35.773 10:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:35.773 10:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.773 10:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:35.773 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:35.773 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:34:35.773 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:35.773 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:35.773 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:35.773 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:35.773 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzU2ZjQ1YjMzY2MyN2ExMjJmYjg3NWU4ZTUwYzEyZWMxMzUxOTJkMTZhN2U1Y2Q3Td4DpQ==: 00:34:35.773 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGQwYTAzYzAyYjk5YjU3MTdhZjQyNzY3ZTRhMzVjZmIzMWJhZTg5ZDNhNzRiNWI5oV/GiA==: 00:34:35.773 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:35.773 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:35.773 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzU2ZjQ1YjMzY2MyN2ExMjJmYjg3NWU4ZTUwYzEyZWMxMzUxOTJkMTZhN2U1Y2Q3Td4DpQ==: 00:34:35.773 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGQwYTAzYzAyYjk5YjU3MTdhZjQyNzY3ZTRhMzVjZmIzMWJhZTg5ZDNhNzRiNWI5oV/GiA==: ]] 00:34:35.773 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGQwYTAzYzAyYjk5YjU3MTdhZjQyNzY3ZTRhMzVjZmIzMWJhZTg5ZDNhNzRiNWI5oV/GiA==: 00:34:35.773 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:34:35.773 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:35.773 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:35.773 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:35.773 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:35.773 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:35.774 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:35.774 10:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:35.774 10:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.774 10:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:35.774 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:35.774 10:43:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:35.774 10:43:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:35.774 10:43:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:35.774 10:43:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:35.774 10:43:20 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:35.774 10:43:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:35.774 10:43:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:35.774 10:43:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:35.774 10:43:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:35.774 10:43:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:35.774 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:35.774 10:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:35.774 10:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.774 nvme0n1 00:34:35.774 10:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:35.774 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:35.774 10:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:35.774 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:35.774 10:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.774 10:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:36.032 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:36.032 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:36.032 10:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:36.032 10:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.032 10:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:36.032 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:36.032 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:34:36.032 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:36.032 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:36.032 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:36.032 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:36.032 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDhkYWYzOGUxMDM0OTNkN2MzYzQyMGViYzc5YjRiMDScupjS: 00:34:36.032 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTY3OWNlZGNmY2NkYTFmN2IyODU5NDFmNDNlNjg1ZTYz1r0q: 00:34:36.032 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:36.032 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:36.032 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDhkYWYzOGUxMDM0OTNkN2MzYzQyMGViYzc5YjRiMDScupjS: 00:34:36.032 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTY3OWNlZGNmY2NkYTFmN2IyODU5NDFmNDNlNjg1ZTYz1r0q: ]] 00:34:36.032 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTY3OWNlZGNmY2NkYTFmN2IyODU5NDFmNDNlNjg1ZTYz1r0q: 00:34:36.032 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:34:36.032 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:36.032 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:36.032 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:36.032 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:36.032 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:36.032 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:36.032 10:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:36.032 10:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.032 10:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:36.032 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:36.032 10:43:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:36.032 10:43:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:36.032 10:43:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:36.032 10:43:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:36.032 10:43:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:36.032 10:43:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:36.032 10:43:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:36.033 10:43:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:36.033 10:43:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:36.033 10:43:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:36.033 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:36.033 10:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:36.033 10:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.033 nvme0n1 00:34:36.033 10:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:36.033 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:36.033 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:36.033 10:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:36.033 10:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.033 10:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:36.033 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:36.033 10:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:36.033 10:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:36.033 10:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:36.292 10:43:21 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWEwYjIwNDhkY2Y5M2VlMjRmNGFhYTgyNTFjZmJjZDYzOTk2NzQ4ODc1MzY3YWY0qpEjUg==: 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjI5NWQ0ZjJkYzIwZjkwNDc4NDNkYjY5ZTU2MjZmY2RTq37m: 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWEwYjIwNDhkY2Y5M2VlMjRmNGFhYTgyNTFjZmJjZDYzOTk2NzQ4ODc1MzY3YWY0qpEjUg==: 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjI5NWQ0ZjJkYzIwZjkwNDc4NDNkYjY5ZTU2MjZmY2RTq37m: ]] 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjI5NWQ0ZjJkYzIwZjkwNDc4NDNkYjY5ZTU2MjZmY2RTq37m: 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:36.292 10:43:21 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.292 nvme0n1 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWZjMTBmMzE1ODVhYTU1MjZjMTk2OTY1MjI0Mjk5MDM5MTUxNDk0Y2ZiZmQ4NDg0YzEzNmM3OGM1MTgwYWQ0Y3DwZ3E=: 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWZjMTBmMzE1ODVhYTU1MjZjMTk2OTY1MjI0Mjk5MDM5MTUxNDk0Y2ZiZmQ4NDg0YzEzNmM3OGM1MTgwYWQ0Y3DwZ3E=: 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:36.292 10:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.551 nvme0n1 00:34:36.551 10:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:36.551 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:36.551 10:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:36.551 10:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.551 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:36.551 10:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:36.551 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:36.551 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:36.551 10:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:36.551 10:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.551 10:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:36.551 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:36.551 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:36.551 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:34:36.551 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:36.551 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:36.551 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:36.551 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:36.551 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NTgzMjEyMzQ1MDBjNWZiMmM4MDMzZGQ3N2EyYTRmNDVGfqpN: 00:34:36.551 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTRhMDA0ZjdiYzAzYjE4MDA1ZWVkYzU1NGE4MjZhMmE4ZTcwMTZmYmY0OGE2OTZhNzZjMjE1OWU0ZGQxNWQwOIij5kg=: 00:34:36.551 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:36.551 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:36.551 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTgzMjEyMzQ1MDBjNWZiMmM4MDMzZGQ3N2EyYTRmNDVGfqpN: 00:34:36.551 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTRhMDA0ZjdiYzAzYjE4MDA1ZWVkYzU1NGE4MjZhMmE4ZTcwMTZmYmY0OGE2OTZhNzZjMjE1OWU0ZGQxNWQwOIij5kg=: ]] 00:34:36.551 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTRhMDA0ZjdiYzAzYjE4MDA1ZWVkYzU1NGE4MjZhMmE4ZTcwMTZmYmY0OGE2OTZhNzZjMjE1OWU0ZGQxNWQwOIij5kg=: 00:34:36.551 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:34:36.551 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:36.551 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:36.551 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:36.551 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:36.551 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:36.551 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:36.551 10:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:36.551 10:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.551 10:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:36.551 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:36.551 10:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:36.551 10:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:36.551 10:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:36.551 10:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:36.552 10:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:36.552 10:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:36.552 10:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:36.552 10:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:36.552 10:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:36.552 10:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:36.552 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:36.552 10:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:36.552 10:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.811 nvme0n1 00:34:36.811 10:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:36.811 
10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:36.811 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:36.811 10:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:36.811 10:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.811 10:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:36.811 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:36.811 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:36.811 10:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:36.811 10:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.811 10:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:36.811 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:36.811 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:34:36.811 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:36.811 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:36.811 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:36.811 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:36.811 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzU2ZjQ1YjMzY2MyN2ExMjJmYjg3NWU4ZTUwYzEyZWMxMzUxOTJkMTZhN2U1Y2Q3Td4DpQ==: 00:34:36.811 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGQwYTAzYzAyYjk5YjU3MTdhZjQyNzY3ZTRhMzVjZmIzMWJhZTg5ZDNhNzRiNWI5oV/GiA==: 00:34:36.811 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:36.811 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:36.811 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzU2ZjQ1YjMzY2MyN2ExMjJmYjg3NWU4ZTUwYzEyZWMxMzUxOTJkMTZhN2U1Y2Q3Td4DpQ==: 00:34:36.811 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGQwYTAzYzAyYjk5YjU3MTdhZjQyNzY3ZTRhMzVjZmIzMWJhZTg5ZDNhNzRiNWI5oV/GiA==: ]] 00:34:36.811 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGQwYTAzYzAyYjk5YjU3MTdhZjQyNzY3ZTRhMzVjZmIzMWJhZTg5ZDNhNzRiNWI5oV/GiA==: 00:34:36.811 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:34:36.811 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:36.811 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:36.811 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:36.811 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:36.811 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:36.811 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:36.811 10:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:36.811 10:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.811 10:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:36.811 10:43:21 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:36.811 10:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:36.811 10:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:36.811 10:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:36.811 10:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:36.811 10:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:36.811 10:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:36.811 10:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:36.811 10:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:36.811 10:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:36.811 10:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:36.811 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:36.811 10:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:36.811 10:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.070 nvme0n1 00:34:37.070 10:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:37.070 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:37.070 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:37.070 10:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:37.070 10:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.070 10:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:37.070 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:37.070 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:37.070 10:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:37.070 10:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.070 10:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:37.070 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:37.070 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:34:37.070 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:37.070 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:37.070 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:37.070 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:37.070 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDhkYWYzOGUxMDM0OTNkN2MzYzQyMGViYzc5YjRiMDScupjS: 00:34:37.070 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTY3OWNlZGNmY2NkYTFmN2IyODU5NDFmNDNlNjg1ZTYz1r0q: 00:34:37.070 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:37.070 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:34:37.070 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDhkYWYzOGUxMDM0OTNkN2MzYzQyMGViYzc5YjRiMDScupjS: 00:34:37.070 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTY3OWNlZGNmY2NkYTFmN2IyODU5NDFmNDNlNjg1ZTYz1r0q: ]] 00:34:37.070 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTY3OWNlZGNmY2NkYTFmN2IyODU5NDFmNDNlNjg1ZTYz1r0q: 00:34:37.070 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:34:37.070 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:37.070 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:37.070 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:37.070 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:37.070 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:37.070 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:37.070 10:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:37.070 10:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.070 10:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:37.070 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:37.070 10:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:37.070 10:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:37.070 10:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:37.070 10:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:37.070 10:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:37.070 10:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:37.070 10:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:37.070 10:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:37.070 10:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:37.070 10:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:37.070 10:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:37.070 10:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:37.070 10:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.329 nvme0n1 00:34:37.329 10:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:37.329 10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:37.329 10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:37.329 10:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:37.329 10:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.329 10:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:37.329 10:43:22 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:37.329 10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:37.329 10:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:37.329 10:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.329 10:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:37.329 10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:37.329 10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:34:37.329 10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:37.330 10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:37.330 10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:37.330 10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:37.330 10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWEwYjIwNDhkY2Y5M2VlMjRmNGFhYTgyNTFjZmJjZDYzOTk2NzQ4ODc1MzY3YWY0qpEjUg==: 00:34:37.330 10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjI5NWQ0ZjJkYzIwZjkwNDc4NDNkYjY5ZTU2MjZmY2RTq37m: 00:34:37.330 10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:37.330 10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:37.330 10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWEwYjIwNDhkY2Y5M2VlMjRmNGFhYTgyNTFjZmJjZDYzOTk2NzQ4ODc1MzY3YWY0qpEjUg==: 00:34:37.330 10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjI5NWQ0ZjJkYzIwZjkwNDc4NDNkYjY5ZTU2MjZmY2RTq37m: ]] 00:34:37.330 10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjI5NWQ0ZjJkYzIwZjkwNDc4NDNkYjY5ZTU2MjZmY2RTq37m: 00:34:37.330 10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:34:37.330 10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:37.330 10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:37.330 10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:37.330 10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:37.330 10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:37.330 10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:37.330 10:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:37.330 10:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.330 10:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:37.330 10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:37.330 10:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:37.330 10:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:37.330 10:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:37.330 10:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:37.330 10:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
00:34:37.330 10:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:37.330 10:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:37.330 10:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:37.330 10:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:37.330 10:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:37.330 10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:37.330 10:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:37.330 10:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.589 nvme0n1 00:34:37.589 10:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:37.589 10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:37.589 10:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:37.589 10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:37.589 10:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.589 10:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:37.589 10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:37.589 10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:37.589 10:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:37.589 10:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.589 10:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:37.589 10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:37.589 10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:34:37.589 10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:37.589 10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:37.589 10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:37.589 10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:37.589 10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWZjMTBmMzE1ODVhYTU1MjZjMTk2OTY1MjI0Mjk5MDM5MTUxNDk0Y2ZiZmQ4NDg0YzEzNmM3OGM1MTgwYWQ0Y3DwZ3E=: 00:34:37.589 10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:37.589 10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:37.589 10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:37.589 10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWZjMTBmMzE1ODVhYTU1MjZjMTk2OTY1MjI0Mjk5MDM5MTUxNDk0Y2ZiZmQ4NDg0YzEzNmM3OGM1MTgwYWQ0Y3DwZ3E=: 00:34:37.589 10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:37.589 10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:34:37.589 10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:37.590 10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:37.590 
10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:37.590 10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:37.590 10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:37.590 10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:37.590 10:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:37.590 10:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.590 10:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:37.590 10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:37.590 10:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:37.590 10:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:37.590 10:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:37.590 10:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:37.590 10:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:37.590 10:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:37.590 10:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:37.590 10:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:37.590 10:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:37.590 10:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:37.590 10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:37.590 10:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:37.590 10:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.849 nvme0n1 00:34:37.849 10:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:37.849 10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:37.849 10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:37.849 10:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:37.849 10:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.849 10:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:37.849 10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:37.849 10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:37.849 10:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:37.849 10:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.849 10:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:37.849 10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:37.849 10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:37.849 10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:34:37.849 10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:37.849 10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:37.849 10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:37.849 10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:37.849 10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTgzMjEyMzQ1MDBjNWZiMmM4MDMzZGQ3N2EyYTRmNDVGfqpN: 00:34:37.850 10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTRhMDA0ZjdiYzAzYjE4MDA1ZWVkYzU1NGE4MjZhMmE4ZTcwMTZmYmY0OGE2OTZhNzZjMjE1OWU0ZGQxNWQwOIij5kg=: 00:34:37.850 10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:37.850 10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:37.850 10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTgzMjEyMzQ1MDBjNWZiMmM4MDMzZGQ3N2EyYTRmNDVGfqpN: 00:34:37.850 10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTRhMDA0ZjdiYzAzYjE4MDA1ZWVkYzU1NGE4MjZhMmE4ZTcwMTZmYmY0OGE2OTZhNzZjMjE1OWU0ZGQxNWQwOIij5kg=: ]] 00:34:37.850 10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTRhMDA0ZjdiYzAzYjE4MDA1ZWVkYzU1NGE4MjZhMmE4ZTcwMTZmYmY0OGE2OTZhNzZjMjE1OWU0ZGQxNWQwOIij5kg=: 00:34:37.850 10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:34:37.850 10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:37.850 10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:37.850 10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:37.850 10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:37.850 10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:37.850 10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:37.850 10:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:37.850 10:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.850 10:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:37.850 10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:37.850 10:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:37.850 10:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:37.850 10:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:37.850 10:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:37.850 10:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:37.850 10:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:37.850 10:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:37.850 10:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:37.850 10:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:37.850 10:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:37.850 10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:37.850 10:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:37.850 10:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.109 nvme0n1 00:34:38.109 10:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:38.109 10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:38.109 10:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:38.109 10:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:38.109 10:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.109 10:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:38.109 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:38.109 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:38.109 10:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:38.109 10:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.109 10:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:38.109 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:38.109 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:34:38.109 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:38.109 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:38.109 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:38.109 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:38.109 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzU2ZjQ1YjMzY2MyN2ExMjJmYjg3NWU4ZTUwYzEyZWMxMzUxOTJkMTZhN2U1Y2Q3Td4DpQ==: 00:34:38.110 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGQwYTAzYzAyYjk5YjU3MTdhZjQyNzY3ZTRhMzVjZmIzMWJhZTg5ZDNhNzRiNWI5oV/GiA==: 00:34:38.110 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:38.110 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:38.110 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzU2ZjQ1YjMzY2MyN2ExMjJmYjg3NWU4ZTUwYzEyZWMxMzUxOTJkMTZhN2U1Y2Q3Td4DpQ==: 00:34:38.110 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGQwYTAzYzAyYjk5YjU3MTdhZjQyNzY3ZTRhMzVjZmIzMWJhZTg5ZDNhNzRiNWI5oV/GiA==: ]] 00:34:38.110 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGQwYTAzYzAyYjk5YjU3MTdhZjQyNzY3ZTRhMzVjZmIzMWJhZTg5ZDNhNzRiNWI5oV/GiA==: 00:34:38.110 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:34:38.110 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:38.110 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:38.110 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:38.110 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:38.110 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:38.110 10:43:23 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:38.110 10:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:38.110 10:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.110 10:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:38.110 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:38.110 10:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:38.110 10:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:38.110 10:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:38.110 10:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:38.110 10:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:38.110 10:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:38.110 10:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:38.110 10:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:38.110 10:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:38.110 10:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:38.110 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:38.110 10:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:38.110 10:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.371 nvme0n1 00:34:38.371 10:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:38.371 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:38.371 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:38.371 10:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:38.371 10:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.371 10:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:38.630 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:38.630 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:38.630 10:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:38.630 10:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.630 10:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:38.630 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:38.630 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:34:38.630 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:38.630 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:38.630 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:38.630 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
00:34:38.630 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDhkYWYzOGUxMDM0OTNkN2MzYzQyMGViYzc5YjRiMDScupjS: 00:34:38.630 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTY3OWNlZGNmY2NkYTFmN2IyODU5NDFmNDNlNjg1ZTYz1r0q: 00:34:38.630 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:38.630 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:38.630 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDhkYWYzOGUxMDM0OTNkN2MzYzQyMGViYzc5YjRiMDScupjS: 00:34:38.630 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTY3OWNlZGNmY2NkYTFmN2IyODU5NDFmNDNlNjg1ZTYz1r0q: ]] 00:34:38.630 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTY3OWNlZGNmY2NkYTFmN2IyODU5NDFmNDNlNjg1ZTYz1r0q: 00:34:38.630 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:34:38.630 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:38.630 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:38.630 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:38.630 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:38.630 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:38.630 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:38.630 10:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:38.630 10:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.630 10:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:38.630 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:38.630 10:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:38.630 10:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:38.630 10:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:38.630 10:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:38.630 10:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:38.630 10:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:38.630 10:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:38.630 10:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:38.630 10:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:38.630 10:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:38.630 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:38.630 10:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:38.630 10:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.888 nvme0n1 00:34:38.888 10:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:38.888 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq 
-r '.[].name' 00:34:38.888 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:38.888 10:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:38.888 10:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.888 10:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:38.888 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:38.888 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:38.888 10:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:38.888 10:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.888 10:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:38.888 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:38.888 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:34:38.888 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:38.888 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:38.888 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:38.888 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:38.888 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWEwYjIwNDhkY2Y5M2VlMjRmNGFhYTgyNTFjZmJjZDYzOTk2NzQ4ODc1MzY3YWY0qpEjUg==: 00:34:38.888 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjI5NWQ0ZjJkYzIwZjkwNDc4NDNkYjY5ZTU2MjZmY2RTq37m: 00:34:38.888 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:38.888 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:38.888 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWEwYjIwNDhkY2Y5M2VlMjRmNGFhYTgyNTFjZmJjZDYzOTk2NzQ4ODc1MzY3YWY0qpEjUg==: 00:34:38.888 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjI5NWQ0ZjJkYzIwZjkwNDc4NDNkYjY5ZTU2MjZmY2RTq37m: ]] 00:34:38.888 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjI5NWQ0ZjJkYzIwZjkwNDc4NDNkYjY5ZTU2MjZmY2RTq37m: 00:34:38.888 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:34:38.888 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:38.888 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:38.888 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:38.888 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:38.888 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:38.888 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:38.888 10:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:38.888 10:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.888 10:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:38.888 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:38.888 10:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local 
ip 00:34:38.888 10:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:38.888 10:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:38.888 10:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:38.888 10:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:38.888 10:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:38.888 10:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:38.888 10:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:38.888 10:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:38.888 10:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:38.888 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:38.888 10:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:38.888 10:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.147 nvme0n1 00:34:39.147 10:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:39.147 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:39.147 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:39.147 10:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:39.147 10:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.147 10:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:39.147 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:39.147 10:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:39.147 10:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:39.147 10:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.147 10:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:39.147 10:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:39.147 10:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:34:39.147 10:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:39.147 10:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:39.147 10:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:39.147 10:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:39.147 10:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWZjMTBmMzE1ODVhYTU1MjZjMTk2OTY1MjI0Mjk5MDM5MTUxNDk0Y2ZiZmQ4NDg0YzEzNmM3OGM1MTgwYWQ0Y3DwZ3E=: 00:34:39.147 10:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:39.147 10:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:39.147 10:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:39.147 10:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YWZjMTBmMzE1ODVhYTU1MjZjMTk2OTY1MjI0Mjk5MDM5MTUxNDk0Y2ZiZmQ4NDg0YzEzNmM3OGM1MTgwYWQ0Y3DwZ3E=: 00:34:39.147 10:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:39.147 10:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:34:39.147 10:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:39.147 10:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:39.147 10:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:39.147 10:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:39.147 10:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:39.147 10:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:39.147 10:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:39.147 10:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.147 10:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:39.147 10:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:39.147 10:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:39.147 10:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:39.147 10:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:39.147 10:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:39.147 10:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:39.147 10:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:39.147 10:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:39.147 10:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:39.147 10:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:39.147 10:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:39.147 10:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:39.147 10:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:39.147 10:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.405 nvme0n1 00:34:39.405 10:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:39.405 10:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:39.405 10:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:39.405 10:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:39.405 10:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.405 10:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:39.405 10:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:39.405 10:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:39.405 10:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:34:39.405 10:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.405 10:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:39.405 10:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:39.405 10:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:39.405 10:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:34:39.405 10:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:39.405 10:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:39.405 10:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:39.405 10:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:39.405 10:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTgzMjEyMzQ1MDBjNWZiMmM4MDMzZGQ3N2EyYTRmNDVGfqpN: 00:34:39.405 10:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTRhMDA0ZjdiYzAzYjE4MDA1ZWVkYzU1NGE4MjZhMmE4ZTcwMTZmYmY0OGE2OTZhNzZjMjE1OWU0ZGQxNWQwOIij5kg=: 00:34:39.405 10:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:39.405 10:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:39.405 10:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTgzMjEyMzQ1MDBjNWZiMmM4MDMzZGQ3N2EyYTRmNDVGfqpN: 00:34:39.405 10:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTRhMDA0ZjdiYzAzYjE4MDA1ZWVkYzU1NGE4MjZhMmE4ZTcwMTZmYmY0OGE2OTZhNzZjMjE1OWU0ZGQxNWQwOIij5kg=: ]] 00:34:39.405 10:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTRhMDA0ZjdiYzAzYjE4MDA1ZWVkYzU1NGE4MjZhMmE4ZTcwMTZmYmY0OGE2OTZhNzZjMjE1OWU0ZGQxNWQwOIij5kg=: 00:34:39.405 10:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:34:39.405 10:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:39.405 10:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:39.405 10:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:39.405 10:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:39.405 10:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:39.405 10:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:39.405 10:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:39.405 10:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.405 10:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:39.405 10:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:39.405 10:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:39.405 10:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:39.405 10:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:39.405 10:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:39.405 10:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:39.405 10:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:34:39.405 10:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:39.405 10:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:39.405 10:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:39.405 10:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:39.405 10:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:39.405 10:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:39.405 10:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.972 nvme0n1 00:34:39.972 10:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:39.972 10:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:39.972 10:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:39.972 10:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:39.972 10:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.972 10:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:39.972 10:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:39.972 10:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:39.972 10:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:39.972 10:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.972 10:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:39.972 10:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:39.972 10:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:34:39.972 10:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:39.972 10:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:39.972 10:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:39.972 10:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:39.972 10:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzU2ZjQ1YjMzY2MyN2ExMjJmYjg3NWU4ZTUwYzEyZWMxMzUxOTJkMTZhN2U1Y2Q3Td4DpQ==: 00:34:39.972 10:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGQwYTAzYzAyYjk5YjU3MTdhZjQyNzY3ZTRhMzVjZmIzMWJhZTg5ZDNhNzRiNWI5oV/GiA==: 00:34:39.972 10:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:39.972 10:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:39.972 10:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzU2ZjQ1YjMzY2MyN2ExMjJmYjg3NWU4ZTUwYzEyZWMxMzUxOTJkMTZhN2U1Y2Q3Td4DpQ==: 00:34:39.972 10:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGQwYTAzYzAyYjk5YjU3MTdhZjQyNzY3ZTRhMzVjZmIzMWJhZTg5ZDNhNzRiNWI5oV/GiA==: ]] 00:34:39.972 10:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGQwYTAzYzAyYjk5YjU3MTdhZjQyNzY3ZTRhMzVjZmIzMWJhZTg5ZDNhNzRiNWI5oV/GiA==: 00:34:39.972 10:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
00:34:39.972 10:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:39.972 10:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:39.972 10:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:39.972 10:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:39.972 10:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:39.972 10:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:39.972 10:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:39.972 10:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.972 10:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:39.972 10:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:39.972 10:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:39.972 10:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:39.972 10:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:39.972 10:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:39.972 10:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:39.972 10:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:39.972 10:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:39.972 10:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:39.972 10:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:39.972 10:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:39.972 10:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:39.972 10:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:39.972 10:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.230 nvme0n1 00:34:40.230 10:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:40.230 10:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:40.230 10:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:40.230 10:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:40.230 10:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.230 10:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:40.489 10:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:40.489 10:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:40.489 10:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:40.489 10:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.489 10:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:40.489 10:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:34:40.489 10:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:34:40.489 10:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:40.489 10:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:40.489 10:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:40.489 10:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:40.489 10:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDhkYWYzOGUxMDM0OTNkN2MzYzQyMGViYzc5YjRiMDScupjS: 00:34:40.489 10:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTY3OWNlZGNmY2NkYTFmN2IyODU5NDFmNDNlNjg1ZTYz1r0q: 00:34:40.489 10:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:40.489 10:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:40.489 10:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDhkYWYzOGUxMDM0OTNkN2MzYzQyMGViYzc5YjRiMDScupjS: 00:34:40.489 10:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTY3OWNlZGNmY2NkYTFmN2IyODU5NDFmNDNlNjg1ZTYz1r0q: ]] 00:34:40.489 10:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTY3OWNlZGNmY2NkYTFmN2IyODU5NDFmNDNlNjg1ZTYz1r0q: 00:34:40.489 10:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:34:40.489 10:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:40.489 10:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:40.489 10:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:40.489 10:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:40.489 10:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:40.489 10:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:40.489 10:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:40.489 10:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.489 10:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:40.489 10:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:40.489 10:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:40.489 10:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:40.489 10:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:40.489 10:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:40.489 10:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:40.489 10:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:40.489 10:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:40.489 10:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:40.489 10:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:40.489 10:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:40.489 10:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:40.489 10:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:40.489 10:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.748 nvme0n1 00:34:40.748 10:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:40.748 10:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:40.748 10:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:40.748 10:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:40.748 10:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.748 10:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:40.748 10:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:40.748 10:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:40.748 10:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:40.748 10:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.748 10:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:40.748 10:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:40.748 10:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:34:40.748 10:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:40.748 10:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:40.748 10:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:40.748 10:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:40.748 10:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWEwYjIwNDhkY2Y5M2VlMjRmNGFhYTgyNTFjZmJjZDYzOTk2NzQ4ODc1MzY3YWY0qpEjUg==: 00:34:40.748 10:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjI5NWQ0ZjJkYzIwZjkwNDc4NDNkYjY5ZTU2MjZmY2RTq37m: 00:34:40.748 10:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:40.748 10:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:40.748 10:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWEwYjIwNDhkY2Y5M2VlMjRmNGFhYTgyNTFjZmJjZDYzOTk2NzQ4ODc1MzY3YWY0qpEjUg==: 00:34:40.748 10:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjI5NWQ0ZjJkYzIwZjkwNDc4NDNkYjY5ZTU2MjZmY2RTq37m: ]] 00:34:40.748 10:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjI5NWQ0ZjJkYzIwZjkwNDc4NDNkYjY5ZTU2MjZmY2RTq37m: 00:34:40.748 10:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:34:40.749 10:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:40.749 10:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:40.749 10:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:40.749 10:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:40.749 10:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:40.749 10:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:40.749 10:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:40.749 10:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.749 10:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:40.749 10:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:40.749 10:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:40.749 10:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:40.749 10:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:40.749 10:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:40.749 10:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:40.749 10:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:40.749 10:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:40.749 10:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:40.749 10:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:40.749 10:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:40.749 10:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:40.749 10:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:40.749 10:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.315 nvme0n1 00:34:41.315 10:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:41.315 10:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:41.315 10:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:41.315 10:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:41.315 10:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.315 10:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:41.315 10:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:41.315 10:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:41.315 10:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:41.315 10:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.315 10:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:41.315 10:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:41.315 10:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:34:41.315 10:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:41.315 10:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:41.315 10:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:41.315 10:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:41.315 10:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YWZjMTBmMzE1ODVhYTU1MjZjMTk2OTY1MjI0Mjk5MDM5MTUxNDk0Y2ZiZmQ4NDg0YzEzNmM3OGM1MTgwYWQ0Y3DwZ3E=: 00:34:41.315 10:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:41.315 10:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:41.315 10:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:41.315 10:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWZjMTBmMzE1ODVhYTU1MjZjMTk2OTY1MjI0Mjk5MDM5MTUxNDk0Y2ZiZmQ4NDg0YzEzNmM3OGM1MTgwYWQ0Y3DwZ3E=: 00:34:41.315 10:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:41.315 10:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:34:41.315 10:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:41.315 10:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:41.315 10:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:41.315 10:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:41.316 10:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:41.316 10:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:41.316 10:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:41.316 10:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.316 10:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:41.316 10:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:41.316 10:43:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:41.316 10:43:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:41.316 10:43:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:41.316 10:43:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:41.316 10:43:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:41.316 10:43:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:41.316 10:43:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:41.316 10:43:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:41.316 10:43:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:41.316 10:43:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:41.316 10:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:41.316 10:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:41.316 10:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.883 nvme0n1 00:34:41.883 10:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:41.883 10:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:41.883 10:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:41.883 10:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:41.883 10:43:26 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.883 10:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:41.883 10:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:41.883 10:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:41.883 10:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:41.883 10:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.883 10:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:41.883 10:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:41.883 10:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:41.883 10:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:34:41.883 10:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:41.883 10:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:41.883 10:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:41.883 10:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:41.883 10:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTgzMjEyMzQ1MDBjNWZiMmM4MDMzZGQ3N2EyYTRmNDVGfqpN: 00:34:41.883 10:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTRhMDA0ZjdiYzAzYjE4MDA1ZWVkYzU1NGE4MjZhMmE4ZTcwMTZmYmY0OGE2OTZhNzZjMjE1OWU0ZGQxNWQwOIij5kg=: 00:34:41.883 10:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:41.883 10:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:41.883 10:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTgzMjEyMzQ1MDBjNWZiMmM4MDMzZGQ3N2EyYTRmNDVGfqpN: 00:34:41.883 10:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTRhMDA0ZjdiYzAzYjE4MDA1ZWVkYzU1NGE4MjZhMmE4ZTcwMTZmYmY0OGE2OTZhNzZjMjE1OWU0ZGQxNWQwOIij5kg=: ]] 00:34:41.883 10:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTRhMDA0ZjdiYzAzYjE4MDA1ZWVkYzU1NGE4MjZhMmE4ZTcwMTZmYmY0OGE2OTZhNzZjMjE1OWU0ZGQxNWQwOIij5kg=: 00:34:41.883 10:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:34:41.883 10:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:41.883 10:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:41.883 10:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:41.883 10:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:41.883 10:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:41.883 10:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:41.883 10:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:41.883 10:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.883 10:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:41.883 10:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:41.883 10:43:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:41.883 10:43:26 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:34:41.883 10:43:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:41.883 10:43:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:41.883 10:43:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:41.883 10:43:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:41.883 10:43:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:41.883 10:43:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:41.883 10:43:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:41.883 10:43:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:41.883 10:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:41.883 10:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:41.883 10:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.451 nvme0n1 00:34:42.451 10:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:42.451 10:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:42.451 10:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:42.451 10:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:42.451 10:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.451 10:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:42.451 10:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:42.451 10:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:42.451 10:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:42.451 10:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.451 10:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:42.451 10:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:42.451 10:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:34:42.451 10:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:42.451 10:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:42.452 10:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:42.452 10:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:42.452 10:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzU2ZjQ1YjMzY2MyN2ExMjJmYjg3NWU4ZTUwYzEyZWMxMzUxOTJkMTZhN2U1Y2Q3Td4DpQ==: 00:34:42.452 10:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGQwYTAzYzAyYjk5YjU3MTdhZjQyNzY3ZTRhMzVjZmIzMWJhZTg5ZDNhNzRiNWI5oV/GiA==: 00:34:42.452 10:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:42.452 10:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:42.452 10:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MzU2ZjQ1YjMzY2MyN2ExMjJmYjg3NWU4ZTUwYzEyZWMxMzUxOTJkMTZhN2U1Y2Q3Td4DpQ==: 00:34:42.452 10:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGQwYTAzYzAyYjk5YjU3MTdhZjQyNzY3ZTRhMzVjZmIzMWJhZTg5ZDNhNzRiNWI5oV/GiA==: ]] 00:34:42.452 10:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGQwYTAzYzAyYjk5YjU3MTdhZjQyNzY3ZTRhMzVjZmIzMWJhZTg5ZDNhNzRiNWI5oV/GiA==: 00:34:42.452 10:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:34:42.452 10:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:42.452 10:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:42.452 10:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:42.452 10:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:42.452 10:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:42.452 10:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:42.452 10:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:42.452 10:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.452 10:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:42.452 10:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:42.452 10:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:42.452 10:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:42.452 10:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:42.452 10:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:42.452 10:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:42.452 10:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:42.452 10:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:42.452 10:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:42.452 10:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:42.452 10:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:42.452 10:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:42.452 10:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:42.452 10:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.019 nvme0n1 00:34:43.019 10:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:43.019 10:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:43.019 10:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:43.019 10:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:43.019 10:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.019 10:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:43.019 10:43:27 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:43.019 10:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:43.019 10:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:43.019 10:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.019 10:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:43.019 10:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:43.019 10:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:34:43.019 10:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:43.019 10:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:43.019 10:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:43.019 10:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:43.019 10:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDhkYWYzOGUxMDM0OTNkN2MzYzQyMGViYzc5YjRiMDScupjS: 00:34:43.019 10:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTY3OWNlZGNmY2NkYTFmN2IyODU5NDFmNDNlNjg1ZTYz1r0q: 00:34:43.019 10:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:43.019 10:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:43.019 10:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDhkYWYzOGUxMDM0OTNkN2MzYzQyMGViYzc5YjRiMDScupjS: 00:34:43.019 10:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTY3OWNlZGNmY2NkYTFmN2IyODU5NDFmNDNlNjg1ZTYz1r0q: ]] 00:34:43.020 10:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTY3OWNlZGNmY2NkYTFmN2IyODU5NDFmNDNlNjg1ZTYz1r0q: 00:34:43.020 10:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:34:43.020 10:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:43.020 10:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:43.020 10:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:43.020 10:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:43.020 10:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:43.020 10:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:43.020 10:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:43.020 10:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.020 10:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:43.020 10:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:43.020 10:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:43.020 10:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:43.020 10:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:43.020 10:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:43.020 10:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:43.020 10:43:27 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:43.020 10:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:43.020 10:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:43.020 10:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:43.020 10:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:43.020 10:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:43.020 10:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:43.020 10:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.586 nvme0n1 00:34:43.586 10:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:43.586 10:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:43.586 10:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:43.586 10:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:43.586 10:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.586 10:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:43.845 10:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:43.845 10:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:43.845 10:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:43.845 10:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.845 10:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:43.845 10:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:43.845 10:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:34:43.845 10:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:43.845 10:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:43.845 10:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:43.845 10:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:43.845 10:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWEwYjIwNDhkY2Y5M2VlMjRmNGFhYTgyNTFjZmJjZDYzOTk2NzQ4ODc1MzY3YWY0qpEjUg==: 00:34:43.845 10:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjI5NWQ0ZjJkYzIwZjkwNDc4NDNkYjY5ZTU2MjZmY2RTq37m: 00:34:43.845 10:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:43.845 10:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:43.845 10:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWEwYjIwNDhkY2Y5M2VlMjRmNGFhYTgyNTFjZmJjZDYzOTk2NzQ4ODc1MzY3YWY0qpEjUg==: 00:34:43.845 10:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjI5NWQ0ZjJkYzIwZjkwNDc4NDNkYjY5ZTU2MjZmY2RTq37m: ]] 00:34:43.845 10:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjI5NWQ0ZjJkYzIwZjkwNDc4NDNkYjY5ZTU2MjZmY2RTq37m: 00:34:43.845 10:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:34:43.845 10:43:28 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:43.845 10:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:43.845 10:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:43.845 10:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:43.845 10:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:43.845 10:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:43.845 10:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:43.845 10:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.845 10:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:43.845 10:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:43.845 10:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:43.845 10:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:43.845 10:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:43.845 10:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:43.845 10:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:43.845 10:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:43.845 10:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:43.845 10:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:43.845 10:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:43.845 10:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:43.845 10:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:43.845 10:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:43.845 10:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.413 nvme0n1 00:34:44.413 10:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:44.413 10:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:44.413 10:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:44.413 10:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:44.413 10:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.413 10:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:44.413 10:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:44.413 10:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:44.413 10:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:44.413 10:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.413 10:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:44.413 10:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:34:44.413 10:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:34:44.413 10:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:44.413 10:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:44.413 10:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:44.413 10:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:44.413 10:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWZjMTBmMzE1ODVhYTU1MjZjMTk2OTY1MjI0Mjk5MDM5MTUxNDk0Y2ZiZmQ4NDg0YzEzNmM3OGM1MTgwYWQ0Y3DwZ3E=: 00:34:44.413 10:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:44.413 10:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:44.413 10:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:44.413 10:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWZjMTBmMzE1ODVhYTU1MjZjMTk2OTY1MjI0Mjk5MDM5MTUxNDk0Y2ZiZmQ4NDg0YzEzNmM3OGM1MTgwYWQ0Y3DwZ3E=: 00:34:44.413 10:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:44.413 10:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:34:44.413 10:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:44.413 10:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:44.413 10:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:44.413 10:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:44.413 10:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:44.413 10:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:44.413 10:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:44.413 10:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.413 10:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:44.413 10:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:44.413 10:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:44.413 10:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:44.413 10:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:44.413 10:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:44.413 10:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:44.413 10:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:44.413 10:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:44.413 10:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:44.413 10:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:44.413 10:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:44.413 10:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:44.413 10:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:34:44.413 10:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.981 nvme0n1 00:34:44.981 10:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:44.981 10:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:44.981 10:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:44.981 10:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:44.981 10:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.981 10:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:44.981 10:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:44.982 10:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:44.982 10:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:44.982 10:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.982 10:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:44.982 10:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:44.982 10:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:44.982 10:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:44.982 10:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:44.982 10:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:44.982 10:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzU2ZjQ1YjMzY2MyN2ExMjJmYjg3NWU4ZTUwYzEyZWMxMzUxOTJkMTZhN2U1Y2Q3Td4DpQ==: 00:34:44.982 10:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGQwYTAzYzAyYjk5YjU3MTdhZjQyNzY3ZTRhMzVjZmIzMWJhZTg5ZDNhNzRiNWI5oV/GiA==: 00:34:44.982 10:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:44.982 10:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:44.982 10:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzU2ZjQ1YjMzY2MyN2ExMjJmYjg3NWU4ZTUwYzEyZWMxMzUxOTJkMTZhN2U1Y2Q3Td4DpQ==: 00:34:44.982 10:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGQwYTAzYzAyYjk5YjU3MTdhZjQyNzY3ZTRhMzVjZmIzMWJhZTg5ZDNhNzRiNWI5oV/GiA==: ]] 00:34:44.982 10:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGQwYTAzYzAyYjk5YjU3MTdhZjQyNzY3ZTRhMzVjZmIzMWJhZTg5ZDNhNzRiNWI5oV/GiA==: 00:34:44.982 10:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:44.982 10:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:44.982 10:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.242 10:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:45.242 10:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:34:45.242 10:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:45.242 10:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:45.242 10:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:45.242 10:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:45.242 
10:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:45.242 10:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:45.242 10:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:45.242 10:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:45.242 10:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:45.242 10:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:45.242 10:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:45.242 10:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:34:45.242 10:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:45.242 10:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:34:45.242 10:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:45.242 10:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:34:45.242 10:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:45.242 10:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:45.242 10:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:45.242 10:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.242 request: 00:34:45.242 { 00:34:45.242 "name": "nvme0", 00:34:45.242 "trtype": "tcp", 00:34:45.242 "traddr": "10.0.0.1", 00:34:45.242 "adrfam": "ipv4", 00:34:45.242 "trsvcid": "4420", 00:34:45.242 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:45.242 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:45.242 "prchk_reftag": false, 00:34:45.242 "prchk_guard": false, 00:34:45.242 "hdgst": false, 00:34:45.242 "ddgst": false, 00:34:45.242 "method": "bdev_nvme_attach_controller", 00:34:45.242 "req_id": 1 00:34:45.242 } 00:34:45.242 Got JSON-RPC error response 00:34:45.242 response: 00:34:45.242 { 00:34:45.242 "code": -5, 00:34:45.242 "message": "Input/output error" 00:34:45.242 } 00:34:45.242 10:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:34:45.242 10:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:34:45.242 10:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:45.242 10:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:45.242 10:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:45.242 10:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:34:45.242 10:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:34:45.242 10:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:45.242 10:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.242 10:43:30 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:45.242 10:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:34:45.242 10:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:34:45.242 10:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:45.242 10:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:45.242 10:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:45.242 10:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:45.242 10:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:45.242 10:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:45.242 10:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:45.242 10:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:45.242 10:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:45.242 10:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:45.242 10:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:45.242 10:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:34:45.242 10:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:45.242 10:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:34:45.242 10:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:45.242 10:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:34:45.242 10:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:45.242 10:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:45.242 10:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:45.242 10:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.242 request: 00:34:45.242 { 00:34:45.242 "name": "nvme0", 00:34:45.242 "trtype": "tcp", 00:34:45.242 "traddr": "10.0.0.1", 00:34:45.242 "adrfam": "ipv4", 00:34:45.242 "trsvcid": "4420", 00:34:45.242 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:45.242 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:45.243 "prchk_reftag": false, 00:34:45.243 "prchk_guard": false, 00:34:45.243 "hdgst": false, 00:34:45.243 "ddgst": false, 00:34:45.243 "dhchap_key": "key2", 00:34:45.243 "method": "bdev_nvme_attach_controller", 00:34:45.243 "req_id": 1 00:34:45.243 } 00:34:45.243 Got JSON-RPC error response 00:34:45.243 response: 00:34:45.243 { 00:34:45.243 "code": -5, 00:34:45.243 "message": "Input/output error" 00:34:45.243 } 00:34:45.243 10:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:34:45.243 10:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:34:45.243 10:43:30 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:45.243 10:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:45.243 10:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:45.243 10:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:34:45.243 10:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:34:45.243 10:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:45.243 10:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.243 10:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:45.243 10:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:34:45.243 10:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:34:45.243 10:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:45.243 10:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:45.243 10:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:45.243 10:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:45.243 10:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:45.243 10:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:45.243 10:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:45.243 10:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:45.243 10:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:45.243 10:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:45.243 10:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:45.243 10:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:34:45.243 10:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:45.243 10:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:34:45.243 10:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:45.243 10:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:34:45.243 10:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:45.243 10:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:45.243 10:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:45.243 10:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.503 request: 00:34:45.503 { 00:34:45.503 "name": "nvme0", 00:34:45.503 "trtype": "tcp", 00:34:45.503 "traddr": "10.0.0.1", 00:34:45.503 "adrfam": "ipv4", 
00:34:45.503 "trsvcid": "4420", 00:34:45.503 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:45.503 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:45.503 "prchk_reftag": false, 00:34:45.503 "prchk_guard": false, 00:34:45.503 "hdgst": false, 00:34:45.503 "ddgst": false, 00:34:45.503 "dhchap_key": "key1", 00:34:45.503 "dhchap_ctrlr_key": "ckey2", 00:34:45.503 "method": "bdev_nvme_attach_controller", 00:34:45.503 "req_id": 1 00:34:45.503 } 00:34:45.503 Got JSON-RPC error response 00:34:45.503 response: 00:34:45.503 { 00:34:45.503 "code": -5, 00:34:45.503 "message": "Input/output error" 00:34:45.503 } 00:34:45.503 10:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:34:45.503 10:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:34:45.503 10:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:45.503 10:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:45.503 10:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:45.503 10:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:34:45.503 10:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:34:45.503 10:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:34:45.503 10:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:45.503 10:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:34:45.503 10:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:45.503 10:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:34:45.503 10:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:45.503 10:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:45.503 rmmod nvme_tcp 00:34:45.503 rmmod nvme_fabrics 00:34:45.503 10:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:45.503 10:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:34:45.503 10:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:34:45.503 10:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 2601769 ']' 00:34:45.503 10:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 2601769 00:34:45.503 10:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 2601769 ']' 00:34:45.503 10:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 2601769 00:34:45.503 10:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:34:45.503 10:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:45.503 10:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2601769 00:34:45.503 10:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:45.503 10:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:45.503 10:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2601769' 00:34:45.503 killing process with pid 2601769 00:34:45.503 10:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 2601769 00:34:45.503 10:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 2601769 00:34:45.762 10:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso 
']' 00:34:45.762 10:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:45.762 10:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:45.762 10:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:45.762 10:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:45.762 10:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:45.762 10:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:45.762 10:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:47.714 10:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:47.714 10:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:34:47.714 10:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:47.714 10:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:34:47.714 10:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:34:47.714 10:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:34:47.714 10:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:47.714 10:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:47.714 10:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:47.714 10:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:47.714 10:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:34:47.714 10:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:34:47.977 10:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:50.507 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:50.507 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:50.507 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:50.507 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:50.507 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:50.507 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:50.507 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:50.766 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:50.766 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:50.766 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:50.766 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:50.766 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:50.766 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:50.766 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:50.766 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:50.766 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:51.701 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:34:51.701 10:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.LiA /tmp/spdk.key-null.ixf /tmp/spdk.key-sha256.lS8 /tmp/spdk.key-sha384.uVg /tmp/spdk.key-sha512.4KB 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:34:51.701 10:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:54.230 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:34:54.230 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:34:54.230 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:34:54.230 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:34:54.230 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:34:54.230 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:34:54.230 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:34:54.230 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:34:54.230 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:34:54.230 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:34:54.230 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:34:54.230 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:34:54.230 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:34:54.230 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:34:54.230 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:34:54.230 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:34:54.230 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:34:54.489 00:34:54.489 real 0m49.936s 00:34:54.489 user 0m44.760s 00:34:54.489 sys 0m12.044s 00:34:54.489 10:43:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:54.489 10:43:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.489 ************************************ 00:34:54.489 END TEST nvmf_auth_host 00:34:54.489 ************************************ 00:34:54.489 10:43:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:34:54.489 10:43:39 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:34:54.489 10:43:39 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:34:54.489 10:43:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:54.489 10:43:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:54.489 10:43:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:54.489 ************************************ 00:34:54.489 START TEST nvmf_digest 00:34:54.489 ************************************ 00:34:54.489 10:43:39 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:34:54.489 * Looking for test storage... 
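For readability, the teardown traced above by auth.sh and clean_kernel_target can be read as the shell sketch below. It is reconstructed from the trace lines in this log; the redirect target of the bare 'echo 0' step is an assumption (the log does not show where the 0 is written), and the NQNs, key files and log path are the ones printed above.

  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  rm "$subsys/allowed_hosts/nqn.2024-02.io.spdk:host0"            # detach the allowed host from the subsystem
  rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 0 > "$subsys/namespaces/1/enable"                          # assumed target of the 'echo 0' in the trace
  rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
  rmdir "$subsys/namespaces/1"
  rmdir /sys/kernel/config/nvmet/ports/1
  rmdir "$subsys"
  modprobe -r nvmet_tcp nvmet                                     # unload the kernel NVMe/TCP target modules
  rm -f /tmp/spdk.key-null.LiA /tmp/spdk.key-null.ixf \
        /tmp/spdk.key-sha256.lS8 /tmp/spdk.key-sha384.uVg /tmp/spdk.key-sha512.4KB \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log   # auth keys and log from the test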
00:34:54.489 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:54.489 10:43:39 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:54.489 10:43:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:34:54.489 10:43:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:54.489 10:43:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:54.489 10:43:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:54.489 10:43:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:54.489 10:43:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:54.489 10:43:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:54.489 10:43:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:54.489 10:43:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:54.489 10:43:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:54.489 10:43:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:54.748 10:43:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:34:54.748 10:43:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:34:54.748 10:43:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:54.748 10:43:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:54.748 10:43:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:54.748 10:43:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:54.748 10:43:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:54.748 10:43:39 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:54.748 10:43:39 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:54.748 10:43:39 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:54.748 10:43:39 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:54.748 10:43:39 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:54.748 10:43:39 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:54.748 10:43:39 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:34:54.748 10:43:39 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:54.748 10:43:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:34:54.748 10:43:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:54.748 10:43:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:54.748 10:43:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:54.748 10:43:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:54.748 10:43:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:54.748 10:43:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:54.748 10:43:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:54.748 10:43:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:54.748 10:43:39 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:34:54.748 10:43:39 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:34:54.748 10:43:39 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:34:54.748 10:43:39 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:34:54.748 10:43:39 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:34:54.748 10:43:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:54.748 10:43:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:54.748 10:43:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:54.748 10:43:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:54.748 10:43:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:54.748 10:43:39 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:54.748 10:43:39 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:54.748 10:43:39 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:54.748 10:43:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:54.748 10:43:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:54.748 10:43:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:34:54.748 10:43:39 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:00.022 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:00.022 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:35:00.022 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:00.022 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:00.022 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:00.022 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:00.022 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:00.022 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:35:00.022 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:00.022 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:35:00.022 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:35:00.022 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:35:00.022 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:35:00.022 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:35:00.022 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:35:00.022 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:00.022 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:00.022 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:00.022 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:00.022 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:00.022 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:00.022 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:00.022 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:00.022 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:00.022 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:00.022 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:00.022 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:00.022 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:00.022 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:00.022 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:35:00.022 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:00.022 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:00.022 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:00.022 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:35:00.022 Found 0000:86:00.0 (0x8086 - 0x159b) 00:35:00.022 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:00.022 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:00.022 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:00.022 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:00.022 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:00.022 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:00.022 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:35:00.022 Found 0000:86:00.1 (0x8086 - 0x159b) 00:35:00.022 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:00.022 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:00.022 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:00.022 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:00.022 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:00.023 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:00.023 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:00.023 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:00.023 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:00.023 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:00.023 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:00.023 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:00.023 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:00.023 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:00.023 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:00.023 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:35:00.023 Found net devices under 0000:86:00.0: cvl_0_0 00:35:00.023 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:00.023 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:00.023 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:00.023 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:00.023 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:00.023 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:00.023 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:00.023 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:00.023 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:35:00.023 Found net devices under 0000:86:00.1: cvl_0_1 00:35:00.023 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:00.023 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:00.023 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:35:00.023 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:00.023 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:00.023 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:00.023 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:00.023 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:00.023 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:00.023 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:00.023 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:00.023 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:00.023 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:00.023 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:00.023 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:00.023 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:00.023 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:00.023 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:00.023 10:43:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:00.282 10:43:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:00.282 10:43:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:00.282 10:43:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:00.282 10:43:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:00.282 10:43:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:00.282 10:43:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:00.282 10:43:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:00.282 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:00.282 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.288 ms 00:35:00.282 00:35:00.282 --- 10.0.0.2 ping statistics --- 00:35:00.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:00.282 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:35:00.282 10:43:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:00.282 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:00.282 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.061 ms 00:35:00.282 00:35:00.282 --- 10.0.0.1 ping statistics --- 00:35:00.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:00.282 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:35:00.282 10:43:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:00.282 10:43:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:35:00.282 10:43:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:00.282 10:43:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:00.282 10:43:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:00.283 10:43:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:00.283 10:43:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:00.283 10:43:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:00.283 10:43:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:00.283 10:43:45 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:35:00.283 10:43:45 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:35:00.283 10:43:45 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:35:00.283 10:43:45 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:00.283 10:43:45 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:00.283 10:43:45 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:00.283 ************************************ 00:35:00.283 START TEST nvmf_digest_clean 00:35:00.283 ************************************ 00:35:00.283 10:43:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:35:00.283 10:43:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:35:00.283 10:43:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:35:00.283 10:43:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:35:00.283 10:43:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:35:00.283 10:43:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:35:00.283 10:43:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:00.283 10:43:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:00.283 10:43:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:00.283 10:43:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=2614782 00:35:00.283 10:43:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 2614782 00:35:00.283 10:43:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:35:00.283 10:43:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2614782 ']' 00:35:00.283 10:43:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:00.283 
10:43:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:00.283 10:43:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:00.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:00.283 10:43:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:00.283 10:43:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:00.542 [2024-07-14 10:43:45.272921] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:35:00.542 [2024-07-14 10:43:45.272963] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:00.542 EAL: No free 2048 kB hugepages reported on node 1 00:35:00.542 [2024-07-14 10:43:45.345060] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:00.542 [2024-07-14 10:43:45.385665] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:00.542 [2024-07-14 10:43:45.385703] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:00.542 [2024-07-14 10:43:45.385710] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:00.542 [2024-07-14 10:43:45.385717] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:00.542 [2024-07-14 10:43:45.385722] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
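Condensed from the nvmf_tcp_init and nvmfappstart traces above, the physical-NIC TCP setup that this target run relies on looks roughly like the sketch below. Interface names, addresses and the namespace name are the ones printed in this log; $rootdir stands in for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk.

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk                        # target-side network namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # let NVMe/TCP traffic reach the initiator side
  ping -c 1 10.0.0.2                                  # connectivity check toward the namespaced target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # and back toward the initiator
  ip netns exec cvl_0_0_ns_spdk $rootdir/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc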
00:35:00.542 [2024-07-14 10:43:45.385740] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:01.111 10:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:01.111 10:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:35:01.111 10:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:01.111 10:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:01.111 10:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:01.370 10:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:01.370 10:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:35:01.370 10:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:35:01.370 10:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:35:01.370 10:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:01.370 10:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:01.370 null0 00:35:01.370 [2024-07-14 10:43:46.191345] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:01.370 [2024-07-14 10:43:46.215500] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:01.370 10:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:01.370 10:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:35:01.370 10:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:01.370 10:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:01.370 10:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:35:01.370 10:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:35:01.371 10:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:35:01.371 10:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:01.371 10:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2614993 00:35:01.371 10:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2614993 /var/tmp/bperf.sock 00:35:01.371 10:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:35:01.371 10:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2614993 ']' 00:35:01.371 10:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:01.371 10:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:01.371 10:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:35:01.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:01.371 10:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:01.371 10:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:01.371 [2024-07-14 10:43:46.266086] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:35:01.371 [2024-07-14 10:43:46.266130] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2614993 ] 00:35:01.371 EAL: No free 2048 kB hugepages reported on node 1 00:35:01.371 [2024-07-14 10:43:46.330752] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:01.629 [2024-07-14 10:43:46.372079] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:01.629 10:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:01.629 10:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:35:01.629 10:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:01.629 10:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:01.629 10:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:01.887 10:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:01.887 10:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:02.146 nvme0n1 00:35:02.146 10:43:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:02.146 10:43:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:02.406 Running I/O for 2 seconds... 
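The randread/4096/qd128 run that has just started was wired up by run_bperf roughly as follows; this is a sketch of the trace above, with $rootdir again standing in for the full workspace path. The --ddgst flag makes the NVMe/TCP initiator generate and verify data digests, which is what produces the crc32c work measured afterwards.

  $rootdir/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &          # bperf idles until RPC init
  $rootdir/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  $rootdir/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  $rootdir/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests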
00:35:04.309 00:35:04.309 Latency(us) 00:35:04.309 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:04.309 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:04.309 nvme0n1 : 2.00 25604.20 100.02 0.00 0.00 4994.53 2336.50 11568.53 00:35:04.309 =================================================================================================================== 00:35:04.309 Total : 25604.20 100.02 0.00 0.00 4994.53 2336.50 11568.53 00:35:04.309 0 00:35:04.309 10:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:04.309 10:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:04.309 10:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:04.309 10:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:04.309 | select(.opcode=="crc32c") 00:35:04.309 | "\(.module_name) \(.executed)"' 00:35:04.309 10:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:04.567 10:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:04.567 10:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:04.567 10:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:04.567 10:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:04.567 10:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2614993 00:35:04.567 10:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2614993 ']' 00:35:04.567 10:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2614993 00:35:04.567 10:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:35:04.567 10:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:04.567 10:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2614993 00:35:04.567 10:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:35:04.567 10:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:35:04.567 10:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2614993' 00:35:04.567 killing process with pid 2614993 00:35:04.567 10:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2614993 00:35:04.567 Received shutdown signal, test time was about 2.000000 seconds 00:35:04.567 00:35:04.567 Latency(us) 00:35:04.567 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:04.567 =================================================================================================================== 00:35:04.567 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:04.567 10:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2614993 00:35:04.825 10:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:35:04.825 10:43:49 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:04.825 10:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:04.825 10:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:35:04.825 10:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:35:04.825 10:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:35:04.825 10:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:04.825 10:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2615507 00:35:04.825 10:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2615507 /var/tmp/bperf.sock 00:35:04.825 10:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:35:04.825 10:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2615507 ']' 00:35:04.825 10:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:04.825 10:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:04.825 10:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:04.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:04.825 10:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:04.825 10:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:04.825 [2024-07-14 10:43:49.618418] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:35:04.825 [2024-07-14 10:43:49.618466] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2615507 ] 00:35:04.825 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:04.825 Zero copy mechanism will not be used. 
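Each of these runs ends with the same verification step seen above at 10:43:49: the accel framework's crc32c counters are read over the bperf RPC socket and checked against the expected module. Roughly:

  read -r acc_module acc_executed < <(
      $rootdir/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
      jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
  (( acc_executed > 0 ))                # digests were actually computed during the run
  [[ $acc_module == software ]]         # DSA is disabled here, so the software module is expected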
00:35:04.825 EAL: No free 2048 kB hugepages reported on node 1 00:35:04.826 [2024-07-14 10:43:49.686510] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:04.826 [2024-07-14 10:43:49.725858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:04.826 10:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:04.826 10:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:35:04.826 10:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:04.826 10:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:04.826 10:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:05.084 10:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:05.084 10:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:05.343 nvme0n1 00:35:05.343 10:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:05.343 10:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:05.601 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:05.601 Zero copy mechanism will not be used. 00:35:05.601 Running I/O for 2 seconds... 
00:35:07.506 00:35:07.506 Latency(us) 00:35:07.506 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:07.506 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:35:07.506 nvme0n1 : 2.00 5391.24 673.90 0.00 0.00 2964.95 676.73 10200.82 00:35:07.506 =================================================================================================================== 00:35:07.506 Total : 5391.24 673.90 0.00 0.00 2964.95 676.73 10200.82 00:35:07.506 0 00:35:07.506 10:43:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:07.506 10:43:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:07.506 10:43:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:07.506 10:43:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:07.506 | select(.opcode=="crc32c") 00:35:07.506 | "\(.module_name) \(.executed)"' 00:35:07.506 10:43:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:07.765 10:43:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:07.765 10:43:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:07.765 10:43:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:07.765 10:43:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:07.765 10:43:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2615507 00:35:07.765 10:43:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2615507 ']' 00:35:07.765 10:43:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2615507 00:35:07.765 10:43:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:35:07.765 10:43:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:07.765 10:43:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2615507 00:35:07.765 10:43:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:35:07.765 10:43:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:35:07.765 10:43:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2615507' 00:35:07.765 killing process with pid 2615507 00:35:07.765 10:43:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2615507 00:35:07.765 Received shutdown signal, test time was about 2.000000 seconds 00:35:07.765 00:35:07.765 Latency(us) 00:35:07.765 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:07.765 =================================================================================================================== 00:35:07.765 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:07.765 10:43:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2615507 00:35:08.120 10:43:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:35:08.120 10:43:52 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:08.120 10:43:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:08.120 10:43:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:35:08.120 10:43:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:35:08.121 10:43:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:35:08.121 10:43:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:08.121 10:43:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2616149 00:35:08.121 10:43:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2616149 /var/tmp/bperf.sock 00:35:08.121 10:43:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:35:08.121 10:43:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2616149 ']' 00:35:08.121 10:43:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:08.121 10:43:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:08.121 10:43:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:08.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:08.121 10:43:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:08.121 10:43:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:08.121 [2024-07-14 10:43:52.863323] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:35:08.121 [2024-07-14 10:43:52.863371] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2616149 ] 00:35:08.121 EAL: No free 2048 kB hugepages reported on node 1 00:35:08.121 [2024-07-14 10:43:52.932608] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:08.121 [2024-07-14 10:43:52.973211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:08.121 10:43:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:08.121 10:43:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:35:08.121 10:43:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:08.121 10:43:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:08.121 10:43:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:08.378 10:43:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:08.378 10:43:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:08.943 nvme0n1 00:35:08.943 10:43:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:08.943 10:43:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:08.943 Running I/O for 2 seconds... 
00:35:10.868 00:35:10.868 Latency(us) 00:35:10.868 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:10.868 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:10.868 nvme0n1 : 2.00 27097.11 105.85 0.00 0.00 4715.22 4131.62 8491.19 00:35:10.868 =================================================================================================================== 00:35:10.868 Total : 27097.11 105.85 0.00 0.00 4715.22 4131.62 8491.19 00:35:10.868 0 00:35:10.868 10:43:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:10.868 10:43:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:10.868 10:43:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:10.868 10:43:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:10.868 | select(.opcode=="crc32c") 00:35:10.868 | "\(.module_name) \(.executed)"' 00:35:10.868 10:43:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:11.126 10:43:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:11.126 10:43:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:11.126 10:43:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:11.126 10:43:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:11.126 10:43:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2616149 00:35:11.126 10:43:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2616149 ']' 00:35:11.126 10:43:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2616149 00:35:11.126 10:43:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:35:11.126 10:43:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:11.126 10:43:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2616149 00:35:11.126 10:43:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:35:11.126 10:43:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:35:11.126 10:43:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2616149' 00:35:11.126 killing process with pid 2616149 00:35:11.126 10:43:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2616149 00:35:11.126 Received shutdown signal, test time was about 2.000000 seconds 00:35:11.126 00:35:11.126 Latency(us) 00:35:11.126 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:11.126 =================================================================================================================== 00:35:11.126 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:11.126 10:43:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2616149 00:35:11.385 10:43:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:35:11.385 10:43:56 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:11.385 10:43:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:11.385 10:43:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:35:11.385 10:43:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:35:11.385 10:43:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:35:11.385 10:43:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:11.385 10:43:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2616622 00:35:11.385 10:43:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2616622 /var/tmp/bperf.sock 00:35:11.385 10:43:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:35:11.385 10:43:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2616622 ']' 00:35:11.385 10:43:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:11.386 10:43:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:11.386 10:43:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:11.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:11.386 10:43:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:11.386 10:43:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:11.386 [2024-07-14 10:43:56.200657] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:35:11.386 [2024-07-14 10:43:56.200707] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2616622 ] 00:35:11.386 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:11.386 Zero copy mechanism will not be used. 
00:35:11.386 EAL: No free 2048 kB hugepages reported on node 1 00:35:11.386 [2024-07-14 10:43:56.269886] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:11.386 [2024-07-14 10:43:56.308773] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:11.386 10:43:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:11.386 10:43:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:35:11.386 10:43:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:11.386 10:43:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:11.386 10:43:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:11.644 10:43:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:11.644 10:43:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:11.902 nvme0n1 00:35:11.902 10:43:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:11.902 10:43:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:12.160 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:12.160 Zero copy mechanism will not be used. 00:35:12.160 Running I/O for 2 seconds... 
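Taken together, nvmf_digest_clean repeats this check over four workload shapes; the driver logic traced above is equivalent to the short sequence below. The 131072-byte cases exceed the 65536-byte threshold, which is why bdevperf reports that the zero-copy mechanism will not be used for them.

  # rw, I/O size, queue depth, DSA flag: the four combinations traced in this log
  run_bperf randread  4096   128 false
  run_bperf randread  131072 16  false
  run_bperf randwrite 4096   128 false
  run_bperf randwrite 131072 16  false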
00:35:14.063 00:35:14.063 Latency(us) 00:35:14.063 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:14.063 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:35:14.063 nvme0n1 : 2.00 6392.17 799.02 0.00 0.00 2498.84 1780.87 11283.59 00:35:14.063 =================================================================================================================== 00:35:14.063 Total : 6392.17 799.02 0.00 0.00 2498.84 1780.87 11283.59 00:35:14.063 0 00:35:14.063 10:43:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:14.063 10:43:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:14.063 10:43:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:14.063 10:43:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:14.063 | select(.opcode=="crc32c") 00:35:14.063 | "\(.module_name) \(.executed)"' 00:35:14.063 10:43:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:14.322 10:43:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:14.322 10:43:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:14.322 10:43:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:14.322 10:43:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:14.322 10:43:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2616622 00:35:14.322 10:43:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2616622 ']' 00:35:14.322 10:43:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2616622 00:35:14.322 10:43:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:35:14.322 10:43:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:14.322 10:43:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2616622 00:35:14.322 10:43:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:35:14.322 10:43:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:35:14.322 10:43:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2616622' 00:35:14.322 killing process with pid 2616622 00:35:14.322 10:43:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2616622 00:35:14.322 Received shutdown signal, test time was about 2.000000 seconds 00:35:14.322 00:35:14.322 Latency(us) 00:35:14.322 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:14.322 =================================================================================================================== 00:35:14.322 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:14.322 10:43:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2616622 00:35:14.581 10:43:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2614782 00:35:14.581 10:43:59 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2614782 ']' 00:35:14.581 10:43:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2614782 00:35:14.581 10:43:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:35:14.581 10:43:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:14.581 10:43:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2614782 00:35:14.581 10:43:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:14.581 10:43:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:14.581 10:43:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2614782' 00:35:14.581 killing process with pid 2614782 00:35:14.581 10:43:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2614782 00:35:14.581 10:43:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2614782 00:35:14.840 00:35:14.840 real 0m14.402s 00:35:14.840 user 0m26.858s 00:35:14.840 sys 0m4.532s 00:35:14.840 10:43:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:14.840 10:43:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:14.840 ************************************ 00:35:14.840 END TEST nvmf_digest_clean 00:35:14.840 ************************************ 00:35:14.840 10:43:59 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:35:14.840 10:43:59 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:35:14.840 10:43:59 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:14.840 10:43:59 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:14.840 10:43:59 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:14.840 ************************************ 00:35:14.840 START TEST nvmf_digest_error 00:35:14.840 ************************************ 00:35:14.840 10:43:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:35:14.840 10:43:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:35:14.840 10:43:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:14.840 10:43:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:14.840 10:43:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:14.840 10:43:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=2617153 00:35:14.840 10:43:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 2617153 00:35:14.840 10:43:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:35:14.840 10:43:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2617153 ']' 00:35:14.840 10:43:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:35:14.840 10:43:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:14.840 10:43:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:14.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:14.840 10:43:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:14.840 10:43:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:14.840 [2024-07-14 10:43:59.744992] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:35:14.840 [2024-07-14 10:43:59.745033] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:14.840 EAL: No free 2048 kB hugepages reported on node 1 00:35:14.840 [2024-07-14 10:43:59.812567] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:15.099 [2024-07-14 10:43:59.852400] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:15.099 [2024-07-14 10:43:59.852438] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:15.099 [2024-07-14 10:43:59.852445] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:15.099 [2024-07-14 10:43:59.852451] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:15.099 [2024-07-14 10:43:59.852456] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
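The digest_error variant that starts here brings the target up paused: nvmf_tgt is launched inside the cvl_0_0_ns_spdk namespace with --wait-for-rpc, the harness waits for /var/tmp/spdk.sock, and, as the next lines show, crc32c is assigned to the accel error module before a null0 bdev is exposed over TCP on 10.0.0.2:4420. A minimal sketch of that bring-up, with paths abbreviated relative to the SPDK tree:

    # launch the target paused so accel can be reconfigured before init completes
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    # once /var/tmp/spdk.sock is up, route crc32c through the error module
    scripts/rpc.py accel_assign_opc -o crc32c -m error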
00:35:15.099 [2024-07-14 10:43:59.852473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:15.099 10:43:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:15.099 10:43:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:35:15.099 10:43:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:15.099 10:43:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:15.099 10:43:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:15.099 10:43:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:15.099 10:43:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:35:15.099 10:43:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.099 10:43:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:15.099 [2024-07-14 10:43:59.920904] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:35:15.099 10:43:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.099 10:43:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:35:15.099 10:43:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:35:15.099 10:43:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.099 10:43:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:15.099 null0 00:35:15.099 [2024-07-14 10:44:00.008580] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:15.099 [2024-07-14 10:44:00.032758] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:15.099 10:44:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.099 10:44:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:35:15.099 10:44:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:15.099 10:44:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:35:15.099 10:44:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:35:15.099 10:44:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:35:15.099 10:44:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2617349 00:35:15.099 10:44:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2617349 /var/tmp/bperf.sock 00:35:15.099 10:44:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:35:15.099 10:44:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2617349 ']' 00:35:15.099 10:44:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:15.099 10:44:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:35:15.099 10:44:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:15.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:15.099 10:44:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:15.099 10:44:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:15.358 [2024-07-14 10:44:00.084806] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:35:15.358 [2024-07-14 10:44:00.084854] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2617349 ] 00:35:15.358 EAL: No free 2048 kB hugepages reported on node 1 00:35:15.358 [2024-07-14 10:44:00.153242] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:15.358 [2024-07-14 10:44:00.193739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:15.358 10:44:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:15.358 10:44:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:35:15.358 10:44:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:15.358 10:44:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:15.616 10:44:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:15.616 10:44:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.617 10:44:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:15.617 10:44:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.617 10:44:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:15.617 10:44:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:15.875 nvme0n1 00:35:15.875 10:44:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:35:15.875 10:44:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.875 10:44:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:15.875 10:44:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.875 10:44:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:15.875 10:44:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:15.875 Running I/O for 2 seconds... 00:35:15.875 [2024-07-14 10:44:00.834922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:15.875 [2024-07-14 10:44:00.834957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:17801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.875 [2024-07-14 10:44:00.834967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.875 [2024-07-14 10:44:00.846360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:15.875 [2024-07-14 10:44:00.846383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:13775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.875 [2024-07-14 10:44:00.846391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.875 [2024-07-14 10:44:00.854765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:15.875 [2024-07-14 10:44:00.854792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:4875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.875 [2024-07-14 10:44:00.854801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.134 [2024-07-14 10:44:00.866263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.134 [2024-07-14 10:44:00.866285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:14042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.134 [2024-07-14 10:44:00.866293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.134 [2024-07-14 10:44:00.878831] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.134 [2024-07-14 10:44:00.878853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.134 [2024-07-14 10:44:00.878861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.134 [2024-07-14 10:44:00.890541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.134 [2024-07-14 10:44:00.890562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:23073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.134 [2024-07-14 10:44:00.890571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.134 [2024-07-14 10:44:00.898649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.134 [2024-07-14 10:44:00.898670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:23884 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.134 [2024-07-14 10:44:00.898678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.134 [2024-07-14 10:44:00.910047] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.134 [2024-07-14 10:44:00.910069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:19 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.134 [2024-07-14 10:44:00.910077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.134 [2024-07-14 10:44:00.922457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.134 [2024-07-14 10:44:00.922478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:14658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.134 [2024-07-14 10:44:00.922486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.134 [2024-07-14 10:44:00.933948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.134 [2024-07-14 10:44:00.933968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:22769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.134 [2024-07-14 10:44:00.933977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.134 [2024-07-14 10:44:00.946922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.134 [2024-07-14 10:44:00.946943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.134 [2024-07-14 10:44:00.946950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.134 [2024-07-14 10:44:00.958515] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.134 [2024-07-14 10:44:00.958536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:12352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.134 [2024-07-14 10:44:00.958544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.134 [2024-07-14 10:44:00.967647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.134 [2024-07-14 10:44:00.967667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:24255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.134 [2024-07-14 10:44:00.967675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.134 [2024-07-14 10:44:00.979702] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.134 [2024-07-14 10:44:00.979723] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:63 nsid:1 lba:9014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.134 [2024-07-14 10:44:00.979731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.134 [2024-07-14 10:44:00.991452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.134 [2024-07-14 10:44:00.991472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:23888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.134 [2024-07-14 10:44:00.991480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.134 [2024-07-14 10:44:01.005109] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.134 [2024-07-14 10:44:01.005134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:10796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.134 [2024-07-14 10:44:01.005142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.134 [2024-07-14 10:44:01.017622] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.134 [2024-07-14 10:44:01.017643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:16929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.134 [2024-07-14 10:44:01.017651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.134 [2024-07-14 10:44:01.030102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.134 [2024-07-14 10:44:01.030123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:9488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.134 [2024-07-14 10:44:01.030131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.134 [2024-07-14 10:44:01.041163] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.134 [2024-07-14 10:44:01.041183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.134 [2024-07-14 10:44:01.041191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.134 [2024-07-14 10:44:01.050078] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.134 [2024-07-14 10:44:01.050098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.134 [2024-07-14 10:44:01.050110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.134 [2024-07-14 10:44:01.062167] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.134 [2024-07-14 10:44:01.062188] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.134 [2024-07-14 10:44:01.062196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.134 [2024-07-14 10:44:01.070827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.134 [2024-07-14 10:44:01.070849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:3027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.134 [2024-07-14 10:44:01.070858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.134 [2024-07-14 10:44:01.082166] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.134 [2024-07-14 10:44:01.082187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:23470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.134 [2024-07-14 10:44:01.082195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.134 [2024-07-14 10:44:01.094286] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.135 [2024-07-14 10:44:01.094309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:9052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.135 [2024-07-14 10:44:01.094317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.135 [2024-07-14 10:44:01.102463] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.135 [2024-07-14 10:44:01.102484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:7969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.135 [2024-07-14 10:44:01.102493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.393 [2024-07-14 10:44:01.113692] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.393 [2024-07-14 10:44:01.113716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:5 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.393 [2024-07-14 10:44:01.113724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.393 [2024-07-14 10:44:01.124444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.393 [2024-07-14 10:44:01.124466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:24307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.393 [2024-07-14 10:44:01.124475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.393 [2024-07-14 10:44:01.134323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 
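Every read in this run fails digest verification by design: crc32c is handled by the accel error module with corruption injected (accel_error_inject_error -o crc32c -t corrupt -i 256), while bdev_nvme_set_options --bdev-retry-count -1 lets the initiator keep retrying, so each failed I/O is completed as COMMAND TRANSIENT TRANSPORT ERROR (00/22) and resubmitted. A sketch of the arming step, using the rpc_cmd/bperf_rpc wrappers from the trace above:

    # initiator side: record NVMe error stats and retry failed I/O indefinitely
    bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # arm corruption of crc32c operations in the error module (flags exactly as in the trace)
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256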
00:35:16.393 [2024-07-14 10:44:01.134345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.393 [2024-07-14 10:44:01.134354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.393 [2024-07-14 10:44:01.145983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.393 [2024-07-14 10:44:01.146007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:17726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.393 [2024-07-14 10:44:01.146016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.393 [2024-07-14 10:44:01.157247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.393 [2024-07-14 10:44:01.157267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.394 [2024-07-14 10:44:01.157275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.394 [2024-07-14 10:44:01.165760] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.394 [2024-07-14 10:44:01.165781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:11682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.394 [2024-07-14 10:44:01.165789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.394 [2024-07-14 10:44:01.177832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.394 [2024-07-14 10:44:01.177853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:10688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.394 [2024-07-14 10:44:01.177861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.394 [2024-07-14 10:44:01.190914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.394 [2024-07-14 10:44:01.190936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.394 [2024-07-14 10:44:01.190944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.394 [2024-07-14 10:44:01.198694] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.394 [2024-07-14 10:44:01.198714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:23212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.394 [2024-07-14 10:44:01.198722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.394 [2024-07-14 10:44:01.210277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.394 [2024-07-14 10:44:01.210298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.394 [2024-07-14 10:44:01.210306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.394 [2024-07-14 10:44:01.218240] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.394 [2024-07-14 10:44:01.218277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.394 [2024-07-14 10:44:01.218285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.394 [2024-07-14 10:44:01.228336] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.394 [2024-07-14 10:44:01.228357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:8088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.394 [2024-07-14 10:44:01.228365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.394 [2024-07-14 10:44:01.238764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.394 [2024-07-14 10:44:01.238785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:20505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.394 [2024-07-14 10:44:01.238794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.394 [2024-07-14 10:44:01.247390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.394 [2024-07-14 10:44:01.247411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:5993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.394 [2024-07-14 10:44:01.247419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.394 [2024-07-14 10:44:01.257444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.394 [2024-07-14 10:44:01.257467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:6772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.394 [2024-07-14 10:44:01.257475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.394 [2024-07-14 10:44:01.266097] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.394 [2024-07-14 10:44:01.266117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.394 [2024-07-14 10:44:01.266125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.394 [2024-07-14 10:44:01.275765] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.394 [2024-07-14 10:44:01.275787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:24802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.394 [2024-07-14 10:44:01.275795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.394 [2024-07-14 10:44:01.285374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.394 [2024-07-14 10:44:01.285394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:23631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.394 [2024-07-14 10:44:01.285403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.394 [2024-07-14 10:44:01.295388] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.394 [2024-07-14 10:44:01.295409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.394 [2024-07-14 10:44:01.295417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.394 [2024-07-14 10:44:01.303611] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.394 [2024-07-14 10:44:01.303632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:17322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.394 [2024-07-14 10:44:01.303640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.394 [2024-07-14 10:44:01.313558] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.394 [2024-07-14 10:44:01.313582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:4985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.394 [2024-07-14 10:44:01.313591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.394 [2024-07-14 10:44:01.323338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.394 [2024-07-14 10:44:01.323360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:61 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.394 [2024-07-14 10:44:01.323368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.394 [2024-07-14 10:44:01.334085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.394 [2024-07-14 10:44:01.334106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:22931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.394 [2024-07-14 10:44:01.334115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:35:16.394 [2024-07-14 10:44:01.342726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.394 [2024-07-14 10:44:01.342747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.394 [2024-07-14 10:44:01.342755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.394 [2024-07-14 10:44:01.353685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.394 [2024-07-14 10:44:01.353706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:8579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.394 [2024-07-14 10:44:01.353714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.394 [2024-07-14 10:44:01.363913] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.394 [2024-07-14 10:44:01.363935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:21969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.394 [2024-07-14 10:44:01.363943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.654 [2024-07-14 10:44:01.373848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.654 [2024-07-14 10:44:01.373870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:17894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.654 [2024-07-14 10:44:01.373879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.654 [2024-07-14 10:44:01.382844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.654 [2024-07-14 10:44:01.382865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:10990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.654 [2024-07-14 10:44:01.382873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.654 [2024-07-14 10:44:01.392058] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.654 [2024-07-14 10:44:01.392079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.654 [2024-07-14 10:44:01.392087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.654 [2024-07-14 10:44:01.402336] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.654 [2024-07-14 10:44:01.402356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:2652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.654 [2024-07-14 10:44:01.402364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.654 [2024-07-14 10:44:01.410809] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.654 [2024-07-14 10:44:01.410830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:7738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.654 [2024-07-14 10:44:01.410838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.654 [2024-07-14 10:44:01.420610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.654 [2024-07-14 10:44:01.420630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.654 [2024-07-14 10:44:01.420639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.654 [2024-07-14 10:44:01.429686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.654 [2024-07-14 10:44:01.429708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:5738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.654 [2024-07-14 10:44:01.429716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.654 [2024-07-14 10:44:01.439425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.654 [2024-07-14 10:44:01.439447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:17372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.654 [2024-07-14 10:44:01.439455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.654 [2024-07-14 10:44:01.448854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.654 [2024-07-14 10:44:01.448876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:15211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.654 [2024-07-14 10:44:01.448884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.654 [2024-07-14 10:44:01.457818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.654 [2024-07-14 10:44:01.457839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:5602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.654 [2024-07-14 10:44:01.457848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.654 [2024-07-14 10:44:01.467083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.654 [2024-07-14 10:44:01.467104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:1663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.654 [2024-07-14 10:44:01.467112] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.654 [2024-07-14 10:44:01.476162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.654 [2024-07-14 10:44:01.476183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:25345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.654 [2024-07-14 10:44:01.476195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.654 [2024-07-14 10:44:01.487591] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.654 [2024-07-14 10:44:01.487611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.654 [2024-07-14 10:44:01.487619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.654 [2024-07-14 10:44:01.498932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.654 [2024-07-14 10:44:01.498954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.654 [2024-07-14 10:44:01.498963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.655 [2024-07-14 10:44:01.507803] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.655 [2024-07-14 10:44:01.507824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:16499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.655 [2024-07-14 10:44:01.507833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.655 [2024-07-14 10:44:01.519177] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.655 [2024-07-14 10:44:01.519198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:18731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.655 [2024-07-14 10:44:01.519206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.655 [2024-07-14 10:44:01.530216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.655 [2024-07-14 10:44:01.530245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.655 [2024-07-14 10:44:01.530253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.655 [2024-07-14 10:44:01.538712] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.655 [2024-07-14 10:44:01.538732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:4097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
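When the 2-second window closes, the pass is judged the same way as the clean run earlier: accel statistics are read back over the bperf socket and the crc32c entry is filtered with jq to see which module executed the digests and how often. A sketch of that check, reusing the filter shown above (exp_module is set by the script, e.g. software for the clean pass):

    read -r acc_module acc_executed < <(scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
    # the test passes only if the expected module actually executed crc32c work
    [[ $acc_module == "$exp_module" ]] && (( acc_executed > 0 ))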
00:35:16.655 [2024-07-14 10:44:01.538741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.655 [2024-07-14 10:44:01.550451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.655 [2024-07-14 10:44:01.550472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:25565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.655 [2024-07-14 10:44:01.550480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.655 [2024-07-14 10:44:01.559061] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.655 [2024-07-14 10:44:01.559082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:10857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.655 [2024-07-14 10:44:01.559090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.655 [2024-07-14 10:44:01.570392] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.655 [2024-07-14 10:44:01.570417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:25488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.655 [2024-07-14 10:44:01.570426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.655 [2024-07-14 10:44:01.582731] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.655 [2024-07-14 10:44:01.582752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:9465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.655 [2024-07-14 10:44:01.582761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.655 [2024-07-14 10:44:01.590771] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.655 [2024-07-14 10:44:01.590792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:18300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.655 [2024-07-14 10:44:01.590800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.655 [2024-07-14 10:44:01.602864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.655 [2024-07-14 10:44:01.602884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:13178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.655 [2024-07-14 10:44:01.602892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.655 [2024-07-14 10:44:01.613245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.655 [2024-07-14 10:44:01.613266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 
lba:647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.655 [2024-07-14 10:44:01.613274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.655 [2024-07-14 10:44:01.622723] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.655 [2024-07-14 10:44:01.622744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:22471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.655 [2024-07-14 10:44:01.622752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.655 [2024-07-14 10:44:01.631880] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.655 [2024-07-14 10:44:01.631903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:6983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.655 [2024-07-14 10:44:01.631911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.915 [2024-07-14 10:44:01.641795] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.915 [2024-07-14 10:44:01.641817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:7117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.915 [2024-07-14 10:44:01.641825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.915 [2024-07-14 10:44:01.650551] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.915 [2024-07-14 10:44:01.650572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.915 [2024-07-14 10:44:01.650580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.915 [2024-07-14 10:44:01.660872] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.915 [2024-07-14 10:44:01.660892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:22130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.915 [2024-07-14 10:44:01.660901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.915 [2024-07-14 10:44:01.671203] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.915 [2024-07-14 10:44:01.671223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:22112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.915 [2024-07-14 10:44:01.671238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.915 [2024-07-14 10:44:01.682998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.915 [2024-07-14 10:44:01.683019] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:12164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.915 [2024-07-14 10:44:01.683027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.915 [2024-07-14 10:44:01.691758] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.915 [2024-07-14 10:44:01.691779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:9302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.915 [2024-07-14 10:44:01.691787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.915 [2024-07-14 10:44:01.703421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.915 [2024-07-14 10:44:01.703442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:23217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.915 [2024-07-14 10:44:01.703450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.915 [2024-07-14 10:44:01.711958] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.915 [2024-07-14 10:44:01.711978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:11546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.915 [2024-07-14 10:44:01.711986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.915 [2024-07-14 10:44:01.722957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.915 [2024-07-14 10:44:01.722977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:12653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.915 [2024-07-14 10:44:01.722985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.915 [2024-07-14 10:44:01.733606] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.915 [2024-07-14 10:44:01.733627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:10067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.915 [2024-07-14 10:44:01.733635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.915 [2024-07-14 10:44:01.741998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.915 [2024-07-14 10:44:01.742018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:13291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.915 [2024-07-14 10:44:01.742029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.915 [2024-07-14 10:44:01.754218] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 
00:35:16.915 [2024-07-14 10:44:01.754243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:5881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.915 [2024-07-14 10:44:01.754251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.915 [2024-07-14 10:44:01.765331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.915 [2024-07-14 10:44:01.765353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:7431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.915 [2024-07-14 10:44:01.765361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.915 [2024-07-14 10:44:01.773637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.915 [2024-07-14 10:44:01.773658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:7369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.915 [2024-07-14 10:44:01.773666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.915 [2024-07-14 10:44:01.786638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.915 [2024-07-14 10:44:01.786659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:21817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.915 [2024-07-14 10:44:01.786667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.915 [2024-07-14 10:44:01.798271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.915 [2024-07-14 10:44:01.798293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:13490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.915 [2024-07-14 10:44:01.798301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.915 [2024-07-14 10:44:01.810055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.915 [2024-07-14 10:44:01.810075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.915 [2024-07-14 10:44:01.810084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.915 [2024-07-14 10:44:01.819056] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.915 [2024-07-14 10:44:01.819077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.915 [2024-07-14 10:44:01.819085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.915 [2024-07-14 10:44:01.829740] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.915 [2024-07-14 10:44:01.829762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:9058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.915 [2024-07-14 10:44:01.829770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.915 [2024-07-14 10:44:01.839800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.915 [2024-07-14 10:44:01.839820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:15910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.915 [2024-07-14 10:44:01.839828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.915 [2024-07-14 10:44:01.848664] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.915 [2024-07-14 10:44:01.848685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:20159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.915 [2024-07-14 10:44:01.848694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.915 [2024-07-14 10:44:01.858178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.915 [2024-07-14 10:44:01.858199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.915 [2024-07-14 10:44:01.858207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.915 [2024-07-14 10:44:01.867979] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.915 [2024-07-14 10:44:01.868000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:20189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.915 [2024-07-14 10:44:01.868008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.915 [2024-07-14 10:44:01.876290] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.915 [2024-07-14 10:44:01.876312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:17514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.915 [2024-07-14 10:44:01.876320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.915 [2024-07-14 10:44:01.887899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:16.915 [2024-07-14 10:44:01.887921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:6474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.915 [2024-07-14 10:44:01.887929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.175 [2024-07-14 10:44:01.897868] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.175 [2024-07-14 10:44:01.897891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:21094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.175 [2024-07-14 10:44:01.897899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.175 [2024-07-14 10:44:01.906934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.175 [2024-07-14 10:44:01.906954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.175 [2024-07-14 10:44:01.906963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.175 [2024-07-14 10:44:01.916802] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.175 [2024-07-14 10:44:01.916823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:14274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.175 [2024-07-14 10:44:01.916834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.175 [2024-07-14 10:44:01.925631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.175 [2024-07-14 10:44:01.925652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:19047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.175 [2024-07-14 10:44:01.925660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.175 [2024-07-14 10:44:01.935119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.175 [2024-07-14 10:44:01.935140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.175 [2024-07-14 10:44:01.935148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.175 [2024-07-14 10:44:01.944688] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.175 [2024-07-14 10:44:01.944708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.175 [2024-07-14 10:44:01.944716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.175 [2024-07-14 10:44:01.954264] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.175 [2024-07-14 10:44:01.954285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:18848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.175 [2024-07-14 10:44:01.954293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:35:17.175 [2024-07-14 10:44:01.963169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.175 [2024-07-14 10:44:01.963189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.175 [2024-07-14 10:44:01.963197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.175 [2024-07-14 10:44:01.972470] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.175 [2024-07-14 10:44:01.972490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.175 [2024-07-14 10:44:01.972498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.175 [2024-07-14 10:44:01.981830] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.175 [2024-07-14 10:44:01.981851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:3895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.175 [2024-07-14 10:44:01.981859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.175 [2024-07-14 10:44:01.993129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.175 [2024-07-14 10:44:01.993150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.175 [2024-07-14 10:44:01.993157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.175 [2024-07-14 10:44:02.000856] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.175 [2024-07-14 10:44:02.000883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:17594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.175 [2024-07-14 10:44:02.000891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.175 [2024-07-14 10:44:02.011544] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.175 [2024-07-14 10:44:02.011566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:3360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.175 [2024-07-14 10:44:02.011574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.175 [2024-07-14 10:44:02.024194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.176 [2024-07-14 10:44:02.024215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:18534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.176 [2024-07-14 10:44:02.024223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.176 [2024-07-14 10:44:02.032517] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.176 [2024-07-14 10:44:02.032537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:18606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.176 [2024-07-14 10:44:02.032545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.176 [2024-07-14 10:44:02.043684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.176 [2024-07-14 10:44:02.043705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:19425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.176 [2024-07-14 10:44:02.043712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.176 [2024-07-14 10:44:02.052062] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.176 [2024-07-14 10:44:02.052083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.176 [2024-07-14 10:44:02.052091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.176 [2024-07-14 10:44:02.063848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.176 [2024-07-14 10:44:02.063869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:10893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.176 [2024-07-14 10:44:02.063877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.176 [2024-07-14 10:44:02.075304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.176 [2024-07-14 10:44:02.075324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:23733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.176 [2024-07-14 10:44:02.075332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.176 [2024-07-14 10:44:02.084121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.176 [2024-07-14 10:44:02.084141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:21878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.176 [2024-07-14 10:44:02.084149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.176 [2024-07-14 10:44:02.096385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.176 [2024-07-14 10:44:02.096405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:17987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.176 [2024-07-14 10:44:02.096413] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.176 [2024-07-14 10:44:02.104583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.176 [2024-07-14 10:44:02.104603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.176 [2024-07-14 10:44:02.104611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.176 [2024-07-14 10:44:02.116362] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.176 [2024-07-14 10:44:02.116383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.176 [2024-07-14 10:44:02.116391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.176 [2024-07-14 10:44:02.124835] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.176 [2024-07-14 10:44:02.124855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:2343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.176 [2024-07-14 10:44:02.124863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.176 [2024-07-14 10:44:02.135987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.176 [2024-07-14 10:44:02.136008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:3839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.176 [2024-07-14 10:44:02.136016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.176 [2024-07-14 10:44:02.145219] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.176 [2024-07-14 10:44:02.145244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.176 [2024-07-14 10:44:02.145252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.176 [2024-07-14 10:44:02.154268] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.176 [2024-07-14 10:44:02.154289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:10949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.435 [2024-07-14 10:44:02.154298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.435 [2024-07-14 10:44:02.164615] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.435 [2024-07-14 10:44:02.164636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:15190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:17.435 [2024-07-14 10:44:02.164644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.435 [2024-07-14 10:44:02.174182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.435 [2024-07-14 10:44:02.174202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.435 [2024-07-14 10:44:02.174213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.435 [2024-07-14 10:44:02.185641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.435 [2024-07-14 10:44:02.185662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.435 [2024-07-14 10:44:02.185670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.435 [2024-07-14 10:44:02.194464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.435 [2024-07-14 10:44:02.194484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:1220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.435 [2024-07-14 10:44:02.194492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.435 [2024-07-14 10:44:02.204454] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.435 [2024-07-14 10:44:02.204474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:19216 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.435 [2024-07-14 10:44:02.204482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.435 [2024-07-14 10:44:02.213417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.435 [2024-07-14 10:44:02.213437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:24056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.435 [2024-07-14 10:44:02.213445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.435 [2024-07-14 10:44:02.223135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.435 [2024-07-14 10:44:02.223156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:7677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.435 [2024-07-14 10:44:02.223164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.435 [2024-07-14 10:44:02.233305] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.435 [2024-07-14 10:44:02.233325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 
lba:18610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.435 [2024-07-14 10:44:02.233333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.435 [2024-07-14 10:44:02.243336] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.435 [2024-07-14 10:44:02.243356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:22 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.435 [2024-07-14 10:44:02.243364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.435 [2024-07-14 10:44:02.251859] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.435 [2024-07-14 10:44:02.251879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:5946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.435 [2024-07-14 10:44:02.251887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.435 [2024-07-14 10:44:02.262242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.435 [2024-07-14 10:44:02.262267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:17819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.435 [2024-07-14 10:44:02.262275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.435 [2024-07-14 10:44:02.272320] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.435 [2024-07-14 10:44:02.272340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:3460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.435 [2024-07-14 10:44:02.272348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.435 [2024-07-14 10:44:02.281613] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.435 [2024-07-14 10:44:02.281633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.435 [2024-07-14 10:44:02.281641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.435 [2024-07-14 10:44:02.290036] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.435 [2024-07-14 10:44:02.290056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:18091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.435 [2024-07-14 10:44:02.290064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.435 [2024-07-14 10:44:02.301766] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.435 [2024-07-14 10:44:02.301788] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:1806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.436 [2024-07-14 10:44:02.301796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.436 [2024-07-14 10:44:02.309458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.436 [2024-07-14 10:44:02.309478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:11612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.436 [2024-07-14 10:44:02.309486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.436 [2024-07-14 10:44:02.321238] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.436 [2024-07-14 10:44:02.321259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.436 [2024-07-14 10:44:02.321267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.436 [2024-07-14 10:44:02.333555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.436 [2024-07-14 10:44:02.333575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:13126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.436 [2024-07-14 10:44:02.333584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.436 [2024-07-14 10:44:02.344393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.436 [2024-07-14 10:44:02.344413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:11022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.436 [2024-07-14 10:44:02.344425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.436 [2024-07-14 10:44:02.352656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.436 [2024-07-14 10:44:02.352676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:16269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.436 [2024-07-14 10:44:02.352685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.436 [2024-07-14 10:44:02.364329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.436 [2024-07-14 10:44:02.364350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:3256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.436 [2024-07-14 10:44:02.364358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.436 [2024-07-14 10:44:02.375344] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 
00:35:17.436 [2024-07-14 10:44:02.375366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:5174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.436 [2024-07-14 10:44:02.375374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.436 [2024-07-14 10:44:02.384550] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.436 [2024-07-14 10:44:02.384572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.436 [2024-07-14 10:44:02.384583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.436 [2024-07-14 10:44:02.396329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.436 [2024-07-14 10:44:02.396350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.436 [2024-07-14 10:44:02.396358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.436 [2024-07-14 10:44:02.404989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.436 [2024-07-14 10:44:02.405009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:11303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.436 [2024-07-14 10:44:02.405017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.694 [2024-07-14 10:44:02.416341] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.694 [2024-07-14 10:44:02.416364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:20184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.694 [2024-07-14 10:44:02.416372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.694 [2024-07-14 10:44:02.424842] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.694 [2024-07-14 10:44:02.424862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:16384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.694 [2024-07-14 10:44:02.424871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.694 [2024-07-14 10:44:02.436076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.694 [2024-07-14 10:44:02.436102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:24122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.694 [2024-07-14 10:44:02.436110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.694 [2024-07-14 10:44:02.448006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.694 [2024-07-14 10:44:02.448027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.694 [2024-07-14 10:44:02.448036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.694 [2024-07-14 10:44:02.456895] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.694 [2024-07-14 10:44:02.456916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:16565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.694 [2024-07-14 10:44:02.456924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.694 [2024-07-14 10:44:02.468162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.694 [2024-07-14 10:44:02.468183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:19154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.694 [2024-07-14 10:44:02.468191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.694 [2024-07-14 10:44:02.477417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.694 [2024-07-14 10:44:02.477437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.694 [2024-07-14 10:44:02.477445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.694 [2024-07-14 10:44:02.488728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.694 [2024-07-14 10:44:02.488749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.694 [2024-07-14 10:44:02.488758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.694 [2024-07-14 10:44:02.496930] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.694 [2024-07-14 10:44:02.496951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:23946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.694 [2024-07-14 10:44:02.496959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.694 [2024-07-14 10:44:02.508839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.694 [2024-07-14 10:44:02.508860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:6125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.694 [2024-07-14 10:44:02.508869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.694 [2024-07-14 10:44:02.517452] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.694 [2024-07-14 10:44:02.517472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:22505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.694 [2024-07-14 10:44:02.517480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.694 [2024-07-14 10:44:02.529108] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.694 [2024-07-14 10:44:02.529131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:25273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.694 [2024-07-14 10:44:02.529139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.694 [2024-07-14 10:44:02.539624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.694 [2024-07-14 10:44:02.539645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:13635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.694 [2024-07-14 10:44:02.539653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.694 [2024-07-14 10:44:02.548676] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.694 [2024-07-14 10:44:02.548698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.694 [2024-07-14 10:44:02.548706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.694 [2024-07-14 10:44:02.559351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.694 [2024-07-14 10:44:02.559373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.694 [2024-07-14 10:44:02.559381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.694 [2024-07-14 10:44:02.568021] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.694 [2024-07-14 10:44:02.568043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.694 [2024-07-14 10:44:02.568053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.694 [2024-07-14 10:44:02.578382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.694 [2024-07-14 10:44:02.578404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:12796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.694 [2024-07-14 10:44:02.578413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:35:17.694 [2024-07-14 10:44:02.586849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.694 [2024-07-14 10:44:02.586871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:12626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.694 [2024-07-14 10:44:02.586879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.694 [2024-07-14 10:44:02.599341] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.694 [2024-07-14 10:44:02.599361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:17907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.694 [2024-07-14 10:44:02.599369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.694 [2024-07-14 10:44:02.607895] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.694 [2024-07-14 10:44:02.607916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:14417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.694 [2024-07-14 10:44:02.607928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.694 [2024-07-14 10:44:02.618492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.694 [2024-07-14 10:44:02.618513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.694 [2024-07-14 10:44:02.618521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.694 [2024-07-14 10:44:02.629267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.694 [2024-07-14 10:44:02.629288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:7418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.694 [2024-07-14 10:44:02.629296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.694 [2024-07-14 10:44:02.638707] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.694 [2024-07-14 10:44:02.638728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:7546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.694 [2024-07-14 10:44:02.638737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.694 [2024-07-14 10:44:02.647933] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.695 [2024-07-14 10:44:02.647954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.695 [2024-07-14 10:44:02.647963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.695 [2024-07-14 10:44:02.657591] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.695 [2024-07-14 10:44:02.657613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:12742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.695 [2024-07-14 10:44:02.657621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.695 [2024-07-14 10:44:02.665865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.695 [2024-07-14 10:44:02.665885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:2473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.695 [2024-07-14 10:44:02.665893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.952 [2024-07-14 10:44:02.675883] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.952 [2024-07-14 10:44:02.675906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:18615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.952 [2024-07-14 10:44:02.675914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.952 [2024-07-14 10:44:02.685780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.952 [2024-07-14 10:44:02.685800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:12018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.952 [2024-07-14 10:44:02.685809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.952 [2024-07-14 10:44:02.694349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.953 [2024-07-14 10:44:02.694373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:6656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.953 [2024-07-14 10:44:02.694382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.953 [2024-07-14 10:44:02.704715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.953 [2024-07-14 10:44:02.704736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:3913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.953 [2024-07-14 10:44:02.704745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.953 [2024-07-14 10:44:02.713285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.953 [2024-07-14 10:44:02.713305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:19372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.953 [2024-07-14 10:44:02.713314] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.953 [2024-07-14 10:44:02.723948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.953 [2024-07-14 10:44:02.723969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:18460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.953 [2024-07-14 10:44:02.723977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.953 [2024-07-14 10:44:02.734163] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.953 [2024-07-14 10:44:02.734184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:11655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.953 [2024-07-14 10:44:02.734192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.953 [2024-07-14 10:44:02.744899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.953 [2024-07-14 10:44:02.744920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.953 [2024-07-14 10:44:02.744929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.953 [2024-07-14 10:44:02.753434] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.953 [2024-07-14 10:44:02.753453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:19343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.953 [2024-07-14 10:44:02.753462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.953 [2024-07-14 10:44:02.763529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.953 [2024-07-14 10:44:02.763549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:8154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.953 [2024-07-14 10:44:02.763558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.953 [2024-07-14 10:44:02.772333] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.953 [2024-07-14 10:44:02.772354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:18278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.953 [2024-07-14 10:44:02.772362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.953 [2024-07-14 10:44:02.783531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.953 [2024-07-14 10:44:02.783551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:20838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:17.953 [2024-07-14 10:44:02.783559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.953 [2024-07-14 10:44:02.795642] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.953 [2024-07-14 10:44:02.795663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:11458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.953 [2024-07-14 10:44:02.795671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.953 [2024-07-14 10:44:02.804053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.953 [2024-07-14 10:44:02.804073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.953 [2024-07-14 10:44:02.804082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.953 [2024-07-14 10:44:02.815360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.953 [2024-07-14 10:44:02.815381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:1241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.953 [2024-07-14 10:44:02.815389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.953 [2024-07-14 10:44:02.825857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17729a0) 00:35:17.953 [2024-07-14 10:44:02.825878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:10275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.953 [2024-07-14 10:44:02.825887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.953 00:35:17.953 Latency(us) 00:35:17.953 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:17.953 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:17.953 nvme0n1 : 2.04 24607.09 96.12 0.00 0.00 5093.56 2550.21 47185.92 00:35:17.953 =================================================================================================================== 00:35:17.953 Total : 24607.09 96.12 0.00 0.00 5093.56 2550.21 47185.92 00:35:17.953 0 00:35:17.953 10:44:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:17.953 10:44:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:17.953 10:44:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:17.953 | .driver_specific 00:35:17.953 | .nvme_error 00:35:17.953 | .status_code 00:35:17.953 | .command_transient_transport_error' 00:35:17.953 10:44:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:18.212 10:44:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 197 > 0 )) 00:35:18.212 10:44:03 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2617349 00:35:18.212 10:44:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2617349 ']' 00:35:18.212 10:44:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2617349 00:35:18.212 10:44:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:35:18.212 10:44:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:18.212 10:44:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2617349 00:35:18.212 10:44:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:35:18.212 10:44:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:35:18.212 10:44:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2617349' 00:35:18.212 killing process with pid 2617349 00:35:18.212 10:44:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2617349 00:35:18.212 Received shutdown signal, test time was about 2.000000 seconds 00:35:18.212 00:35:18.212 Latency(us) 00:35:18.212 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:18.212 =================================================================================================================== 00:35:18.212 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:18.212 10:44:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2617349 00:35:18.470 10:44:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:35:18.470 10:44:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:18.470 10:44:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:35:18.470 10:44:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:35:18.470 10:44:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:35:18.470 10:44:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2617832 00:35:18.470 10:44:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2617832 /var/tmp/bperf.sock 00:35:18.470 10:44:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:35:18.470 10:44:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2617832 ']' 00:35:18.470 10:44:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:18.470 10:44:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:18.470 10:44:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:18.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
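[editor note] At this point in the log the previous bperf instance (pid 2617349) has been killed, after get_transient_errcount reported 197 COMMAND TRANSIENT TRANSPORT ERROR completions for nvme0n1, and a fresh bdevperf is being launched for the next case (randread, 131072-byte I/O, queue depth 16). The error count is read back over the bperf RPC socket with the bdev_get_iostat call and jq filter traced just above. A minimal, hypothetical sketch of that step follows; the RPC name, socket path and jq filter are taken from the trace, while the helper and variable names are assumptions and not the literal digest.sh source.

    #!/usr/bin/env bash
    # Sketch only: approximates the get_transient_errcount step seen in the xtrace.
    BPERF_SOCK=/var/tmp/bperf.sock
    RPC_PY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # path as printed in the trace

    get_transient_errcount() {
        local bdev=$1
        # bdev_get_iostat exposes per-bdev NVMe error counters when the
        # --nvme-error-stat option is enabled; the jq path matches the one in the log.
        "$RPC_PY" -s "$BPERF_SOCK" bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
    }

    errcount=$(get_transient_errcount nvme0n1)
    # Only a non-zero count is required, mirroring the "(( 197 > 0 ))" check in the trace.
    (( errcount > 0 )) && echo "saw $errcount transient transport errors"

The exact number of corrupted digests per two-second window varies from run to run, which is presumably why the assertion is simply "greater than zero" rather than an expected count.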
00:35:18.470 10:44:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:18.470 10:44:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:18.470 [2024-07-14 10:44:03.327664] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:35:18.470 [2024-07-14 10:44:03.327711] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2617832 ] 00:35:18.470 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:18.470 Zero copy mechanism will not be used. 00:35:18.470 EAL: No free 2048 kB hugepages reported on node 1 00:35:18.470 [2024-07-14 10:44:03.394117] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:18.470 [2024-07-14 10:44:03.432909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:18.729 10:44:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:18.729 10:44:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:35:18.729 10:44:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:18.729 10:44:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:18.729 10:44:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:18.729 10:44:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.729 10:44:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:18.988 10:44:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.988 10:44:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:18.988 10:44:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:18.988 nvme0n1 00:35:18.988 10:44:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:35:18.988 10:44:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.988 10:44:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:19.246 10:44:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.246 10:44:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:19.246 10:44:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:19.246 I/O size of 131072 is greater than zero copy threshold (65536). 
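Before perform_tests kicks off, the trace above has already armed the failure mode for this randread/131072/qd16 pass: per-command error statistics and unlimited bdev retries are enabled on the bperf side, crc32c error injection is kept disabled while the controller attaches with data digests (--ddgst) turned on, and only then is the accel layer told to corrupt crc32c results. A condensed sketch of that RPC sequence follows; the commands are verbatim from the trace, while the wrapper definitions are assumptions (in particular, rpc_cmd's expansion is not shown in the log and is sketched here as rpc.py against the suite's default socket).

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    bperf_rpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }   # bdevperf's RPC socket
    rpc_cmd()   { "$SPDK/scripts/rpc.py" "$@"; }                          # assumed: default socket
    bperf_py()  { "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock "$@"; }

    bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1  # count errors, retry forever
    rpc_cmd   accel_error_inject_error -o crc32c -t disable                  # attach without injection
    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
              -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                 # data digest enabled
    rpc_cmd   accel_error_inject_error -o crc32c -t corrupt -i 32            # start corrupting crc32c (-i kept as in the trace)
    bperf_py  perform_tests

Because retries are unlimited, each injected corruption shows up below as a data digest error from nvme_tcp followed by a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, and it is these completions that the transient-error count checked earlier tallies.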
00:35:19.246 Zero copy mechanism will not be used. 00:35:19.246 Running I/O for 2 seconds... 00:35:19.246 [2024-07-14 10:44:04.059747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.246 [2024-07-14 10:44:04.059779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.246 [2024-07-14 10:44:04.059790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.246 [2024-07-14 10:44:04.065886] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.246 [2024-07-14 10:44:04.065912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.246 [2024-07-14 10:44:04.065921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.246 [2024-07-14 10:44:04.072110] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.246 [2024-07-14 10:44:04.072132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.246 [2024-07-14 10:44:04.072141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.246 [2024-07-14 10:44:04.078446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.246 [2024-07-14 10:44:04.078467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.246 [2024-07-14 10:44:04.078475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.246 [2024-07-14 10:44:04.084776] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.246 [2024-07-14 10:44:04.084801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.246 [2024-07-14 10:44:04.084809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.246 [2024-07-14 10:44:04.090780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.246 [2024-07-14 10:44:04.090801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.246 [2024-07-14 10:44:04.090809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.246 [2024-07-14 10:44:04.096910] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.247 [2024-07-14 10:44:04.096932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.247 [2024-07-14 
10:44:04.096939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.247 [2024-07-14 10:44:04.102671] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.247 [2024-07-14 10:44:04.102693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.247 [2024-07-14 10:44:04.102701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.247 [2024-07-14 10:44:04.108451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.247 [2024-07-14 10:44:04.108472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.247 [2024-07-14 10:44:04.108480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.247 [2024-07-14 10:44:04.114473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.247 [2024-07-14 10:44:04.114495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.247 [2024-07-14 10:44:04.114504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.247 [2024-07-14 10:44:04.120212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.247 [2024-07-14 10:44:04.120240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.247 [2024-07-14 10:44:04.120248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.247 [2024-07-14 10:44:04.125859] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.247 [2024-07-14 10:44:04.125879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.247 [2024-07-14 10:44:04.125887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.247 [2024-07-14 10:44:04.131333] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.247 [2024-07-14 10:44:04.131353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.247 [2024-07-14 10:44:04.131361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.247 [2024-07-14 10:44:04.136904] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.247 [2024-07-14 10:44:04.136925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.247 [2024-07-14 10:44:04.136932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.247 [2024-07-14 10:44:04.142206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.247 [2024-07-14 10:44:04.142233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.247 [2024-07-14 10:44:04.142242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.247 [2024-07-14 10:44:04.147704] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.247 [2024-07-14 10:44:04.147725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.247 [2024-07-14 10:44:04.147733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.247 [2024-07-14 10:44:04.152998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.247 [2024-07-14 10:44:04.153018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.247 [2024-07-14 10:44:04.153026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.247 [2024-07-14 10:44:04.158541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.247 [2024-07-14 10:44:04.158562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.247 [2024-07-14 10:44:04.158569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.247 [2024-07-14 10:44:04.164144] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.247 [2024-07-14 10:44:04.164166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.247 [2024-07-14 10:44:04.164174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.247 [2024-07-14 10:44:04.169686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.247 [2024-07-14 10:44:04.169707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.247 [2024-07-14 10:44:04.169714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.247 [2024-07-14 10:44:04.175260] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.247 [2024-07-14 10:44:04.175281] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.247 [2024-07-14 10:44:04.175288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.247 [2024-07-14 10:44:04.180757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.247 [2024-07-14 10:44:04.180782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.247 [2024-07-14 10:44:04.180791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.247 [2024-07-14 10:44:04.186183] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.247 [2024-07-14 10:44:04.186203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.247 [2024-07-14 10:44:04.186211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.247 [2024-07-14 10:44:04.191722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.247 [2024-07-14 10:44:04.191743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.247 [2024-07-14 10:44:04.191751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.247 [2024-07-14 10:44:04.197281] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.247 [2024-07-14 10:44:04.197302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.247 [2024-07-14 10:44:04.197311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.247 [2024-07-14 10:44:04.202802] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.247 [2024-07-14 10:44:04.202822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.247 [2024-07-14 10:44:04.202830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.247 [2024-07-14 10:44:04.208371] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.247 [2024-07-14 10:44:04.208392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.247 [2024-07-14 10:44:04.208400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.247 [2024-07-14 10:44:04.213917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.247 [2024-07-14 
10:44:04.213938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.247 [2024-07-14 10:44:04.213946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.247 [2024-07-14 10:44:04.219306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.247 [2024-07-14 10:44:04.219328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.247 [2024-07-14 10:44:04.219336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.247 [2024-07-14 10:44:04.224773] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.247 [2024-07-14 10:44:04.224794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.247 [2024-07-14 10:44:04.224802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.506 [2024-07-14 10:44:04.230212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.506 [2024-07-14 10:44:04.230240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.506 [2024-07-14 10:44:04.230248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.506 [2024-07-14 10:44:04.235576] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.507 [2024-07-14 10:44:04.235598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.507 [2024-07-14 10:44:04.235607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.507 [2024-07-14 10:44:04.241152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.507 [2024-07-14 10:44:04.241173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.507 [2024-07-14 10:44:04.241182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.507 [2024-07-14 10:44:04.246779] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.507 [2024-07-14 10:44:04.246799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.507 [2024-07-14 10:44:04.246807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.507 [2024-07-14 10:44:04.252517] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1c7e140) 00:35:19.507 [2024-07-14 10:44:04.252539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.507 [2024-07-14 10:44:04.252547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.507 [2024-07-14 10:44:04.257750] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.507 [2024-07-14 10:44:04.257772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.507 [2024-07-14 10:44:04.257781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.507 [2024-07-14 10:44:04.263826] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.507 [2024-07-14 10:44:04.263849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.507 [2024-07-14 10:44:04.263857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.507 [2024-07-14 10:44:04.271030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.507 [2024-07-14 10:44:04.271054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.507 [2024-07-14 10:44:04.271063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.507 [2024-07-14 10:44:04.278915] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.507 [2024-07-14 10:44:04.278939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.507 [2024-07-14 10:44:04.278952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.507 [2024-07-14 10:44:04.287320] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.507 [2024-07-14 10:44:04.287344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.507 [2024-07-14 10:44:04.287352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.507 [2024-07-14 10:44:04.294980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.507 [2024-07-14 10:44:04.295004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.507 [2024-07-14 10:44:04.295012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.507 [2024-07-14 10:44:04.301321] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.507 [2024-07-14 10:44:04.301343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.507 [2024-07-14 10:44:04.301351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.507 [2024-07-14 10:44:04.307467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.507 [2024-07-14 10:44:04.307489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.507 [2024-07-14 10:44:04.307497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.507 [2024-07-14 10:44:04.313522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.507 [2024-07-14 10:44:04.313545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.507 [2024-07-14 10:44:04.313553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.507 [2024-07-14 10:44:04.319467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.507 [2024-07-14 10:44:04.319489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.507 [2024-07-14 10:44:04.319498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.507 [2024-07-14 10:44:04.325612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.507 [2024-07-14 10:44:04.325635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.507 [2024-07-14 10:44:04.325643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.507 [2024-07-14 10:44:04.331312] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.507 [2024-07-14 10:44:04.331334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.507 [2024-07-14 10:44:04.331342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.507 [2024-07-14 10:44:04.337467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.507 [2024-07-14 10:44:04.337494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.507 [2024-07-14 10:44:04.337503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:35:19.507 [2024-07-14 10:44:04.343370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.507 [2024-07-14 10:44:04.343393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.507 [2024-07-14 10:44:04.343401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.507 [2024-07-14 10:44:04.349354] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.507 [2024-07-14 10:44:04.349376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.507 [2024-07-14 10:44:04.349385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.507 [2024-07-14 10:44:04.355431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.507 [2024-07-14 10:44:04.355453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.507 [2024-07-14 10:44:04.355462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.507 [2024-07-14 10:44:04.361250] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.507 [2024-07-14 10:44:04.361272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.507 [2024-07-14 10:44:04.361280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.507 [2024-07-14 10:44:04.367040] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.507 [2024-07-14 10:44:04.367062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.507 [2024-07-14 10:44:04.367070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.507 [2024-07-14 10:44:04.373174] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.507 [2024-07-14 10:44:04.373196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.507 [2024-07-14 10:44:04.373204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.507 [2024-07-14 10:44:04.379633] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.507 [2024-07-14 10:44:04.379655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.507 [2024-07-14 10:44:04.379663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.507 [2024-07-14 10:44:04.385499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.507 [2024-07-14 10:44:04.385522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.507 [2024-07-14 10:44:04.385530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.507 [2024-07-14 10:44:04.391526] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.507 [2024-07-14 10:44:04.391550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.507 [2024-07-14 10:44:04.391559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.507 [2024-07-14 10:44:04.397632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.507 [2024-07-14 10:44:04.397656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.507 [2024-07-14 10:44:04.397665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.507 [2024-07-14 10:44:04.403305] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.507 [2024-07-14 10:44:04.403328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.507 [2024-07-14 10:44:04.403336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.507 [2024-07-14 10:44:04.409286] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.507 [2024-07-14 10:44:04.409308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.507 [2024-07-14 10:44:04.409317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.507 [2024-07-14 10:44:04.415208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.508 [2024-07-14 10:44:04.415239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.508 [2024-07-14 10:44:04.415248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.508 [2024-07-14 10:44:04.421090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.508 [2024-07-14 10:44:04.421114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.508 [2024-07-14 10:44:04.421124] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.508 [2024-07-14 10:44:04.426974] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.508 [2024-07-14 10:44:04.426997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.508 [2024-07-14 10:44:04.427005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.508 [2024-07-14 10:44:04.432962] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.508 [2024-07-14 10:44:04.432986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.508 [2024-07-14 10:44:04.432994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.508 [2024-07-14 10:44:04.438746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.508 [2024-07-14 10:44:04.438768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.508 [2024-07-14 10:44:04.438779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.508 [2024-07-14 10:44:04.444683] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.508 [2024-07-14 10:44:04.444705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.508 [2024-07-14 10:44:04.444714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.508 [2024-07-14 10:44:04.450673] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.508 [2024-07-14 10:44:04.450694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.508 [2024-07-14 10:44:04.450702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.508 [2024-07-14 10:44:04.456682] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.508 [2024-07-14 10:44:04.456704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.508 [2024-07-14 10:44:04.456712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.508 [2024-07-14 10:44:04.461734] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.508 [2024-07-14 10:44:04.461756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.508 [2024-07-14 10:44:04.461764] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.508 [2024-07-14 10:44:04.465525] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.508 [2024-07-14 10:44:04.465546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.508 [2024-07-14 10:44:04.465554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.508 [2024-07-14 10:44:04.471347] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.508 [2024-07-14 10:44:04.471367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.508 [2024-07-14 10:44:04.471375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.508 [2024-07-14 10:44:04.477031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.508 [2024-07-14 10:44:04.477051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.508 [2024-07-14 10:44:04.477059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.508 [2024-07-14 10:44:04.483411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.508 [2024-07-14 10:44:04.483431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.508 [2024-07-14 10:44:04.483439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.766 [2024-07-14 10:44:04.489520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.766 [2024-07-14 10:44:04.489542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.766 [2024-07-14 10:44:04.489551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.766 [2024-07-14 10:44:04.495240] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.766 [2024-07-14 10:44:04.495278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.766 [2024-07-14 10:44:04.495286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.766 [2024-07-14 10:44:04.501935] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.766 [2024-07-14 10:44:04.501956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:19.766 [2024-07-14 10:44:04.501964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.766 [2024-07-14 10:44:04.507205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.766 [2024-07-14 10:44:04.507231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.766 [2024-07-14 10:44:04.507240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.766 [2024-07-14 10:44:04.513675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.766 [2024-07-14 10:44:04.513695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.766 [2024-07-14 10:44:04.513703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.766 [2024-07-14 10:44:04.519073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.766 [2024-07-14 10:44:04.519093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.766 [2024-07-14 10:44:04.519101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.766 [2024-07-14 10:44:04.525071] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.766 [2024-07-14 10:44:04.525091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.766 [2024-07-14 10:44:04.525099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.766 [2024-07-14 10:44:04.530891] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.766 [2024-07-14 10:44:04.530910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.766 [2024-07-14 10:44:04.530918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.766 [2024-07-14 10:44:04.536167] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.766 [2024-07-14 10:44:04.536189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.766 [2024-07-14 10:44:04.536201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.766 [2024-07-14 10:44:04.542441] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.766 [2024-07-14 10:44:04.542464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1152 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.766 [2024-07-14 10:44:04.542473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.766 [2024-07-14 10:44:04.548755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.766 [2024-07-14 10:44:04.548779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.766 [2024-07-14 10:44:04.548787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.767 [2024-07-14 10:44:04.556010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.767 [2024-07-14 10:44:04.556033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.767 [2024-07-14 10:44:04.556041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.767 [2024-07-14 10:44:04.563165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.767 [2024-07-14 10:44:04.563188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.767 [2024-07-14 10:44:04.563196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.767 [2024-07-14 10:44:04.571151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.767 [2024-07-14 10:44:04.571174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.767 [2024-07-14 10:44:04.571182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.767 [2024-07-14 10:44:04.579252] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.767 [2024-07-14 10:44:04.579275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.767 [2024-07-14 10:44:04.579283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.767 [2024-07-14 10:44:04.587621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.767 [2024-07-14 10:44:04.587644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.767 [2024-07-14 10:44:04.587653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.767 [2024-07-14 10:44:04.595973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.767 [2024-07-14 10:44:04.595995] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.767 [2024-07-14 10:44:04.596003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.767 [2024-07-14 10:44:04.605066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.767 [2024-07-14 10:44:04.605092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.767 [2024-07-14 10:44:04.605100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.767 [2024-07-14 10:44:04.613345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.767 [2024-07-14 10:44:04.613369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.767 [2024-07-14 10:44:04.613377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.767 [2024-07-14 10:44:04.621834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.767 [2024-07-14 10:44:04.621856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.767 [2024-07-14 10:44:04.621865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.767 [2024-07-14 10:44:04.629931] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.767 [2024-07-14 10:44:04.629954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.767 [2024-07-14 10:44:04.629962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.767 [2024-07-14 10:44:04.638384] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.767 [2024-07-14 10:44:04.638407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.767 [2024-07-14 10:44:04.638416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.767 [2024-07-14 10:44:04.646922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.767 [2024-07-14 10:44:04.646945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.767 [2024-07-14 10:44:04.646954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.767 [2024-07-14 10:44:04.655267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.767 [2024-07-14 10:44:04.655290] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.767 [2024-07-14 10:44:04.655299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.767 [2024-07-14 10:44:04.664324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.767 [2024-07-14 10:44:04.664346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.767 [2024-07-14 10:44:04.664355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.767 [2024-07-14 10:44:04.672059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.767 [2024-07-14 10:44:04.672083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.767 [2024-07-14 10:44:04.672092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.767 [2024-07-14 10:44:04.678797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.767 [2024-07-14 10:44:04.678820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.767 [2024-07-14 10:44:04.678828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.767 [2024-07-14 10:44:04.685823] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.767 [2024-07-14 10:44:04.685845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.767 [2024-07-14 10:44:04.685853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.767 [2024-07-14 10:44:04.692255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.767 [2024-07-14 10:44:04.692276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.767 [2024-07-14 10:44:04.692285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.767 [2024-07-14 10:44:04.698358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.767 [2024-07-14 10:44:04.698380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.767 [2024-07-14 10:44:04.698388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.767 [2024-07-14 10:44:04.704476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 
00:35:19.767 [2024-07-14 10:44:04.704498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.767 [2024-07-14 10:44:04.704507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.767 [2024-07-14 10:44:04.710506] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.767 [2024-07-14 10:44:04.710528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.767 [2024-07-14 10:44:04.710536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.767 [2024-07-14 10:44:04.716613] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.767 [2024-07-14 10:44:04.716636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.767 [2024-07-14 10:44:04.716644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.767 [2024-07-14 10:44:04.722291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.767 [2024-07-14 10:44:04.722313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.767 [2024-07-14 10:44:04.722321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.767 [2024-07-14 10:44:04.727592] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.767 [2024-07-14 10:44:04.727614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.767 [2024-07-14 10:44:04.727625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.767 [2024-07-14 10:44:04.733068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.767 [2024-07-14 10:44:04.733090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.767 [2024-07-14 10:44:04.733098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.767 [2024-07-14 10:44:04.738602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.767 [2024-07-14 10:44:04.738624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.767 [2024-07-14 10:44:04.738632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.767 [2024-07-14 10:44:04.744004] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:19.767 [2024-07-14 10:44:04.744026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.767 [2024-07-14 10:44:04.744035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.026 [2024-07-14 10:44:04.749487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.026 [2024-07-14 10:44:04.749509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.026 [2024-07-14 10:44:04.749517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.026 [2024-07-14 10:44:04.754858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.026 [2024-07-14 10:44:04.754880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.026 [2024-07-14 10:44:04.754888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.026 [2024-07-14 10:44:04.760346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.026 [2024-07-14 10:44:04.760368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.026 [2024-07-14 10:44:04.760377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.026 [2024-07-14 10:44:04.765790] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.026 [2024-07-14 10:44:04.765812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.026 [2024-07-14 10:44:04.765820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.026 [2024-07-14 10:44:04.771318] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.026 [2024-07-14 10:44:04.771340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.026 [2024-07-14 10:44:04.771348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.026 [2024-07-14 10:44:04.776852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.026 [2024-07-14 10:44:04.776877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.026 [2024-07-14 10:44:04.776884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.026 [2024-07-14 10:44:04.782167] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.026 [2024-07-14 10:44:04.782189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.026 [2024-07-14 10:44:04.782197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.026 [2024-07-14 10:44:04.787542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.026 [2024-07-14 10:44:04.787563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.026 [2024-07-14 10:44:04.787571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.026 [2024-07-14 10:44:04.792923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.026 [2024-07-14 10:44:04.792944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.026 [2024-07-14 10:44:04.792952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.026 [2024-07-14 10:44:04.795897] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.026 [2024-07-14 10:44:04.795918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.026 [2024-07-14 10:44:04.795926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.026 [2024-07-14 10:44:04.801397] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.026 [2024-07-14 10:44:04.801419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.026 [2024-07-14 10:44:04.801427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.026 [2024-07-14 10:44:04.806935] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.026 [2024-07-14 10:44:04.806956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.026 [2024-07-14 10:44:04.806964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.026 [2024-07-14 10:44:04.812209] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.026 [2024-07-14 10:44:04.812236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.026 [2024-07-14 10:44:04.812245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:35:20.026 [2024-07-14 10:44:04.817746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.026 [2024-07-14 10:44:04.817767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.026 [2024-07-14 10:44:04.817775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.026 [2024-07-14 10:44:04.823493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.026 [2024-07-14 10:44:04.823515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.026 [2024-07-14 10:44:04.823523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.026 [2024-07-14 10:44:04.829547] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.026 [2024-07-14 10:44:04.829569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.026 [2024-07-14 10:44:04.829578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.026 [2024-07-14 10:44:04.836409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.026 [2024-07-14 10:44:04.836431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.026 [2024-07-14 10:44:04.836440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.026 [2024-07-14 10:44:04.842242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.026 [2024-07-14 10:44:04.842265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.026 [2024-07-14 10:44:04.842274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.026 [2024-07-14 10:44:04.848053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.026 [2024-07-14 10:44:04.848074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.026 [2024-07-14 10:44:04.848082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.026 [2024-07-14 10:44:04.851331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.026 [2024-07-14 10:44:04.851352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.026 [2024-07-14 10:44:04.851360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.026 [2024-07-14 10:44:04.856893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.026 [2024-07-14 10:44:04.856915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.026 [2024-07-14 10:44:04.856923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.026 [2024-07-14 10:44:04.862080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.026 [2024-07-14 10:44:04.862102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.026 [2024-07-14 10:44:04.862110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.026 [2024-07-14 10:44:04.867880] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.026 [2024-07-14 10:44:04.867902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.026 [2024-07-14 10:44:04.867914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.026 [2024-07-14 10:44:04.874102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.026 [2024-07-14 10:44:04.874123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.026 [2024-07-14 10:44:04.874131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.026 [2024-07-14 10:44:04.879820] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.026 [2024-07-14 10:44:04.879843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.026 [2024-07-14 10:44:04.879851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.026 [2024-07-14 10:44:04.885789] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.026 [2024-07-14 10:44:04.885811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.026 [2024-07-14 10:44:04.885819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.026 [2024-07-14 10:44:04.891734] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.026 [2024-07-14 10:44:04.891757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.026 [2024-07-14 10:44:04.891765] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.026 [2024-07-14 10:44:04.897570] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.026 [2024-07-14 10:44:04.897592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.026 [2024-07-14 10:44:04.897600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.026 [2024-07-14 10:44:04.903517] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.026 [2024-07-14 10:44:04.903539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.026 [2024-07-14 10:44:04.903548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.026 [2024-07-14 10:44:04.909409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.026 [2024-07-14 10:44:04.909432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.026 [2024-07-14 10:44:04.909440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.026 [2024-07-14 10:44:04.915178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.027 [2024-07-14 10:44:04.915201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.027 [2024-07-14 10:44:04.915209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.027 [2024-07-14 10:44:04.920249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.027 [2024-07-14 10:44:04.920286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.027 [2024-07-14 10:44:04.920294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.027 [2024-07-14 10:44:04.923586] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.027 [2024-07-14 10:44:04.923606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.027 [2024-07-14 10:44:04.923614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.027 [2024-07-14 10:44:04.929208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.027 [2024-07-14 10:44:04.929235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:20.027 [2024-07-14 10:44:04.929243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.027 [2024-07-14 10:44:04.934360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.027 [2024-07-14 10:44:04.934382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.027 [2024-07-14 10:44:04.934390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.027 [2024-07-14 10:44:04.939876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.027 [2024-07-14 10:44:04.939898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.027 [2024-07-14 10:44:04.939906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.027 [2024-07-14 10:44:04.944870] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.027 [2024-07-14 10:44:04.944893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.027 [2024-07-14 10:44:04.944900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.027 [2024-07-14 10:44:04.950138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.027 [2024-07-14 10:44:04.950159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.027 [2024-07-14 10:44:04.950166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.027 [2024-07-14 10:44:04.955237] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.027 [2024-07-14 10:44:04.955258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.027 [2024-07-14 10:44:04.955266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.027 [2024-07-14 10:44:04.960432] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.027 [2024-07-14 10:44:04.960455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.027 [2024-07-14 10:44:04.960467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.027 [2024-07-14 10:44:04.965376] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.027 [2024-07-14 10:44:04.965398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 
lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.027 [2024-07-14 10:44:04.965406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.027 [2024-07-14 10:44:04.970627] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.027 [2024-07-14 10:44:04.970649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.027 [2024-07-14 10:44:04.970657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.027 [2024-07-14 10:44:04.975862] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.027 [2024-07-14 10:44:04.975885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.027 [2024-07-14 10:44:04.975893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.027 [2024-07-14 10:44:04.980394] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.027 [2024-07-14 10:44:04.980414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.027 [2024-07-14 10:44:04.980422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.027 [2024-07-14 10:44:04.983533] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.027 [2024-07-14 10:44:04.983554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.027 [2024-07-14 10:44:04.983562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.027 [2024-07-14 10:44:04.988662] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.027 [2024-07-14 10:44:04.988683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.027 [2024-07-14 10:44:04.988691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.027 [2024-07-14 10:44:04.993850] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.027 [2024-07-14 10:44:04.993871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.027 [2024-07-14 10:44:04.993879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.027 [2024-07-14 10:44:04.998977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.027 [2024-07-14 10:44:04.998997] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.027 [2024-07-14 10:44:04.999005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.027 [2024-07-14 10:44:05.004275] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.027 [2024-07-14 10:44:05.004300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.027 [2024-07-14 10:44:05.004308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.285 [2024-07-14 10:44:05.009570] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.285 [2024-07-14 10:44:05.009591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.285 [2024-07-14 10:44:05.009599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.285 [2024-07-14 10:44:05.014835] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.285 [2024-07-14 10:44:05.014856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.285 [2024-07-14 10:44:05.014864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.285 [2024-07-14 10:44:05.020115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.285 [2024-07-14 10:44:05.020136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.285 [2024-07-14 10:44:05.020144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.285 [2024-07-14 10:44:05.025350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.285 [2024-07-14 10:44:05.025370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.286 [2024-07-14 10:44:05.025378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.286 [2024-07-14 10:44:05.030763] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.286 [2024-07-14 10:44:05.030783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.286 [2024-07-14 10:44:05.030792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.286 [2024-07-14 10:44:05.036074] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.286 
[2024-07-14 10:44:05.036095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.286 [2024-07-14 10:44:05.036103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.286 [2024-07-14 10:44:05.041355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.286 [2024-07-14 10:44:05.041375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.286 [2024-07-14 10:44:05.041383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.286 [2024-07-14 10:44:05.046798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.286 [2024-07-14 10:44:05.046819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.286 [2024-07-14 10:44:05.046827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.286 [2024-07-14 10:44:05.052212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.286 [2024-07-14 10:44:05.052237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.286 [2024-07-14 10:44:05.052246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.286 [2024-07-14 10:44:05.057620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.286 [2024-07-14 10:44:05.057641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.286 [2024-07-14 10:44:05.057649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.286 [2024-07-14 10:44:05.062960] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.286 [2024-07-14 10:44:05.062980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.286 [2024-07-14 10:44:05.062987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.286 [2024-07-14 10:44:05.068309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.286 [2024-07-14 10:44:05.068330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.286 [2024-07-14 10:44:05.068338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.286 [2024-07-14 10:44:05.073619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1c7e140) 00:35:20.286 [2024-07-14 10:44:05.073639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.286 [2024-07-14 10:44:05.073647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.286 [2024-07-14 10:44:05.078930] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.286 [2024-07-14 10:44:05.078950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.286 [2024-07-14 10:44:05.078958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.286 [2024-07-14 10:44:05.084213] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.286 [2024-07-14 10:44:05.084238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.286 [2024-07-14 10:44:05.084246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.286 [2024-07-14 10:44:05.089613] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.286 [2024-07-14 10:44:05.089632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.286 [2024-07-14 10:44:05.089640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.286 [2024-07-14 10:44:05.095101] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.286 [2024-07-14 10:44:05.095121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.286 [2024-07-14 10:44:05.095132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.286 [2024-07-14 10:44:05.100496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.286 [2024-07-14 10:44:05.100517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.286 [2024-07-14 10:44:05.100524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.286 [2024-07-14 10:44:05.105811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.286 [2024-07-14 10:44:05.105832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.286 [2024-07-14 10:44:05.105840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.286 [2024-07-14 10:44:05.111222] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.286 [2024-07-14 10:44:05.111248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.286 [2024-07-14 10:44:05.111256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.286 [2024-07-14 10:44:05.116576] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.286 [2024-07-14 10:44:05.116597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.286 [2024-07-14 10:44:05.116605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.286 [2024-07-14 10:44:05.121893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.286 [2024-07-14 10:44:05.121913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.286 [2024-07-14 10:44:05.121922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.286 [2024-07-14 10:44:05.127233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.286 [2024-07-14 10:44:05.127252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.286 [2024-07-14 10:44:05.127261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.286 [2024-07-14 10:44:05.132594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.286 [2024-07-14 10:44:05.132614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.286 [2024-07-14 10:44:05.132621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.286 [2024-07-14 10:44:05.137890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.286 [2024-07-14 10:44:05.137910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.286 [2024-07-14 10:44:05.137918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.286 [2024-07-14 10:44:05.143202] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.286 [2024-07-14 10:44:05.143230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.286 [2024-07-14 10:44:05.143239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:35:20.286 [2024-07-14 10:44:05.148681] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.286 [2024-07-14 10:44:05.148701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.286 [2024-07-14 10:44:05.148708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.286 [2024-07-14 10:44:05.154158] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.286 [2024-07-14 10:44:05.154178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.286 [2024-07-14 10:44:05.154186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.286 [2024-07-14 10:44:05.159773] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.286 [2024-07-14 10:44:05.159792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.286 [2024-07-14 10:44:05.159800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.286 [2024-07-14 10:44:05.165308] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.286 [2024-07-14 10:44:05.165328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.286 [2024-07-14 10:44:05.165336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.286 [2024-07-14 10:44:05.170687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.286 [2024-07-14 10:44:05.170708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.286 [2024-07-14 10:44:05.170715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.286 [2024-07-14 10:44:05.176087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.286 [2024-07-14 10:44:05.176107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.286 [2024-07-14 10:44:05.176115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.286 [2024-07-14 10:44:05.181473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.286 [2024-07-14 10:44:05.181494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.287 [2024-07-14 10:44:05.181502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.287 [2024-07-14 10:44:05.186878] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.287 [2024-07-14 10:44:05.186898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.287 [2024-07-14 10:44:05.186905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.287 [2024-07-14 10:44:05.192323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.287 [2024-07-14 10:44:05.192344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.287 [2024-07-14 10:44:05.192351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.287 [2024-07-14 10:44:05.197712] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.287 [2024-07-14 10:44:05.197732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.287 [2024-07-14 10:44:05.197740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.287 [2024-07-14 10:44:05.203167] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.287 [2024-07-14 10:44:05.203188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.287 [2024-07-14 10:44:05.203196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.287 [2024-07-14 10:44:05.208440] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.287 [2024-07-14 10:44:05.208460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.287 [2024-07-14 10:44:05.208468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.287 [2024-07-14 10:44:05.213727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.287 [2024-07-14 10:44:05.213747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.287 [2024-07-14 10:44:05.213756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.287 [2024-07-14 10:44:05.219003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.287 [2024-07-14 10:44:05.219024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.287 [2024-07-14 10:44:05.219031] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.287 [2024-07-14 10:44:05.224347] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.287 [2024-07-14 10:44:05.224367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.287 [2024-07-14 10:44:05.224375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.287 [2024-07-14 10:44:05.229844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.287 [2024-07-14 10:44:05.229865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.287 [2024-07-14 10:44:05.229873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.287 [2024-07-14 10:44:05.235301] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.287 [2024-07-14 10:44:05.235322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.287 [2024-07-14 10:44:05.235334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.287 [2024-07-14 10:44:05.240728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.287 [2024-07-14 10:44:05.240748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.287 [2024-07-14 10:44:05.240756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.287 [2024-07-14 10:44:05.246041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.287 [2024-07-14 10:44:05.246062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.287 [2024-07-14 10:44:05.246070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.287 [2024-07-14 10:44:05.251345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.287 [2024-07-14 10:44:05.251366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.287 [2024-07-14 10:44:05.251374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.287 [2024-07-14 10:44:05.256593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.287 [2024-07-14 10:44:05.256614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.287 [2024-07-14 
10:44:05.256622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.287 [2024-07-14 10:44:05.261850] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.287 [2024-07-14 10:44:05.261871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.287 [2024-07-14 10:44:05.261879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.546 [2024-07-14 10:44:05.267221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.546 [2024-07-14 10:44:05.267249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.546 [2024-07-14 10:44:05.267257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.546 [2024-07-14 10:44:05.272588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.546 [2024-07-14 10:44:05.272609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.546 [2024-07-14 10:44:05.272618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.546 [2024-07-14 10:44:05.278151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.546 [2024-07-14 10:44:05.278171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.546 [2024-07-14 10:44:05.278179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.546 [2024-07-14 10:44:05.283729] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.546 [2024-07-14 10:44:05.283750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.546 [2024-07-14 10:44:05.283758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.546 [2024-07-14 10:44:05.289393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.546 [2024-07-14 10:44:05.289413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.546 [2024-07-14 10:44:05.289421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.546 [2024-07-14 10:44:05.294982] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.546 [2024-07-14 10:44:05.295003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21760 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.546 [2024-07-14 10:44:05.295010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.546 [2024-07-14 10:44:05.300370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.546 [2024-07-14 10:44:05.300390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.546 [2024-07-14 10:44:05.300398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.546 [2024-07-14 10:44:05.305808] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.546 [2024-07-14 10:44:05.305828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.546 [2024-07-14 10:44:05.305836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.546 [2024-07-14 10:44:05.311221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.546 [2024-07-14 10:44:05.311249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.546 [2024-07-14 10:44:05.311257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.546 [2024-07-14 10:44:05.316681] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.546 [2024-07-14 10:44:05.316701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.546 [2024-07-14 10:44:05.316709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.546 [2024-07-14 10:44:05.322090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.546 [2024-07-14 10:44:05.322110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.546 [2024-07-14 10:44:05.322118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.546 [2024-07-14 10:44:05.327469] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.546 [2024-07-14 10:44:05.327489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.546 [2024-07-14 10:44:05.327500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.546 [2024-07-14 10:44:05.332725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.546 [2024-07-14 10:44:05.332745] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.546 [2024-07-14 10:44:05.332753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.546 [2024-07-14 10:44:05.338063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.546 [2024-07-14 10:44:05.338084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.546 [2024-07-14 10:44:05.338092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.546 [2024-07-14 10:44:05.343491] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.546 [2024-07-14 10:44:05.343511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.546 [2024-07-14 10:44:05.343519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.546 [2024-07-14 10:44:05.348870] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.546 [2024-07-14 10:44:05.348890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.546 [2024-07-14 10:44:05.348898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.546 [2024-07-14 10:44:05.354253] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.546 [2024-07-14 10:44:05.354274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.546 [2024-07-14 10:44:05.354282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.546 [2024-07-14 10:44:05.359662] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.546 [2024-07-14 10:44:05.359682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.546 [2024-07-14 10:44:05.359690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.546 [2024-07-14 10:44:05.365138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.546 [2024-07-14 10:44:05.365159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.546 [2024-07-14 10:44:05.365167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.546 [2024-07-14 10:44:05.370487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.546 [2024-07-14 
10:44:05.370508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.546 [2024-07-14 10:44:05.370516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.546 [2024-07-14 10:44:05.375798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.546 [2024-07-14 10:44:05.375826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.546 [2024-07-14 10:44:05.375834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.546 [2024-07-14 10:44:05.381193] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.546 [2024-07-14 10:44:05.381214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.546 [2024-07-14 10:44:05.381222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.546 [2024-07-14 10:44:05.386514] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.546 [2024-07-14 10:44:05.386534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.546 [2024-07-14 10:44:05.386542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.546 [2024-07-14 10:44:05.391968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.546 [2024-07-14 10:44:05.391989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.546 [2024-07-14 10:44:05.391996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.546 [2024-07-14 10:44:05.397386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.546 [2024-07-14 10:44:05.397406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.546 [2024-07-14 10:44:05.397414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.546 [2024-07-14 10:44:05.402794] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.546 [2024-07-14 10:44:05.402815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.546 [2024-07-14 10:44:05.402822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.546 [2024-07-14 10:44:05.408135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1c7e140) 00:35:20.546 [2024-07-14 10:44:05.408155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.546 [2024-07-14 10:44:05.408163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.546 [2024-07-14 10:44:05.413401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.546 [2024-07-14 10:44:05.413421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.546 [2024-07-14 10:44:05.413429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.546 [2024-07-14 10:44:05.418726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.546 [2024-07-14 10:44:05.418746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.546 [2024-07-14 10:44:05.418754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.546 [2024-07-14 10:44:05.424070] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.546 [2024-07-14 10:44:05.424091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.546 [2024-07-14 10:44:05.424099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.546 [2024-07-14 10:44:05.429538] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.546 [2024-07-14 10:44:05.429560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.546 [2024-07-14 10:44:05.429571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.546 [2024-07-14 10:44:05.435088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.546 [2024-07-14 10:44:05.435111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.546 [2024-07-14 10:44:05.435119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.546 [2024-07-14 10:44:05.440494] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.546 [2024-07-14 10:44:05.440518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.546 [2024-07-14 10:44:05.440526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.546 [2024-07-14 10:44:05.445975] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.546 [2024-07-14 10:44:05.445997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.546 [2024-07-14 10:44:05.446005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.546 [2024-07-14 10:44:05.451288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.546 [2024-07-14 10:44:05.451310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.546 [2024-07-14 10:44:05.451319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.546 [2024-07-14 10:44:05.456581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.546 [2024-07-14 10:44:05.456602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.546 [2024-07-14 10:44:05.456610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.546 [2024-07-14 10:44:05.461950] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.546 [2024-07-14 10:44:05.461971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.546 [2024-07-14 10:44:05.461979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.546 [2024-07-14 10:44:05.467284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.546 [2024-07-14 10:44:05.467309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.546 [2024-07-14 10:44:05.467317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.546 [2024-07-14 10:44:05.472718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.546 [2024-07-14 10:44:05.472741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.546 [2024-07-14 10:44:05.472749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.546 [2024-07-14 10:44:05.478173] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.546 [2024-07-14 10:44:05.478194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.546 [2024-07-14 10:44:05.478202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:35:20.546 [2024-07-14 10:44:05.483604] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.546 [2024-07-14 10:44:05.483625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.546 [2024-07-14 10:44:05.483633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.546 [2024-07-14 10:44:05.489126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.546 [2024-07-14 10:44:05.489148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.546 [2024-07-14 10:44:05.489156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.546 [2024-07-14 10:44:05.494533] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.546 [2024-07-14 10:44:05.494553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.546 [2024-07-14 10:44:05.494562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.546 [2024-07-14 10:44:05.499930] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.546 [2024-07-14 10:44:05.499952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.546 [2024-07-14 10:44:05.499959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.546 [2024-07-14 10:44:05.505479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.546 [2024-07-14 10:44:05.505500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.546 [2024-07-14 10:44:05.505508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.546 [2024-07-14 10:44:05.511156] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.546 [2024-07-14 10:44:05.511176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.546 [2024-07-14 10:44:05.511185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.546 [2024-07-14 10:44:05.516923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.546 [2024-07-14 10:44:05.516944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.546 [2024-07-14 10:44:05.516952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.546 [2024-07-14 10:44:05.522535] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.546 [2024-07-14 10:44:05.522556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.546 [2024-07-14 10:44:05.522565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.805 [2024-07-14 10:44:05.528209] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.805 [2024-07-14 10:44:05.528238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.805 [2024-07-14 10:44:05.528247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.805 [2024-07-14 10:44:05.533911] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.805 [2024-07-14 10:44:05.533932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.805 [2024-07-14 10:44:05.533940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.805 [2024-07-14 10:44:05.539498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.805 [2024-07-14 10:44:05.539518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.806 [2024-07-14 10:44:05.539526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.806 [2024-07-14 10:44:05.545014] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.806 [2024-07-14 10:44:05.545035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.806 [2024-07-14 10:44:05.545043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.806 [2024-07-14 10:44:05.550505] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.806 [2024-07-14 10:44:05.550527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.806 [2024-07-14 10:44:05.550535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.806 [2024-07-14 10:44:05.556477] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.806 [2024-07-14 10:44:05.556497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.806 [2024-07-14 10:44:05.556506] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.806 [2024-07-14 10:44:05.563336] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.806 [2024-07-14 10:44:05.563358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.806 [2024-07-14 10:44:05.563370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.806 [2024-07-14 10:44:05.570730] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.806 [2024-07-14 10:44:05.570751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.806 [2024-07-14 10:44:05.570759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.806 [2024-07-14 10:44:05.578212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.806 [2024-07-14 10:44:05.578243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.806 [2024-07-14 10:44:05.578252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.806 [2024-07-14 10:44:05.587084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.806 [2024-07-14 10:44:05.587105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.806 [2024-07-14 10:44:05.587114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.806 [2024-07-14 10:44:05.594449] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.806 [2024-07-14 10:44:05.594471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.806 [2024-07-14 10:44:05.594480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.806 [2024-07-14 10:44:05.602453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.806 [2024-07-14 10:44:05.602476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.806 [2024-07-14 10:44:05.602484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.806 [2024-07-14 10:44:05.610738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.806 [2024-07-14 10:44:05.610761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:20.806 [2024-07-14 10:44:05.610770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.806 [2024-07-14 10:44:05.619190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.806 [2024-07-14 10:44:05.619212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.806 [2024-07-14 10:44:05.619220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.806 [2024-07-14 10:44:05.627739] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.806 [2024-07-14 10:44:05.627761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.806 [2024-07-14 10:44:05.627770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.806 [2024-07-14 10:44:05.636646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.806 [2024-07-14 10:44:05.636673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.806 [2024-07-14 10:44:05.636682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.806 [2024-07-14 10:44:05.645298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.806 [2024-07-14 10:44:05.645322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.806 [2024-07-14 10:44:05.645330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.806 [2024-07-14 10:44:05.654066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.806 [2024-07-14 10:44:05.654088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.806 [2024-07-14 10:44:05.654096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.806 [2024-07-14 10:44:05.662358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.806 [2024-07-14 10:44:05.662380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.806 [2024-07-14 10:44:05.662388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.806 [2024-07-14 10:44:05.670411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.806 [2024-07-14 10:44:05.670435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 
lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.806 [2024-07-14 10:44:05.670444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.806 [2024-07-14 10:44:05.677939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.806 [2024-07-14 10:44:05.677962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.806 [2024-07-14 10:44:05.677971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.806 [2024-07-14 10:44:05.685344] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.806 [2024-07-14 10:44:05.685367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.806 [2024-07-14 10:44:05.685375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.806 [2024-07-14 10:44:05.693128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.806 [2024-07-14 10:44:05.693152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.806 [2024-07-14 10:44:05.693161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.806 [2024-07-14 10:44:05.699995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.806 [2024-07-14 10:44:05.700018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.806 [2024-07-14 10:44:05.700029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.806 [2024-07-14 10:44:05.708390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.806 [2024-07-14 10:44:05.708412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.806 [2024-07-14 10:44:05.708420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.806 [2024-07-14 10:44:05.716286] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.806 [2024-07-14 10:44:05.716309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.806 [2024-07-14 10:44:05.716317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.806 [2024-07-14 10:44:05.723339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.806 [2024-07-14 10:44:05.723360] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.806 [2024-07-14 10:44:05.723368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.806 [2024-07-14 10:44:05.729962] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.806 [2024-07-14 10:44:05.729983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.806 [2024-07-14 10:44:05.729992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.806 [2024-07-14 10:44:05.737562] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.806 [2024-07-14 10:44:05.737585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.806 [2024-07-14 10:44:05.737593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.806 [2024-07-14 10:44:05.745647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.806 [2024-07-14 10:44:05.745669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.806 [2024-07-14 10:44:05.745677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.806 [2024-07-14 10:44:05.753796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.806 [2024-07-14 10:44:05.753818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.806 [2024-07-14 10:44:05.753827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.806 [2024-07-14 10:44:05.761585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.806 [2024-07-14 10:44:05.761607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.806 [2024-07-14 10:44:05.761615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.806 [2024-07-14 10:44:05.770392] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:20.806 [2024-07-14 10:44:05.770418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.806 [2024-07-14 10:44:05.770427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.806 [2024-07-14 10:44:05.777835] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 
00:35:20.806 [2024-07-14 10:44:05.777857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.806 [2024-07-14 10:44:05.777865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.066 [2024-07-14 10:44:05.785763] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:21.066 [2024-07-14 10:44:05.785787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.066 [2024-07-14 10:44:05.785795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.066 [2024-07-14 10:44:05.794383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:21.066 [2024-07-14 10:44:05.794406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.066 [2024-07-14 10:44:05.794415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.066 [2024-07-14 10:44:05.801493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:21.066 [2024-07-14 10:44:05.801522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.066 [2024-07-14 10:44:05.801530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.066 [2024-07-14 10:44:05.807886] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:21.066 [2024-07-14 10:44:05.807909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.066 [2024-07-14 10:44:05.807918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.066 [2024-07-14 10:44:05.813972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:21.066 [2024-07-14 10:44:05.813994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.066 [2024-07-14 10:44:05.814002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.066 [2024-07-14 10:44:05.819939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:21.066 [2024-07-14 10:44:05.819961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.066 [2024-07-14 10:44:05.819969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.066 [2024-07-14 10:44:05.826085] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:21.066 [2024-07-14 10:44:05.826107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.067 [2024-07-14 10:44:05.826115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.067 [2024-07-14 10:44:05.832335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:21.067 [2024-07-14 10:44:05.832357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.067 [2024-07-14 10:44:05.832365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.067 [2024-07-14 10:44:05.838187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:21.067 [2024-07-14 10:44:05.838210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.067 [2024-07-14 10:44:05.838218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.067 [2024-07-14 10:44:05.844089] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:21.067 [2024-07-14 10:44:05.844111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.067 [2024-07-14 10:44:05.844119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.067 [2024-07-14 10:44:05.850048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:21.067 [2024-07-14 10:44:05.850071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.067 [2024-07-14 10:44:05.850079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.067 [2024-07-14 10:44:05.856115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:21.067 [2024-07-14 10:44:05.856138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.067 [2024-07-14 10:44:05.856145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.067 [2024-07-14 10:44:05.862486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:21.067 [2024-07-14 10:44:05.862508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.067 [2024-07-14 10:44:05.862516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:35:21.067 [2024-07-14 10:44:05.868905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:21.067 [2024-07-14 10:44:05.868927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.067 [2024-07-14 10:44:05.868935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.067 [2024-07-14 10:44:05.874866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:21.067 [2024-07-14 10:44:05.874888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.067 [2024-07-14 10:44:05.874896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.067 [2024-07-14 10:44:05.880941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:21.067 [2024-07-14 10:44:05.880963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.067 [2024-07-14 10:44:05.880975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.067 [2024-07-14 10:44:05.887299] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:21.067 [2024-07-14 10:44:05.887321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.067 [2024-07-14 10:44:05.887330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.067 [2024-07-14 10:44:05.893482] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:21.067 [2024-07-14 10:44:05.893504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.067 [2024-07-14 10:44:05.893511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.067 [2024-07-14 10:44:05.899603] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:21.067 [2024-07-14 10:44:05.899625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.067 [2024-07-14 10:44:05.899634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.067 [2024-07-14 10:44:05.905503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:21.067 [2024-07-14 10:44:05.905525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.067 [2024-07-14 10:44:05.905533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.067 [2024-07-14 10:44:05.911092] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:21.067 [2024-07-14 10:44:05.911114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.067 [2024-07-14 10:44:05.911123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.067 [2024-07-14 10:44:05.916836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:21.067 [2024-07-14 10:44:05.916858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.067 [2024-07-14 10:44:05.916867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.067 [2024-07-14 10:44:05.923117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:21.067 [2024-07-14 10:44:05.923139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.067 [2024-07-14 10:44:05.923147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.067 [2024-07-14 10:44:05.929040] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:21.067 [2024-07-14 10:44:05.929062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.067 [2024-07-14 10:44:05.929070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.067 [2024-07-14 10:44:05.934997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:21.067 [2024-07-14 10:44:05.935022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.067 [2024-07-14 10:44:05.935031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.067 [2024-07-14 10:44:05.941449] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:21.067 [2024-07-14 10:44:05.941471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.067 [2024-07-14 10:44:05.941479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.067 [2024-07-14 10:44:05.947681] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:21.067 [2024-07-14 10:44:05.947702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.067 [2024-07-14 10:44:05.947710] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.067 [2024-07-14 10:44:05.954822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:21.067 [2024-07-14 10:44:05.954845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.067 [2024-07-14 10:44:05.954852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.067 [2024-07-14 10:44:05.962943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:21.067 [2024-07-14 10:44:05.962966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.067 [2024-07-14 10:44:05.962974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.067 [2024-07-14 10:44:05.970804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:21.067 [2024-07-14 10:44:05.970827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.067 [2024-07-14 10:44:05.970835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.067 [2024-07-14 10:44:05.979141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:21.067 [2024-07-14 10:44:05.979165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.067 [2024-07-14 10:44:05.979173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.067 [2024-07-14 10:44:05.986987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:21.067 [2024-07-14 10:44:05.987008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.067 [2024-07-14 10:44:05.987016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.067 [2024-07-14 10:44:05.994637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:21.067 [2024-07-14 10:44:05.994660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.067 [2024-07-14 10:44:05.994668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.067 [2024-07-14 10:44:06.002335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:21.067 [2024-07-14 10:44:06.002358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:21.067 [2024-07-14 10:44:06.002366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.067 [2024-07-14 10:44:06.011032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:21.067 [2024-07-14 10:44:06.011054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.067 [2024-07-14 10:44:06.011063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.068 [2024-07-14 10:44:06.019242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:21.068 [2024-07-14 10:44:06.019264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.068 [2024-07-14 10:44:06.019272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.068 [2024-07-14 10:44:06.027300] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:21.068 [2024-07-14 10:44:06.027323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.068 [2024-07-14 10:44:06.027331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.068 [2024-07-14 10:44:06.035462] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:21.068 [2024-07-14 10:44:06.035485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.068 [2024-07-14 10:44:06.035494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.068 [2024-07-14 10:44:06.043272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:21.068 [2024-07-14 10:44:06.043294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.068 [2024-07-14 10:44:06.043303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.327 [2024-07-14 10:44:06.052030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:21.327 [2024-07-14 10:44:06.052054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.327 [2024-07-14 10:44:06.052063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.327 [2024-07-14 10:44:06.060075] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7e140) 00:35:21.327 [2024-07-14 10:44:06.060098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6432 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:21.327 [2024-07-14 10:44:06.060107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:21.327
00:35:21.327 Latency(us)
00:35:21.327 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:21.327 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:35:21.327 nvme0n1 : 2.00 5127.31 640.91 0.00 0.00 3117.30 669.61 9175.04
00:35:21.327 ===================================================================================================================
00:35:21.327 Total : 5127.31 640.91 0.00 0.00 3117.30 669.61 9175.04
00:35:21.327 0
00:35:21.327 10:44:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:35:21.327 10:44:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:35:21.327 10:44:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:35:21.327 | .driver_specific
00:35:21.327 | .nvme_error
00:35:21.327 | .status_code
00:35:21.327 | .command_transient_transport_error'
00:35:21.327 10:44:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:35:21.327 10:44:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 331 > 0 ))
00:35:21.327 10:44:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2617832
00:35:21.327 10:44:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2617832 ']'
00:35:21.327 10:44:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2617832
00:35:21.327 10:44:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:35:21.327 10:44:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:35:21.327 10:44:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2617832
00:35:21.586 10:44:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:35:21.586 10:44:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:35:21.586 10:44:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2617832'
00:35:21.586 killing process with pid 2617832
00:35:21.586 10:44:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2617832
00:35:21.586 Received shutdown signal, test time was about 2.000000 seconds
00:35:21.586
00:35:21.586 Latency(us)
00:35:21.586 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:21.586 ===================================================================================================================
00:35:21.586 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:35:21.586 10:44:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2617832
00:35:21.586 10:44:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:35:21.586 10:44:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
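The trace above shows how get_transient_errcount turns the injected digest failures into a pass/fail signal: it queries bdev_get_iostat over the bperf RPC socket and pulls the command_transient_transport_error counter out with jq (331 in this run, so the (( 331 > 0 )) check succeeds). A minimal standalone sketch of that check, assuming the same /var/tmp/bperf.sock socket and nvme0n1 bdev name this job uses:

#!/usr/bin/env bash
# Sketch of the get_transient_errcount check traced above (host/digest.sh).
# Assumes bdevperf is still listening on /var/tmp/bperf.sock and that the
# attached controller exposes a bdev named nvme0n1, as in this run.
SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
SOCK=/var/tmp/bperf.sock

errcount=$("$SPDK_ROOT/scripts/rpc.py" -s "$SOCK" bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0]
             | .driver_specific
             | .nvme_error
             | .status_code
             | .command_transient_transport_error')

# The test only asserts that at least one transient transport error was
# counted; the exact value (331 here) depends on timing.
(( errcount > 0 )) && echo "observed $errcount transient transport errors"

The nvme_error block is present in the iostat output because the controller is attached after bdev_nvme_set_options --nvme-error-stat, as the setup trace for the next pass below also shows.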
00:35:21.586 10:44:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:35:21.586 10:44:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:35:21.586 10:44:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:35:21.586 10:44:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2618308
00:35:21.586 10:44:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2618308 /var/tmp/bperf.sock
00:35:21.586 10:44:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:35:21.586 10:44:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2618308 ']'
00:35:21.586 10:44:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:35:21.586 10:44:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:35:21.586 10:44:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:35:21.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:35:21.586 10:44:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:35:21.586 10:44:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:21.586 [2024-07-14 10:44:06.530459] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
00:35:21.586 [2024-07-14 10:44:06.530503] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2618308 ]
00:35:21.586 EAL: No free 2048 kB hugepages reported on node 1
00:35:21.846 [2024-07-14 10:44:06.599033] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:21.846 [2024-07-14 10:44:06.634900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:35:21.846 10:44:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:35:21.846 10:44:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:35:21.846 10:44:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:35:21.846 10:44:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:35:22.159 10:44:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:35:22.160 10:44:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:22.160 10:44:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:22.160 10:44:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:35:22.160 10:44:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
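The trace at this point sets up the next pass, run_bperf_err randwrite 4096 128: a fresh bdevperf instance is started on /var/tmp/bperf.sock, NVMe error counters are enabled, crc32c error injection is cleared, and a controller is attached with data digest enabled (--ddgst); the records that follow re-arm the injection (accel_error_inject_error -o crc32c -t corrupt -i 256) and kick off perform_tests. A condensed sketch of that sequence, with an illustrative rpc helper standing in for the script's bperf_rpc/rpc_cmd wrappers and all paths, addresses and flags taken verbatim from the trace:

#!/usr/bin/env bash
# Condensed sketch of the randwrite 4096/128 error-injection pass; not the
# actual host/digest.sh, which drives the same steps through its own helpers.
SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
SOCK=/var/tmp/bperf.sock

# Start bdevperf with no bdevs configured yet (-z) and wait for its RPC socket;
# digest.sh does the same via waitforlisten.
"$SPDK_ROOT/build/examples/bdevperf" -m 2 -r "$SOCK" -w randwrite -o 4096 -t 2 -q 128 -z &
bperfpid=$!
until [ -S "$SOCK" ]; do sleep 0.1; done   # crude stand-in for waitforlisten

rpc() { "$SPDK_ROOT/scripts/rpc.py" -s "$SOCK" "$@"; }   # illustrative helper

rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
rpc accel_error_inject_error -o crc32c -t disable
rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Re-arm crc32c error injection (arguments verbatim from the trace) and run
# the 2-second workload; the data digest errors logged below are the result.
rpc accel_error_inject_error -o crc32c -t corrupt -i 256
"$SPDK_ROOT/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests

# Afterwards get_transient_errcount (sketched earlier) reads the counters and
# the bperf process is torn down, mirroring killprocess in the trace.
kill "$bperfpid"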
00:35:22.160 10:44:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:22.419 nvme0n1 00:35:22.419 10:44:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:35:22.419 10:44:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.419 10:44:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:22.419 10:44:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.419 10:44:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:22.419 10:44:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:22.419 Running I/O for 2 seconds... 00:35:22.419 [2024-07-14 10:44:07.270275] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:22.419 [2024-07-14 10:44:07.270460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.419 [2024-07-14 10:44:07.270489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:22.419 [2024-07-14 10:44:07.279764] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:22.419 [2024-07-14 10:44:07.279933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.419 [2024-07-14 10:44:07.279962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:22.419 [2024-07-14 10:44:07.289312] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:22.419 [2024-07-14 10:44:07.289485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.419 [2024-07-14 10:44:07.289508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:22.419 [2024-07-14 10:44:07.298841] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:22.419 [2024-07-14 10:44:07.298998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.419 [2024-07-14 10:44:07.299019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:22.419 [2024-07-14 10:44:07.308442] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:22.419 [2024-07-14 10:44:07.308603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.419 [2024-07-14 10:44:07.308625] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:22.419 [2024-07-14 10:44:07.317968] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:22.419 [2024-07-14 10:44:07.318125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.419 [2024-07-14 10:44:07.318145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:22.419 [2024-07-14 10:44:07.327444] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:22.419 [2024-07-14 10:44:07.327599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:14598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.419 [2024-07-14 10:44:07.327618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:22.419 [2024-07-14 10:44:07.336905] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:22.419 [2024-07-14 10:44:07.337062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.419 [2024-07-14 10:44:07.337082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:22.419 [2024-07-14 10:44:07.346422] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:22.419 [2024-07-14 10:44:07.346577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.419 [2024-07-14 10:44:07.346595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:22.419 [2024-07-14 10:44:07.355881] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:22.419 [2024-07-14 10:44:07.356037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:13906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.419 [2024-07-14 10:44:07.356055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:22.419 [2024-07-14 10:44:07.365374] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:22.419 [2024-07-14 10:44:07.365531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.419 [2024-07-14 10:44:07.365551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:22.419 [2024-07-14 10:44:07.374850] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:22.419 [2024-07-14 10:44:07.375004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.419 [2024-07-14 
10:44:07.375023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:22.419 [2024-07-14 10:44:07.384293] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:22.419 [2024-07-14 10:44:07.384448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:14260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.419 [2024-07-14 10:44:07.384466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:22.419 [2024-07-14 10:44:07.393792] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:22.419 [2024-07-14 10:44:07.393943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.419 [2024-07-14 10:44:07.393961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:22.679 [2024-07-14 10:44:07.403235] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:22.679 [2024-07-14 10:44:07.403394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.679 [2024-07-14 10:44:07.403414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:22.679 [2024-07-14 10:44:07.412678] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:22.679 [2024-07-14 10:44:07.412831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.679 [2024-07-14 10:44:07.412848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:22.679 [2024-07-14 10:44:07.422193] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:22.679 [2024-07-14 10:44:07.422355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.679 [2024-07-14 10:44:07.422374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:22.679 [2024-07-14 10:44:07.431637] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:22.679 [2024-07-14 10:44:07.431791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.679 [2024-07-14 10:44:07.431810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:22.679 [2024-07-14 10:44:07.441094] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:22.679 [2024-07-14 10:44:07.441249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:13697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.679 
[2024-07-14 10:44:07.441268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:22.679 [2024-07-14 10:44:07.450543] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:22.679 [2024-07-14 10:44:07.450695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.679 [2024-07-14 10:44:07.450713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:22.679 [2024-07-14 10:44:07.459982] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:22.679 [2024-07-14 10:44:07.460136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.679 [2024-07-14 10:44:07.460153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:22.679 [2024-07-14 10:44:07.469465] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:22.679 [2024-07-14 10:44:07.469622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.679 [2024-07-14 10:44:07.469639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:22.679 [2024-07-14 10:44:07.478927] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:22.679 [2024-07-14 10:44:07.479080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.679 [2024-07-14 10:44:07.479098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:22.679 [2024-07-14 10:44:07.488377] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:22.679 [2024-07-14 10:44:07.488533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.679 [2024-07-14 10:44:07.488551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:22.679 [2024-07-14 10:44:07.497873] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:22.679 [2024-07-14 10:44:07.498026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.679 [2024-07-14 10:44:07.498044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:22.679 [2024-07-14 10:44:07.507317] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:22.679 [2024-07-14 10:44:07.507470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:35:22.679 [2024-07-14 10:44:07.507488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:22.679 [2024-07-14 10:44:07.516777] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:22.679 [2024-07-14 10:44:07.516932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.679 [2024-07-14 10:44:07.516949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:22.679 [2024-07-14 10:44:07.526275] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:22.679 [2024-07-14 10:44:07.526431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.679 [2024-07-14 10:44:07.526449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:22.679 [2024-07-14 10:44:07.535867] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:22.679 [2024-07-14 10:44:07.536020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.679 [2024-07-14 10:44:07.536038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:22.679 [2024-07-14 10:44:07.545351] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:22.679 [2024-07-14 10:44:07.545507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.679 [2024-07-14 10:44:07.545527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:22.679 [2024-07-14 10:44:07.554809] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:22.679 [2024-07-14 10:44:07.554961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.679 [2024-07-14 10:44:07.554978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:22.679 [2024-07-14 10:44:07.564239] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:22.679 [2024-07-14 10:44:07.564395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.679 [2024-07-14 10:44:07.564413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:22.679 [2024-07-14 10:44:07.573704] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:22.679 [2024-07-14 10:44:07.573857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10735 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:35:22.679 [2024-07-14 10:44:07.573875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:22.679 [2024-07-14 10:44:07.583146] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:22.679 [2024-07-14 10:44:07.583305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.679 [2024-07-14 10:44:07.583324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:22.679 [2024-07-14 10:44:07.592628] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:22.679 [2024-07-14 10:44:07.592784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.679 [2024-07-14 10:44:07.592801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:22.679 [2024-07-14 10:44:07.602106] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:22.679 [2024-07-14 10:44:07.602266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.679 [2024-07-14 10:44:07.602283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:22.679 [2024-07-14 10:44:07.611526] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:22.679 [2024-07-14 10:44:07.611682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.679 [2024-07-14 10:44:07.611702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:22.679 [2024-07-14 10:44:07.621025] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:22.679 [2024-07-14 10:44:07.621180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.679 [2024-07-14 10:44:07.621198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:22.679 [2024-07-14 10:44:07.630492] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:22.679 [2024-07-14 10:44:07.630643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.679 [2024-07-14 10:44:07.630660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:22.679 [2024-07-14 10:44:07.639923] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:22.679 [2024-07-14 10:44:07.640078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1287 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:35:22.679 [2024-07-14 10:44:07.640097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:22.680 [2024-07-14 10:44:07.649396] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:22.680 [2024-07-14 10:44:07.649552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.680 [2024-07-14 10:44:07.649569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:22.940 [2024-07-14 10:44:07.658894] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:22.940 [2024-07-14 10:44:07.659051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.940 [2024-07-14 10:44:07.659071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:22.940 [2024-07-14 10:44:07.668435] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:22.940 [2024-07-14 10:44:07.668589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.940 [2024-07-14 10:44:07.668606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:22.940 [2024-07-14 10:44:07.677898] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:22.940 [2024-07-14 10:44:07.678054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.940 [2024-07-14 10:44:07.678071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:22.940 [2024-07-14 10:44:07.687340] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:22.940 [2024-07-14 10:44:07.687495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.940 [2024-07-14 10:44:07.687512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:22.940 [2024-07-14 10:44:07.696816] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:22.940 [2024-07-14 10:44:07.696977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:19848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.940 [2024-07-14 10:44:07.696994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:22.940 [2024-07-14 10:44:07.706290] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:22.940 [2024-07-14 10:44:07.706445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13105 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.940 [2024-07-14 10:44:07.706462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:22.940 [2024-07-14 10:44:07.715726] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:22.940 [2024-07-14 10:44:07.715882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.940 [2024-07-14 10:44:07.715899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:22.940 [2024-07-14 10:44:07.725222] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:22.940 [2024-07-14 10:44:07.725381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.940 [2024-07-14 10:44:07.725399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:22.940 [2024-07-14 10:44:07.734733] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:22.940 [2024-07-14 10:44:07.734889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.940 [2024-07-14 10:44:07.734907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:22.940 [2024-07-14 10:44:07.744185] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:22.940 [2024-07-14 10:44:07.744347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.940 [2024-07-14 10:44:07.744364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:22.940 [2024-07-14 10:44:07.753826] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:22.940 [2024-07-14 10:44:07.753981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.940 [2024-07-14 10:44:07.753998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:22.940 [2024-07-14 10:44:07.763260] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:22.940 [2024-07-14 10:44:07.763414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.940 [2024-07-14 10:44:07.763431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:22.940 [2024-07-14 10:44:07.772729] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:22.940 [2024-07-14 10:44:07.772882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 
lba:22206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.940 [2024-07-14 10:44:07.772899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:22.940 [2024-07-14 10:44:07.782231] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:22.940 [2024-07-14 10:44:07.782385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.940 [2024-07-14 10:44:07.782404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:22.940 [2024-07-14 10:44:07.791815] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:22.940 [2024-07-14 10:44:07.791970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.940 [2024-07-14 10:44:07.791987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:22.940 [2024-07-14 10:44:07.801282] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:22.940 [2024-07-14 10:44:07.801438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.940 [2024-07-14 10:44:07.801455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:22.940 [2024-07-14 10:44:07.810736] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:22.940 [2024-07-14 10:44:07.810888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.940 [2024-07-14 10:44:07.810905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:22.940 [2024-07-14 10:44:07.820194] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:22.940 [2024-07-14 10:44:07.820353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.940 [2024-07-14 10:44:07.820370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:22.940 [2024-07-14 10:44:07.829667] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:22.940 [2024-07-14 10:44:07.829820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.940 [2024-07-14 10:44:07.829837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:22.940 [2024-07-14 10:44:07.839100] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:22.940 [2024-07-14 10:44:07.839252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:10 nsid:1 lba:11848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.940 [2024-07-14 10:44:07.839270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:22.940 [2024-07-14 10:44:07.848563] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:22.940 [2024-07-14 10:44:07.848717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.940 [2024-07-14 10:44:07.848734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:22.940 [2024-07-14 10:44:07.858019] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:22.940 [2024-07-14 10:44:07.858172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.940 [2024-07-14 10:44:07.858192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:22.940 [2024-07-14 10:44:07.867453] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:22.940 [2024-07-14 10:44:07.867605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.941 [2024-07-14 10:44:07.867623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:22.941 [2024-07-14 10:44:07.876932] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:22.941 [2024-07-14 10:44:07.877087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.941 [2024-07-14 10:44:07.877104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:22.941 [2024-07-14 10:44:07.886365] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:22.941 [2024-07-14 10:44:07.886519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.941 [2024-07-14 10:44:07.886536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:22.941 [2024-07-14 10:44:07.895805] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:22.941 [2024-07-14 10:44:07.895958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.941 [2024-07-14 10:44:07.895976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:22.941 [2024-07-14 10:44:07.905316] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:22.941 [2024-07-14 10:44:07.905468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:9 nsid:1 lba:23357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.941 [2024-07-14 10:44:07.905485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:22.941 [2024-07-14 10:44:07.914755] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:22.941 [2024-07-14 10:44:07.914909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.941 [2024-07-14 10:44:07.914926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.200 [2024-07-14 10:44:07.924295] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.200 [2024-07-14 10:44:07.924449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:13475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.200 [2024-07-14 10:44:07.924469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.200 [2024-07-14 10:44:07.933806] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.200 [2024-07-14 10:44:07.933961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.200 [2024-07-14 10:44:07.933978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.200 [2024-07-14 10:44:07.943233] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.200 [2024-07-14 10:44:07.943393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.201 [2024-07-14 10:44:07.943411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.201 [2024-07-14 10:44:07.952725] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.201 [2024-07-14 10:44:07.952876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.201 [2024-07-14 10:44:07.952894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.201 [2024-07-14 10:44:07.962156] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.201 [2024-07-14 10:44:07.962317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.201 [2024-07-14 10:44:07.962335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.201 [2024-07-14 10:44:07.971659] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.201 [2024-07-14 10:44:07.971814] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.201 [2024-07-14 10:44:07.971830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.201 [2024-07-14 10:44:07.981140] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.201 [2024-07-14 10:44:07.981304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.201 [2024-07-14 10:44:07.981321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.201 [2024-07-14 10:44:07.990596] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.201 [2024-07-14 10:44:07.990753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.201 [2024-07-14 10:44:07.990770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.201 [2024-07-14 10:44:08.000039] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.201 [2024-07-14 10:44:08.000192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.201 [2024-07-14 10:44:08.000209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.201 [2024-07-14 10:44:08.009514] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.201 [2024-07-14 10:44:08.009667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.201 [2024-07-14 10:44:08.009685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.201 [2024-07-14 10:44:08.018925] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.201 [2024-07-14 10:44:08.019079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.201 [2024-07-14 10:44:08.019096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.201 [2024-07-14 10:44:08.028439] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.201 [2024-07-14 10:44:08.028594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.201 [2024-07-14 10:44:08.028611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.201 [2024-07-14 10:44:08.037851] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.201 [2024-07-14 10:44:08.038003] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.201 [2024-07-14 10:44:08.038021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.201 [2024-07-14 10:44:08.047451] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.201 [2024-07-14 10:44:08.047613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.201 [2024-07-14 10:44:08.047629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.201 [2024-07-14 10:44:08.056918] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.201 [2024-07-14 10:44:08.057073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.201 [2024-07-14 10:44:08.057090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.201 [2024-07-14 10:44:08.066377] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.201 [2024-07-14 10:44:08.066531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.201 [2024-07-14 10:44:08.066548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.201 [2024-07-14 10:44:08.075848] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.201 [2024-07-14 10:44:08.076003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.201 [2024-07-14 10:44:08.076020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.201 [2024-07-14 10:44:08.085322] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.201 [2024-07-14 10:44:08.085476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.201 [2024-07-14 10:44:08.085494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.201 [2024-07-14 10:44:08.094764] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.201 [2024-07-14 10:44:08.094916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.201 [2024-07-14 10:44:08.094934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.201 [2024-07-14 10:44:08.104319] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.201 [2024-07-14 10:44:08.104472] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.201 [2024-07-14 10:44:08.104490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.201 [2024-07-14 10:44:08.113761] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.201 [2024-07-14 10:44:08.113914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.201 [2024-07-14 10:44:08.113931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.201 [2024-07-14 10:44:08.123206] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.201 [2024-07-14 10:44:08.123370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.201 [2024-07-14 10:44:08.123387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.201 [2024-07-14 10:44:08.132697] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.201 [2024-07-14 10:44:08.132851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.201 [2024-07-14 10:44:08.132868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.201 [2024-07-14 10:44:08.142152] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.201 [2024-07-14 10:44:08.142316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.201 [2024-07-14 10:44:08.142334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.201 [2024-07-14 10:44:08.151654] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.201 [2024-07-14 10:44:08.151809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.201 [2024-07-14 10:44:08.151827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.201 [2024-07-14 10:44:08.161127] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.201 [2024-07-14 10:44:08.161290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.201 [2024-07-14 10:44:08.161307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.201 [2024-07-14 10:44:08.170573] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.201 [2024-07-14 
10:44:08.170726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.201 [2024-07-14 10:44:08.170743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.461 [2024-07-14 10:44:08.180063] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.461 [2024-07-14 10:44:08.180238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:13619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.461 [2024-07-14 10:44:08.180258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.461 [2024-07-14 10:44:08.189680] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.461 [2024-07-14 10:44:08.189830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.461 [2024-07-14 10:44:08.189850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.461 [2024-07-14 10:44:08.199122] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.461 [2024-07-14 10:44:08.199285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.461 [2024-07-14 10:44:08.199304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.462 [2024-07-14 10:44:08.208618] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.462 [2024-07-14 10:44:08.208772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.462 [2024-07-14 10:44:08.208789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.462 [2024-07-14 10:44:08.218065] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.462 [2024-07-14 10:44:08.218218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.462 [2024-07-14 10:44:08.218239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.462 [2024-07-14 10:44:08.227539] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.462 [2024-07-14 10:44:08.227693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.462 [2024-07-14 10:44:08.227710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.462 [2024-07-14 10:44:08.237002] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.462 
[2024-07-14 10:44:08.237154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.462 [2024-07-14 10:44:08.237172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.462 [2024-07-14 10:44:08.246439] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.462 [2024-07-14 10:44:08.246593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.462 [2024-07-14 10:44:08.246610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.462 [2024-07-14 10:44:08.256151] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.462 [2024-07-14 10:44:08.256314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.462 [2024-07-14 10:44:08.256332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.462 [2024-07-14 10:44:08.265594] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.462 [2024-07-14 10:44:08.265750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.462 [2024-07-14 10:44:08.265768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.462 [2024-07-14 10:44:08.275019] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.462 [2024-07-14 10:44:08.275177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.462 [2024-07-14 10:44:08.275195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.462 [2024-07-14 10:44:08.284497] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.462 [2024-07-14 10:44:08.284650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.462 [2024-07-14 10:44:08.284672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.462 [2024-07-14 10:44:08.293940] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.462 [2024-07-14 10:44:08.294095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.462 [2024-07-14 10:44:08.294113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.462 [2024-07-14 10:44:08.303556] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 
00:35:23.462 [2024-07-14 10:44:08.303713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.462 [2024-07-14 10:44:08.303731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.462 [2024-07-14 10:44:08.313023] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.462 [2024-07-14 10:44:08.313176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.462 [2024-07-14 10:44:08.313194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.462 [2024-07-14 10:44:08.322469] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.462 [2024-07-14 10:44:08.322626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.462 [2024-07-14 10:44:08.322644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.462 [2024-07-14 10:44:08.331947] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.462 [2024-07-14 10:44:08.332101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.462 [2024-07-14 10:44:08.332119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.462 [2024-07-14 10:44:08.341400] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.462 [2024-07-14 10:44:08.341555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.462 [2024-07-14 10:44:08.341573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.462 [2024-07-14 10:44:08.350849] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.462 [2024-07-14 10:44:08.351004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.462 [2024-07-14 10:44:08.351022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.462 [2024-07-14 10:44:08.360398] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.462 [2024-07-14 10:44:08.360553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.462 [2024-07-14 10:44:08.360572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.462 [2024-07-14 10:44:08.369831] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with 
pdu=0x2000190fda78 00:35:23.462 [2024-07-14 10:44:08.369986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.462 [2024-07-14 10:44:08.370003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.462 [2024-07-14 10:44:08.379344] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.462 [2024-07-14 10:44:08.379505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.462 [2024-07-14 10:44:08.379522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.462 [2024-07-14 10:44:08.389136] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.462 [2024-07-14 10:44:08.389299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.462 [2024-07-14 10:44:08.389318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.462 [2024-07-14 10:44:08.398716] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.462 [2024-07-14 10:44:08.398869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.462 [2024-07-14 10:44:08.398886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.462 [2024-07-14 10:44:08.408186] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.462 [2024-07-14 10:44:08.408346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.462 [2024-07-14 10:44:08.408363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.462 [2024-07-14 10:44:08.417764] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.462 [2024-07-14 10:44:08.417919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.462 [2024-07-14 10:44:08.417936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.462 [2024-07-14 10:44:08.427200] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.462 [2024-07-14 10:44:08.427361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.462 [2024-07-14 10:44:08.427379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.462 [2024-07-14 10:44:08.436743] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.462 [2024-07-14 10:44:08.436926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.462 [2024-07-14 10:44:08.436949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.722 [2024-07-14 10:44:08.446429] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.722 [2024-07-14 10:44:08.446583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.722 [2024-07-14 10:44:08.446601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.722 [2024-07-14 10:44:08.455897] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.722 [2024-07-14 10:44:08.456051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.722 [2024-07-14 10:44:08.456069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.722 [2024-07-14 10:44:08.465393] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.722 [2024-07-14 10:44:08.465546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.723 [2024-07-14 10:44:08.465564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.723 [2024-07-14 10:44:08.474848] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.723 [2024-07-14 10:44:08.475005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.723 [2024-07-14 10:44:08.475023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.723 [2024-07-14 10:44:08.484334] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.723 [2024-07-14 10:44:08.484502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.723 [2024-07-14 10:44:08.484519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.723 [2024-07-14 10:44:08.493806] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.723 [2024-07-14 10:44:08.493962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.723 [2024-07-14 10:44:08.493980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.723 [2024-07-14 10:44:08.503250] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.723 [2024-07-14 10:44:08.503405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.723 [2024-07-14 10:44:08.503424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.723 [2024-07-14 10:44:08.512756] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.723 [2024-07-14 10:44:08.512913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.723 [2024-07-14 10:44:08.512930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.723 [2024-07-14 10:44:08.522190] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.723 [2024-07-14 10:44:08.522355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.723 [2024-07-14 10:44:08.522373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.723 [2024-07-14 10:44:08.531651] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.723 [2024-07-14 10:44:08.531805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.723 [2024-07-14 10:44:08.531823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.723 [2024-07-14 10:44:08.541122] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.723 [2024-07-14 10:44:08.541281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.723 [2024-07-14 10:44:08.541299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.723 [2024-07-14 10:44:08.550636] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.723 [2024-07-14 10:44:08.550789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.723 [2024-07-14 10:44:08.550808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.723 [2024-07-14 10:44:08.560276] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.723 [2024-07-14 10:44:08.560434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.723 [2024-07-14 10:44:08.560451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.723 [2024-07-14 10:44:08.569747] tcp.c:2067:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.723 [2024-07-14 10:44:08.569899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.723 [2024-07-14 10:44:08.569917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.723 [2024-07-14 10:44:08.579174] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.723 [2024-07-14 10:44:08.579335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.723 [2024-07-14 10:44:08.579352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.723 [2024-07-14 10:44:08.588683] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.723 [2024-07-14 10:44:08.588835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.723 [2024-07-14 10:44:08.588852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.723 [2024-07-14 10:44:08.598133] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.723 [2024-07-14 10:44:08.598295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.723 [2024-07-14 10:44:08.598313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.723 [2024-07-14 10:44:08.607603] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.723 [2024-07-14 10:44:08.607758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.723 [2024-07-14 10:44:08.607776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.723 [2024-07-14 10:44:08.617081] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.723 [2024-07-14 10:44:08.617238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.723 [2024-07-14 10:44:08.617256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.723 [2024-07-14 10:44:08.626533] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.723 [2024-07-14 10:44:08.626686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.723 [2024-07-14 10:44:08.626703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.723 [2024-07-14 10:44:08.636008] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.723 [2024-07-14 10:44:08.636162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:8273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.723 [2024-07-14 10:44:08.636180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.723 [2024-07-14 10:44:08.645462] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.723 [2024-07-14 10:44:08.645614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.723 [2024-07-14 10:44:08.645632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.723 [2024-07-14 10:44:08.654919] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.723 [2024-07-14 10:44:08.655071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.723 [2024-07-14 10:44:08.655089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.723 [2024-07-14 10:44:08.664424] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.723 [2024-07-14 10:44:08.664581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.723 [2024-07-14 10:44:08.664598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.723 [2024-07-14 10:44:08.673852] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.723 [2024-07-14 10:44:08.674005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.723 [2024-07-14 10:44:08.674023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.723 [2024-07-14 10:44:08.683315] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.723 [2024-07-14 10:44:08.683470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.723 [2024-07-14 10:44:08.683491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.723 [2024-07-14 10:44:08.692799] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.723 [2024-07-14 10:44:08.692955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.723 [2024-07-14 10:44:08.692973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.983 [2024-07-14 
10:44:08.702271] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.983 [2024-07-14 10:44:08.702430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.983 [2024-07-14 10:44:08.702450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.983 [2024-07-14 10:44:08.711797] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.983 [2024-07-14 10:44:08.711952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.983 [2024-07-14 10:44:08.711969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.983 [2024-07-14 10:44:08.721272] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.983 [2024-07-14 10:44:08.721424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:8968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.983 [2024-07-14 10:44:08.721442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.983 [2024-07-14 10:44:08.730711] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.983 [2024-07-14 10:44:08.730866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.983 [2024-07-14 10:44:08.730883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.983 [2024-07-14 10:44:08.740211] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.983 [2024-07-14 10:44:08.740372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.983 [2024-07-14 10:44:08.740390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.983 [2024-07-14 10:44:08.749678] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.983 [2024-07-14 10:44:08.749829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.983 [2024-07-14 10:44:08.749847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.983 [2024-07-14 10:44:08.759220] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.983 [2024-07-14 10:44:08.759382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.983 [2024-07-14 10:44:08.759399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.983 
[2024-07-14 10:44:08.768711] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.983 [2024-07-14 10:44:08.768865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.983 [2024-07-14 10:44:08.768889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.983 [2024-07-14 10:44:08.778155] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.983 [2024-07-14 10:44:08.778315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.983 [2024-07-14 10:44:08.778332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.983 [2024-07-14 10:44:08.787652] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.983 [2024-07-14 10:44:08.787807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.983 [2024-07-14 10:44:08.787824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.983 [2024-07-14 10:44:08.797106] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.983 [2024-07-14 10:44:08.797266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.983 [2024-07-14 10:44:08.797283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.983 [2024-07-14 10:44:08.806726] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.984 [2024-07-14 10:44:08.806881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.984 [2024-07-14 10:44:08.806898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.984 [2024-07-14 10:44:08.816364] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.984 [2024-07-14 10:44:08.816520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.984 [2024-07-14 10:44:08.816537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.984 [2024-07-14 10:44:08.825801] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.984 [2024-07-14 10:44:08.825956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.984 [2024-07-14 10:44:08.825973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
00:35:23.984 [2024-07-14 10:44:08.835205] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.984 [2024-07-14 10:44:08.835367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.984 [2024-07-14 10:44:08.835385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.984 [2024-07-14 10:44:08.844708] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.984 [2024-07-14 10:44:08.844862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.984 [2024-07-14 10:44:08.844880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.984 [2024-07-14 10:44:08.854221] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.984 [2024-07-14 10:44:08.854386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.984 [2024-07-14 10:44:08.854403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.984 [2024-07-14 10:44:08.863692] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.984 [2024-07-14 10:44:08.863845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.984 [2024-07-14 10:44:08.863862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.984 [2024-07-14 10:44:08.873131] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.984 [2024-07-14 10:44:08.873291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.984 [2024-07-14 10:44:08.873309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.984 [2024-07-14 10:44:08.882602] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.984 [2024-07-14 10:44:08.882756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.984 [2024-07-14 10:44:08.882773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.984 [2024-07-14 10:44:08.892082] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.984 [2024-07-14 10:44:08.892238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.984 [2024-07-14 10:44:08.892256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 
m:0 dnr:0 00:35:23.984 [2024-07-14 10:44:08.901527] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.984 [2024-07-14 10:44:08.901680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.984 [2024-07-14 10:44:08.901698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.984 [2024-07-14 10:44:08.910959] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.984 [2024-07-14 10:44:08.911112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.984 [2024-07-14 10:44:08.911129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.984 [2024-07-14 10:44:08.920446] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.984 [2024-07-14 10:44:08.920602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.984 [2024-07-14 10:44:08.920619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.984 [2024-07-14 10:44:08.929876] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.984 [2024-07-14 10:44:08.930030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.984 [2024-07-14 10:44:08.930047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.984 [2024-07-14 10:44:08.939393] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.984 [2024-07-14 10:44:08.939545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.984 [2024-07-14 10:44:08.939563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.984 [2024-07-14 10:44:08.948859] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.984 [2024-07-14 10:44:08.949014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.984 [2024-07-14 10:44:08.949031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:23.984 [2024-07-14 10:44:08.958289] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:23.984 [2024-07-14 10:44:08.958446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.984 [2024-07-14 10:44:08.958464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 
sqhd:007f p:0 m:0 dnr:0 00:35:24.243 [2024-07-14 10:44:08.967888] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:24.243 [2024-07-14 10:44:08.968042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.243 [2024-07-14 10:44:08.968060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:24.243 [2024-07-14 10:44:08.977337] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:24.243 [2024-07-14 10:44:08.977495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.243 [2024-07-14 10:44:08.977512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:24.243 [2024-07-14 10:44:08.986773] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:24.243 [2024-07-14 10:44:08.986926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.243 [2024-07-14 10:44:08.986944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:24.243 [2024-07-14 10:44:08.996255] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:24.243 [2024-07-14 10:44:08.996413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.243 [2024-07-14 10:44:08.996431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:24.243 [2024-07-14 10:44:09.005700] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:24.243 [2024-07-14 10:44:09.005855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.243 [2024-07-14 10:44:09.005872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:24.243 [2024-07-14 10:44:09.015163] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:24.243 [2024-07-14 10:44:09.015324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.243 [2024-07-14 10:44:09.015344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:24.243 [2024-07-14 10:44:09.024624] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:24.243 [2024-07-14 10:44:09.024777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.243 [2024-07-14 10:44:09.024794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:24.243 [2024-07-14 10:44:09.034046] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:24.243 [2024-07-14 10:44:09.034198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.243 [2024-07-14 10:44:09.034215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:24.243 [2024-07-14 10:44:09.043525] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:24.243 [2024-07-14 10:44:09.043679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.243 [2024-07-14 10:44:09.043696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:24.243 [2024-07-14 10:44:09.052970] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:24.243 [2024-07-14 10:44:09.053122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.243 [2024-07-14 10:44:09.053139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:24.243 [2024-07-14 10:44:09.062615] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:24.243 [2024-07-14 10:44:09.062770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.243 [2024-07-14 10:44:09.062788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:24.243 [2024-07-14 10:44:09.072239] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:24.243 [2024-07-14 10:44:09.072393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.243 [2024-07-14 10:44:09.072410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:24.243 [2024-07-14 10:44:09.081694] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:24.243 [2024-07-14 10:44:09.081849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.243 [2024-07-14 10:44:09.081866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:24.243 [2024-07-14 10:44:09.091146] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:24.243 [2024-07-14 10:44:09.091306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.243 [2024-07-14 10:44:09.091324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:24.243 [2024-07-14 10:44:09.100614] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:24.243 [2024-07-14 10:44:09.100771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.243 [2024-07-14 10:44:09.100788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:24.243 [2024-07-14 10:44:09.110064] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:24.243 [2024-07-14 10:44:09.110219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.243 [2024-07-14 10:44:09.110241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:24.243 [2024-07-14 10:44:09.119537] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:24.243 [2024-07-14 10:44:09.119688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.243 [2024-07-14 10:44:09.119705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:24.243 [2024-07-14 10:44:09.128963] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:24.243 [2024-07-14 10:44:09.129115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.243 [2024-07-14 10:44:09.129132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:24.243 [2024-07-14 10:44:09.138436] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:24.243 [2024-07-14 10:44:09.138590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.243 [2024-07-14 10:44:09.138608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:24.243 [2024-07-14 10:44:09.147918] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:24.243 [2024-07-14 10:44:09.148070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.243 [2024-07-14 10:44:09.148087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:24.243 [2024-07-14 10:44:09.157365] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:24.243 [2024-07-14 10:44:09.157521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.243 [2024-07-14 10:44:09.157538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:24.243 [2024-07-14 10:44:09.166843] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:24.244 [2024-07-14 10:44:09.166997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.244 [2024-07-14 10:44:09.167015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:24.244 [2024-07-14 10:44:09.176340] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:24.244 [2024-07-14 10:44:09.176493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.244 [2024-07-14 10:44:09.176512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:24.244 [2024-07-14 10:44:09.185795] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:24.244 [2024-07-14 10:44:09.185953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.244 [2024-07-14 10:44:09.185971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:24.244 [2024-07-14 10:44:09.195299] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:24.244 [2024-07-14 10:44:09.195453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.244 [2024-07-14 10:44:09.195471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:24.244 [2024-07-14 10:44:09.204751] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:24.244 [2024-07-14 10:44:09.204904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.244 [2024-07-14 10:44:09.204921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:24.244 [2024-07-14 10:44:09.214218] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:24.244 [2024-07-14 10:44:09.214382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.244 [2024-07-14 10:44:09.214399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:24.502 [2024-07-14 10:44:09.223761] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:24.502 [2024-07-14 10:44:09.223919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.502 [2024-07-14 10:44:09.223936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:24.502 [2024-07-14 10:44:09.233235] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:24.502 [2024-07-14 10:44:09.233389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.502 [2024-07-14 10:44:09.233406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:24.502 [2024-07-14 10:44:09.242689] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:24.502 [2024-07-14 10:44:09.242841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.502 [2024-07-14 10:44:09.242859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:24.502 [2024-07-14 10:44:09.252408] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1647ce0) with pdu=0x2000190fda78 00:35:24.502 [2024-07-14 10:44:09.252565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.502 [2024-07-14 10:44:09.252582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:24.502 00:35:24.502 Latency(us) 00:35:24.502 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:24.502 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:24.502 nvme0n1 : 2.00 26824.41 104.78 0.00 0.00 4763.41 4416.56 14702.86 00:35:24.502 =================================================================================================================== 00:35:24.502 Total : 26824.41 104.78 0.00 0.00 4763.41 4416.56 14702.86 00:35:24.502 0 00:35:24.502 10:44:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:24.502 10:44:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:24.502 10:44:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:24.502 | .driver_specific 00:35:24.502 | .nvme_error 00:35:24.502 | .status_code 00:35:24.502 | .command_transient_transport_error' 00:35:24.502 10:44:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:24.502 10:44:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 210 > 0 )) 00:35:24.502 10:44:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2618308 00:35:24.502 10:44:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2618308 ']' 00:35:24.502 10:44:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2618308 00:35:24.502 10:44:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:35:24.502 10:44:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:24.502 10:44:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2618308 00:35:24.761 10:44:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:35:24.761 10:44:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:35:24.761 10:44:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2618308' 00:35:24.761 killing process with pid 2618308 00:35:24.761 10:44:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2618308 00:35:24.761 Received shutdown signal, test time was about 2.000000 seconds 00:35:24.761 00:35:24.761 Latency(us) 00:35:24.761 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:24.761 =================================================================================================================== 00:35:24.761 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:24.761 10:44:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2618308 00:35:24.761 10:44:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:35:24.761 10:44:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:24.761 10:44:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:35:24.761 10:44:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:35:24.761 10:44:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:35:24.761 10:44:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2618784 00:35:24.761 10:44:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2618784 /var/tmp/bperf.sock 00:35:24.761 10:44:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:35:24.761 10:44:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2618784 ']' 00:35:24.761 10:44:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:24.761 10:44:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:24.761 10:44:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:24.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:24.761 10:44:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:24.761 10:44:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:24.761 [2024-07-14 10:44:09.725185] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:35:24.761 [2024-07-14 10:44:09.725240] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2618784 ] 00:35:24.761 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:24.761 Zero copy mechanism will not be used. 
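The trace above (run_bperf_err randwrite 131072 16) launches bdevperf idle on a private RPC socket and only then waits for that socket before sending any configuration RPCs. A minimal sketch of that launch step, assuming the SPDK tree layout used in this workspace and the waitforlisten helper from the autotest common scripts that appears in the trace:

    # Start bdevperf with -z so it stays idle until a perform_tests RPC arrives
    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randwrite -o 131072 -t 2 -q 16 -z &
    bperfpid=$!
    # waitforlisten (autotest common helper) polls until the UNIX-domain RPC socket accepts connections
    waitforlisten "$bperfpid" /var/tmp/bperf.sock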
00:35:25.019 EAL: No free 2048 kB hugepages reported on node 1 00:35:25.020 [2024-07-14 10:44:09.793294] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:25.020 [2024-07-14 10:44:09.832305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:25.020 10:44:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:25.020 10:44:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:35:25.020 10:44:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:25.020 10:44:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:25.278 10:44:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:25.278 10:44:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.278 10:44:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:25.278 10:44:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.278 10:44:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:25.278 10:44:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:25.536 nvme0n1 00:35:25.536 10:44:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:35:25.536 10:44:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.536 10:44:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:25.536 10:44:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.536 10:44:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:25.536 10:44:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:25.794 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:25.794 Zero copy mechanism will not be used. 00:35:25.794 Running I/O for 2 seconds... 
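Once bdevperf is listening, the RPC sequence traced above sets up digest-error injection end to end. A condensed sketch using the helper names exactly as they appear in the trace — bperf_rpc wraps scripts/rpc.py -s /var/tmp/bperf.sock for the bdevperf instance, while rpc_cmd is the suite's wrapper for the main application's default socket (assumed here to be the nvmf target side); addresses and the subsystem NQN are the ones shown in the log:

    # Enable per-controller NVMe error counters and unlimited retries on the bdevperf side
    bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Keep CRC32C error injection disabled while the controller attaches
    rpc_cmd accel_error_inject_error -o crc32c -t disable
    # Attach the TCP controller with data digest (--ddgst) enabled
    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Re-enable CRC32C error injection in corrupt mode, as traced above, then drive I/O
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
    # The pass/fail check reads the transient transport error count back from iostat
    bperf_rpc bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0]
      | .driver_specific
      | .nvme_error
      | .status_code
      | .command_transient_transport_error'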
00:35:25.795 [2024-07-14 10:44:10.591528] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:25.795 [2024-07-14 10:44:10.591913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.795 [2024-07-14 10:44:10.591943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.795 [2024-07-14 10:44:10.598687] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:25.795 [2024-07-14 10:44:10.599067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.795 [2024-07-14 10:44:10.599096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.795 [2024-07-14 10:44:10.606093] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:25.795 [2024-07-14 10:44:10.606476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.795 [2024-07-14 10:44:10.606498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.795 [2024-07-14 10:44:10.613666] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:25.795 [2024-07-14 10:44:10.614048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.795 [2024-07-14 10:44:10.614069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.795 [2024-07-14 10:44:10.621115] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:25.795 [2024-07-14 10:44:10.621494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.795 [2024-07-14 10:44:10.621515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.795 [2024-07-14 10:44:10.628466] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:25.795 [2024-07-14 10:44:10.628856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.795 [2024-07-14 10:44:10.628878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.795 [2024-07-14 10:44:10.635729] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:25.795 [2024-07-14 10:44:10.636112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.795 [2024-07-14 10:44:10.636133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.795 [2024-07-14 10:44:10.643150] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:25.795 [2024-07-14 10:44:10.643554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.795 [2024-07-14 10:44:10.643575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.795 [2024-07-14 10:44:10.650847] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:25.795 [2024-07-14 10:44:10.651249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.795 [2024-07-14 10:44:10.651270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.795 [2024-07-14 10:44:10.658759] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:25.795 [2024-07-14 10:44:10.659148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.795 [2024-07-14 10:44:10.659168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.795 [2024-07-14 10:44:10.666399] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:25.795 [2024-07-14 10:44:10.666781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.795 [2024-07-14 10:44:10.666800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.795 [2024-07-14 10:44:10.673029] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:25.795 [2024-07-14 10:44:10.673421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.795 [2024-07-14 10:44:10.673441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.795 [2024-07-14 10:44:10.680329] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:25.795 [2024-07-14 10:44:10.680695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.795 [2024-07-14 10:44:10.680715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.795 [2024-07-14 10:44:10.687649] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:25.795 [2024-07-14 10:44:10.688028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.795 [2024-07-14 10:44:10.688048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.795 [2024-07-14 10:44:10.694746] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:25.795 [2024-07-14 10:44:10.695118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.795 [2024-07-14 10:44:10.695138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.795 [2024-07-14 10:44:10.702429] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:25.795 [2024-07-14 10:44:10.702798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.795 [2024-07-14 10:44:10.702817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.795 [2024-07-14 10:44:10.709549] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:25.795 [2024-07-14 10:44:10.709929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.795 [2024-07-14 10:44:10.709949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.795 [2024-07-14 10:44:10.715377] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:25.795 [2024-07-14 10:44:10.715762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.795 [2024-07-14 10:44:10.715782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.795 [2024-07-14 10:44:10.720477] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:25.795 [2024-07-14 10:44:10.720858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.795 [2024-07-14 10:44:10.720878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.795 [2024-07-14 10:44:10.725431] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:25.795 [2024-07-14 10:44:10.725816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.795 [2024-07-14 10:44:10.725836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.795 [2024-07-14 10:44:10.730391] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:25.795 [2024-07-14 10:44:10.730769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.795 [2024-07-14 10:44:10.730788] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.795 [2024-07-14 10:44:10.736365] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:25.795 [2024-07-14 10:44:10.736740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.795 [2024-07-14 10:44:10.736760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.795 [2024-07-14 10:44:10.741421] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:25.795 [2024-07-14 10:44:10.741817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.795 [2024-07-14 10:44:10.741837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.795 [2024-07-14 10:44:10.746674] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:25.795 [2024-07-14 10:44:10.747072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.795 [2024-07-14 10:44:10.747091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.795 [2024-07-14 10:44:10.752284] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:25.795 [2024-07-14 10:44:10.752665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.795 [2024-07-14 10:44:10.752684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.795 [2024-07-14 10:44:10.757677] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:25.795 [2024-07-14 10:44:10.757749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.795 [2024-07-14 10:44:10.757767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.795 [2024-07-14 10:44:10.763942] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:25.795 [2024-07-14 10:44:10.764325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.795 [2024-07-14 10:44:10.764346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.795 [2024-07-14 10:44:10.770554] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:25.795 [2024-07-14 10:44:10.770615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.795 
[2024-07-14 10:44:10.770637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.056 [2024-07-14 10:44:10.777276] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.056 [2024-07-14 10:44:10.777660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.056 [2024-07-14 10:44:10.777681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.056 [2024-07-14 10:44:10.783912] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.056 [2024-07-14 10:44:10.784062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.056 [2024-07-14 10:44:10.784080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.056 [2024-07-14 10:44:10.789783] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.056 [2024-07-14 10:44:10.790006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.056 [2024-07-14 10:44:10.790026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.056 [2024-07-14 10:44:10.795371] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.056 [2024-07-14 10:44:10.795725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.056 [2024-07-14 10:44:10.795745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.056 [2024-07-14 10:44:10.801564] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.056 [2024-07-14 10:44:10.801924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.056 [2024-07-14 10:44:10.801944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.056 [2024-07-14 10:44:10.807510] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.056 [2024-07-14 10:44:10.807869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.056 [2024-07-14 10:44:10.807888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.056 [2024-07-14 10:44:10.813356] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.056 [2024-07-14 10:44:10.813703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.056 [2024-07-14 10:44:10.813723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.056 [2024-07-14 10:44:10.819577] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.056 [2024-07-14 10:44:10.819941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.056 [2024-07-14 10:44:10.819960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.056 [2024-07-14 10:44:10.825636] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.056 [2024-07-14 10:44:10.825992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.056 [2024-07-14 10:44:10.826012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.056 [2024-07-14 10:44:10.831637] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.056 [2024-07-14 10:44:10.831991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.056 [2024-07-14 10:44:10.832012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.056 [2024-07-14 10:44:10.837300] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.056 [2024-07-14 10:44:10.837660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.056 [2024-07-14 10:44:10.837681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.056 [2024-07-14 10:44:10.843284] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.056 [2024-07-14 10:44:10.843646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.056 [2024-07-14 10:44:10.843666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.056 [2024-07-14 10:44:10.849598] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.056 [2024-07-14 10:44:10.849972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.056 [2024-07-14 10:44:10.849992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.056 [2024-07-14 10:44:10.855817] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.056 [2024-07-14 10:44:10.856163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.056 [2024-07-14 10:44:10.856182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.056 [2024-07-14 10:44:10.861798] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.056 [2024-07-14 10:44:10.862160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.056 [2024-07-14 10:44:10.862180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.056 [2024-07-14 10:44:10.867731] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.056 [2024-07-14 10:44:10.868092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.056 [2024-07-14 10:44:10.868111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.056 [2024-07-14 10:44:10.873843] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.056 [2024-07-14 10:44:10.874217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.056 [2024-07-14 10:44:10.874245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.056 [2024-07-14 10:44:10.879497] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.056 [2024-07-14 10:44:10.879855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.056 [2024-07-14 10:44:10.879874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.056 [2024-07-14 10:44:10.884826] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.056 [2024-07-14 10:44:10.885188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.056 [2024-07-14 10:44:10.885207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.056 [2024-07-14 10:44:10.890236] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.056 [2024-07-14 10:44:10.890638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.056 [2024-07-14 10:44:10.890658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.056 [2024-07-14 10:44:10.896945] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.056 [2024-07-14 10:44:10.897392] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.056 [2024-07-14 10:44:10.897411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.056 [2024-07-14 10:44:10.904146] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.056 [2024-07-14 10:44:10.904546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.056 [2024-07-14 10:44:10.904565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.057 [2024-07-14 10:44:10.911048] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.057 [2024-07-14 10:44:10.911444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.057 [2024-07-14 10:44:10.911464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.057 [2024-07-14 10:44:10.917525] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.057 [2024-07-14 10:44:10.917922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.057 [2024-07-14 10:44:10.917940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.057 [2024-07-14 10:44:10.923557] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.057 [2024-07-14 10:44:10.923853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.057 [2024-07-14 10:44:10.923872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.057 [2024-07-14 10:44:10.929235] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.057 [2024-07-14 10:44:10.929514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.057 [2024-07-14 10:44:10.929534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.057 [2024-07-14 10:44:10.934008] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.057 [2024-07-14 10:44:10.934274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.057 [2024-07-14 10:44:10.934294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.057 [2024-07-14 10:44:10.938301] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.057 
[2024-07-14 10:44:10.938548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.057 [2024-07-14 10:44:10.938567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.057 [2024-07-14 10:44:10.942285] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.057 [2024-07-14 10:44:10.942517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.057 [2024-07-14 10:44:10.942536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.057 [2024-07-14 10:44:10.946453] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.057 [2024-07-14 10:44:10.946676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.057 [2024-07-14 10:44:10.946696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.057 [2024-07-14 10:44:10.950680] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.057 [2024-07-14 10:44:10.950913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.057 [2024-07-14 10:44:10.950932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.057 [2024-07-14 10:44:10.955002] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.057 [2024-07-14 10:44:10.955229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.057 [2024-07-14 10:44:10.955247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.057 [2024-07-14 10:44:10.959252] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.057 [2024-07-14 10:44:10.959469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.057 [2024-07-14 10:44:10.959488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.057 [2024-07-14 10:44:10.963518] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.057 [2024-07-14 10:44:10.963747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.057 [2024-07-14 10:44:10.963766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.057 [2024-07-14 10:44:10.968019] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.057 [2024-07-14 10:44:10.968242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.057 [2024-07-14 10:44:10.968260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.057 [2024-07-14 10:44:10.972217] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.057 [2024-07-14 10:44:10.972449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.057 [2024-07-14 10:44:10.972468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.057 [2024-07-14 10:44:10.976741] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.057 [2024-07-14 10:44:10.976959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.057 [2024-07-14 10:44:10.976977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.057 [2024-07-14 10:44:10.980636] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.057 [2024-07-14 10:44:10.980856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.057 [2024-07-14 10:44:10.980876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.057 [2024-07-14 10:44:10.984387] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.057 [2024-07-14 10:44:10.984636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.057 [2024-07-14 10:44:10.984655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.057 [2024-07-14 10:44:10.988127] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.057 [2024-07-14 10:44:10.988356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.057 [2024-07-14 10:44:10.988375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.057 [2024-07-14 10:44:10.991878] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.057 [2024-07-14 10:44:10.992107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.057 [2024-07-14 10:44:10.992126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.057 [2024-07-14 10:44:10.995581] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.057 [2024-07-14 10:44:10.995796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.057 [2024-07-14 10:44:10.995816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.057 [2024-07-14 10:44:10.999251] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.057 [2024-07-14 10:44:10.999476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.057 [2024-07-14 10:44:10.999499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.057 [2024-07-14 10:44:11.002948] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.057 [2024-07-14 10:44:11.003176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.057 [2024-07-14 10:44:11.003196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.057 [2024-07-14 10:44:11.006687] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.057 [2024-07-14 10:44:11.006919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.057 [2024-07-14 10:44:11.006937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.057 [2024-07-14 10:44:11.010418] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.057 [2024-07-14 10:44:11.010640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.057 [2024-07-14 10:44:11.010660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.057 [2024-07-14 10:44:11.014134] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.057 [2024-07-14 10:44:11.014363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.057 [2024-07-14 10:44:11.014383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.057 [2024-07-14 10:44:11.017853] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.057 [2024-07-14 10:44:11.018077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.057 [2024-07-14 10:44:11.018095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:35:26.057 [2024-07-14 10:44:11.021538] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.057 [2024-07-14 10:44:11.021767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.057 [2024-07-14 10:44:11.021787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.057 [2024-07-14 10:44:11.025172] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.057 [2024-07-14 10:44:11.025397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.057 [2024-07-14 10:44:11.025416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.057 [2024-07-14 10:44:11.028880] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.057 [2024-07-14 10:44:11.029106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.057 [2024-07-14 10:44:11.029126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.058 [2024-07-14 10:44:11.032757] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.058 [2024-07-14 10:44:11.032984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.058 [2024-07-14 10:44:11.033004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.319 [2024-07-14 10:44:11.037200] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.319 [2024-07-14 10:44:11.037436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.319 [2024-07-14 10:44:11.037456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.319 [2024-07-14 10:44:11.042343] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.319 [2024-07-14 10:44:11.042593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.319 [2024-07-14 10:44:11.042613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.319 [2024-07-14 10:44:11.047140] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.319 [2024-07-14 10:44:11.047366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.319 [2024-07-14 10:44:11.047385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.319 [2024-07-14 10:44:11.051318] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.319 [2024-07-14 10:44:11.051542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.319 [2024-07-14 10:44:11.051561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.319 [2024-07-14 10:44:11.055315] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.319 [2024-07-14 10:44:11.055540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.319 [2024-07-14 10:44:11.055559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.319 [2024-07-14 10:44:11.059031] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.319 [2024-07-14 10:44:11.059263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.319 [2024-07-14 10:44:11.059283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.319 [2024-07-14 10:44:11.062760] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.319 [2024-07-14 10:44:11.062984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.319 [2024-07-14 10:44:11.063003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.319 [2024-07-14 10:44:11.066483] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.319 [2024-07-14 10:44:11.066703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.319 [2024-07-14 10:44:11.066723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.319 [2024-07-14 10:44:11.070168] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.319 [2024-07-14 10:44:11.070407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.319 [2024-07-14 10:44:11.070426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.319 [2024-07-14 10:44:11.073877] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.319 [2024-07-14 10:44:11.074101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.319 [2024-07-14 10:44:11.074121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.319 [2024-07-14 10:44:11.077654] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.319 [2024-07-14 10:44:11.077878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.319 [2024-07-14 10:44:11.077897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.319 [2024-07-14 10:44:11.081395] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.319 [2024-07-14 10:44:11.081618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.319 [2024-07-14 10:44:11.081637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.319 [2024-07-14 10:44:11.085075] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.319 [2024-07-14 10:44:11.085306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.319 [2024-07-14 10:44:11.085326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.319 [2024-07-14 10:44:11.088772] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.319 [2024-07-14 10:44:11.089006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.319 [2024-07-14 10:44:11.089024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.319 [2024-07-14 10:44:11.092493] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.319 [2024-07-14 10:44:11.092718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.319 [2024-07-14 10:44:11.092738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.319 [2024-07-14 10:44:11.096274] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.319 [2024-07-14 10:44:11.096494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.319 [2024-07-14 10:44:11.096513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.319 [2024-07-14 10:44:11.100086] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.319 [2024-07-14 10:44:11.100318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.319 [2024-07-14 10:44:11.100342] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.319 [2024-07-14 10:44:11.103928] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.319 [2024-07-14 10:44:11.104157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.319 [2024-07-14 10:44:11.104176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.319 [2024-07-14 10:44:11.107721] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.319 [2024-07-14 10:44:11.107954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.319 [2024-07-14 10:44:11.107973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.319 [2024-07-14 10:44:11.111477] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.319 [2024-07-14 10:44:11.111707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.319 [2024-07-14 10:44:11.111726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.319 [2024-07-14 10:44:11.115214] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.319 [2024-07-14 10:44:11.115440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.319 [2024-07-14 10:44:11.115459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.319 [2024-07-14 10:44:11.118946] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.320 [2024-07-14 10:44:11.119171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.320 [2024-07-14 10:44:11.119190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.320 [2024-07-14 10:44:11.122656] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.320 [2024-07-14 10:44:11.122883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.320 [2024-07-14 10:44:11.122902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.320 [2024-07-14 10:44:11.126389] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.320 [2024-07-14 10:44:11.126620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.320 
[2024-07-14 10:44:11.126639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.320 [2024-07-14 10:44:11.130315] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.320 [2024-07-14 10:44:11.130637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.320 [2024-07-14 10:44:11.130656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.320 [2024-07-14 10:44:11.134093] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.320 [2024-07-14 10:44:11.134329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.320 [2024-07-14 10:44:11.134348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.320 [2024-07-14 10:44:11.137861] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.320 [2024-07-14 10:44:11.138084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.320 [2024-07-14 10:44:11.138104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.320 [2024-07-14 10:44:11.141595] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.320 [2024-07-14 10:44:11.141816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.320 [2024-07-14 10:44:11.141835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.320 [2024-07-14 10:44:11.145340] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.320 [2024-07-14 10:44:11.145550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.320 [2024-07-14 10:44:11.145569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.320 [2024-07-14 10:44:11.149561] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.320 [2024-07-14 10:44:11.149801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.320 [2024-07-14 10:44:11.149820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.320 [2024-07-14 10:44:11.154398] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.320 [2024-07-14 10:44:11.154628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.320 [2024-07-14 10:44:11.154647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.320 [2024-07-14 10:44:11.158948] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.320 [2024-07-14 10:44:11.159177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.320 [2024-07-14 10:44:11.159195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.320 [2024-07-14 10:44:11.163240] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.320 [2024-07-14 10:44:11.163472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.320 [2024-07-14 10:44:11.163491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.320 [2024-07-14 10:44:11.167397] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.320 [2024-07-14 10:44:11.167631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.320 [2024-07-14 10:44:11.167654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.320 [2024-07-14 10:44:11.171211] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.320 [2024-07-14 10:44:11.171444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.320 [2024-07-14 10:44:11.171463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.320 [2024-07-14 10:44:11.174989] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.320 [2024-07-14 10:44:11.175219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.320 [2024-07-14 10:44:11.175243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.320 [2024-07-14 10:44:11.179202] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.320 [2024-07-14 10:44:11.179421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.320 [2024-07-14 10:44:11.179440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.320 [2024-07-14 10:44:11.183148] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.320 [2024-07-14 10:44:11.183387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.320 [2024-07-14 10:44:11.183406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.320 [2024-07-14 10:44:11.187027] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.320 [2024-07-14 10:44:11.187254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.320 [2024-07-14 10:44:11.187273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.320 [2024-07-14 10:44:11.190836] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.320 [2024-07-14 10:44:11.191053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.320 [2024-07-14 10:44:11.191072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.320 [2024-07-14 10:44:11.194888] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.320 [2024-07-14 10:44:11.195119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.320 [2024-07-14 10:44:11.195139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.320 [2024-07-14 10:44:11.199003] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.320 [2024-07-14 10:44:11.199283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.320 [2024-07-14 10:44:11.199301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.320 [2024-07-14 10:44:11.202838] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.320 [2024-07-14 10:44:11.203067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.320 [2024-07-14 10:44:11.203086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.320 [2024-07-14 10:44:11.206640] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.320 [2024-07-14 10:44:11.206860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.320 [2024-07-14 10:44:11.206879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.320 [2024-07-14 10:44:11.210400] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.320 [2024-07-14 10:44:11.210634] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.320 [2024-07-14 10:44:11.210652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.320 [2024-07-14 10:44:11.214170] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.320 [2024-07-14 10:44:11.214393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.320 [2024-07-14 10:44:11.214412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.320 [2024-07-14 10:44:11.217949] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.320 [2024-07-14 10:44:11.218175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.320 [2024-07-14 10:44:11.218194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.320 [2024-07-14 10:44:11.221854] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.320 [2024-07-14 10:44:11.222085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.320 [2024-07-14 10:44:11.222105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.320 [2024-07-14 10:44:11.226313] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.320 [2024-07-14 10:44:11.226541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.320 [2024-07-14 10:44:11.226560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.320 [2024-07-14 10:44:11.231192] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.320 [2024-07-14 10:44:11.231432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.320 [2024-07-14 10:44:11.231453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.321 [2024-07-14 10:44:11.235484] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.321 [2024-07-14 10:44:11.235712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.321 [2024-07-14 10:44:11.235733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.321 [2024-07-14 10:44:11.239841] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.321 
[2024-07-14 10:44:11.240057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.321 [2024-07-14 10:44:11.240078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.321 [2024-07-14 10:44:11.244296] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.321 [2024-07-14 10:44:11.244519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.321 [2024-07-14 10:44:11.244538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.321 [2024-07-14 10:44:11.248450] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.321 [2024-07-14 10:44:11.248662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.321 [2024-07-14 10:44:11.248681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.321 [2024-07-14 10:44:11.252858] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.321 [2024-07-14 10:44:11.253088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.321 [2024-07-14 10:44:11.253107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.321 [2024-07-14 10:44:11.257232] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.321 [2024-07-14 10:44:11.257459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.321 [2024-07-14 10:44:11.257478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.321 [2024-07-14 10:44:11.261415] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.321 [2024-07-14 10:44:11.261639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.321 [2024-07-14 10:44:11.261658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.321 [2024-07-14 10:44:11.265767] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.321 [2024-07-14 10:44:11.265996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.321 [2024-07-14 10:44:11.266015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.321 [2024-07-14 10:44:11.270279] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.321 [2024-07-14 10:44:11.270537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.321 [2024-07-14 10:44:11.270556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.321 [2024-07-14 10:44:11.274940] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.321 [2024-07-14 10:44:11.275214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.321 [2024-07-14 10:44:11.275243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.321 [2024-07-14 10:44:11.280638] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.321 [2024-07-14 10:44:11.280869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.321 [2024-07-14 10:44:11.280888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.321 [2024-07-14 10:44:11.285856] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.321 [2024-07-14 10:44:11.286086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.321 [2024-07-14 10:44:11.286105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.321 [2024-07-14 10:44:11.290022] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.321 [2024-07-14 10:44:11.290274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.321 [2024-07-14 10:44:11.290292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.321 [2024-07-14 10:44:11.294293] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.321 [2024-07-14 10:44:11.294524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.321 [2024-07-14 10:44:11.294543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.582 [2024-07-14 10:44:11.298434] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.582 [2024-07-14 10:44:11.298658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.582 [2024-07-14 10:44:11.298677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.582 [2024-07-14 10:44:11.302616] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.582 [2024-07-14 10:44:11.302838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.582 [2024-07-14 10:44:11.302858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.582 [2024-07-14 10:44:11.306741] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.582 [2024-07-14 10:44:11.306959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.582 [2024-07-14 10:44:11.306978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.582 [2024-07-14 10:44:11.310868] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.582 [2024-07-14 10:44:11.311092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.582 [2024-07-14 10:44:11.311112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.582 [2024-07-14 10:44:11.315009] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.582 [2024-07-14 10:44:11.315221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.582 [2024-07-14 10:44:11.315245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.582 [2024-07-14 10:44:11.318960] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.582 [2024-07-14 10:44:11.319192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.582 [2024-07-14 10:44:11.319211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.582 [2024-07-14 10:44:11.322713] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.582 [2024-07-14 10:44:11.322936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.582 [2024-07-14 10:44:11.322955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.582 [2024-07-14 10:44:11.326470] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.582 [2024-07-14 10:44:11.326701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.582 [2024-07-14 10:44:11.326721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:35:26.582 [2024-07-14 10:44:11.330189] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.582 [2024-07-14 10:44:11.330416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.582 [2024-07-14 10:44:11.330436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.582 [2024-07-14 10:44:11.334411] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.582 [2024-07-14 10:44:11.334633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.582 [2024-07-14 10:44:11.334652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.582 [2024-07-14 10:44:11.338269] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.582 [2024-07-14 10:44:11.338495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.582 [2024-07-14 10:44:11.338515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.582 [2024-07-14 10:44:11.342000] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.582 [2024-07-14 10:44:11.342234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.582 [2024-07-14 10:44:11.342254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.582 [2024-07-14 10:44:11.345761] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.582 [2024-07-14 10:44:11.345992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.582 [2024-07-14 10:44:11.346011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.582 [2024-07-14 10:44:11.349471] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.582 [2024-07-14 10:44:11.349703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.582 [2024-07-14 10:44:11.349721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.582 [2024-07-14 10:44:11.353284] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.582 [2024-07-14 10:44:11.353500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.582 [2024-07-14 10:44:11.353519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.582 [2024-07-14 10:44:11.357114] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.582 [2024-07-14 10:44:11.357344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.582 [2024-07-14 10:44:11.357364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.582 [2024-07-14 10:44:11.360958] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.582 [2024-07-14 10:44:11.361183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.582 [2024-07-14 10:44:11.361202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.582 [2024-07-14 10:44:11.364670] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.582 [2024-07-14 10:44:11.364895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.582 [2024-07-14 10:44:11.364915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.582 [2024-07-14 10:44:11.368376] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.582 [2024-07-14 10:44:11.368598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.582 [2024-07-14 10:44:11.368618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.582 [2024-07-14 10:44:11.372051] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.582 [2024-07-14 10:44:11.372283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.582 [2024-07-14 10:44:11.372302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.582 [2024-07-14 10:44:11.375842] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.582 [2024-07-14 10:44:11.376065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.582 [2024-07-14 10:44:11.376083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.582 [2024-07-14 10:44:11.379893] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.582 [2024-07-14 10:44:11.380119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.582 [2024-07-14 10:44:11.380142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.582 [2024-07-14 10:44:11.383637] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.582 [2024-07-14 10:44:11.383858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.582 [2024-07-14 10:44:11.383878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.582 [2024-07-14 10:44:11.387399] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.582 [2024-07-14 10:44:11.387633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.582 [2024-07-14 10:44:11.387653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.582 [2024-07-14 10:44:11.391108] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.582 [2024-07-14 10:44:11.391336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.582 [2024-07-14 10:44:11.391355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.582 [2024-07-14 10:44:11.394863] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.582 [2024-07-14 10:44:11.395090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.582 [2024-07-14 10:44:11.395109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.582 [2024-07-14 10:44:11.398598] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.582 [2024-07-14 10:44:11.398821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.582 [2024-07-14 10:44:11.398840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.582 [2024-07-14 10:44:11.402365] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.582 [2024-07-14 10:44:11.402586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.583 [2024-07-14 10:44:11.402606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.583 [2024-07-14 10:44:11.406338] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.583 [2024-07-14 10:44:11.406561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.583 [2024-07-14 10:44:11.406580] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.583 [2024-07-14 10:44:11.411762] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.583 [2024-07-14 10:44:11.411981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.583 [2024-07-14 10:44:11.412001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.583 [2024-07-14 10:44:11.416501] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.583 [2024-07-14 10:44:11.416746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.583 [2024-07-14 10:44:11.416765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.583 [2024-07-14 10:44:11.420762] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.583 [2024-07-14 10:44:11.420992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.583 [2024-07-14 10:44:11.421012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.583 [2024-07-14 10:44:11.425039] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.583 [2024-07-14 10:44:11.425267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.583 [2024-07-14 10:44:11.425287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.583 [2024-07-14 10:44:11.429187] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.583 [2024-07-14 10:44:11.429403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.583 [2024-07-14 10:44:11.429421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.583 [2024-07-14 10:44:11.433549] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.583 [2024-07-14 10:44:11.433779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.583 [2024-07-14 10:44:11.433798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.583 [2024-07-14 10:44:11.437431] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.583 [2024-07-14 10:44:11.437657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.583 
[2024-07-14 10:44:11.437676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.583 [2024-07-14 10:44:11.441163] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.583 [2024-07-14 10:44:11.441395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.583 [2024-07-14 10:44:11.441414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.583 [2024-07-14 10:44:11.444890] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.583 [2024-07-14 10:44:11.445113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.583 [2024-07-14 10:44:11.445132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.583 [2024-07-14 10:44:11.448594] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.583 [2024-07-14 10:44:11.448809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.583 [2024-07-14 10:44:11.448828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.583 [2024-07-14 10:44:11.452338] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.583 [2024-07-14 10:44:11.452573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.583 [2024-07-14 10:44:11.452593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.583 [2024-07-14 10:44:11.456195] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.583 [2024-07-14 10:44:11.456419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.583 [2024-07-14 10:44:11.456438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.583 [2024-07-14 10:44:11.460659] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.583 [2024-07-14 10:44:11.460870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.583 [2024-07-14 10:44:11.460890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.583 [2024-07-14 10:44:11.465503] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.583 [2024-07-14 10:44:11.465728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.583 [2024-07-14 10:44:11.465749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.583 [2024-07-14 10:44:11.469772] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.583 [2024-07-14 10:44:11.470002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.583 [2024-07-14 10:44:11.470022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.583 [2024-07-14 10:44:11.474052] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.583 [2024-07-14 10:44:11.474296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.583 [2024-07-14 10:44:11.474315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.583 [2024-07-14 10:44:11.478430] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.583 [2024-07-14 10:44:11.478654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.583 [2024-07-14 10:44:11.478673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.583 [2024-07-14 10:44:11.482722] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.583 [2024-07-14 10:44:11.482969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.583 [2024-07-14 10:44:11.482988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.583 [2024-07-14 10:44:11.487149] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.583 [2024-07-14 10:44:11.487370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.583 [2024-07-14 10:44:11.487393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.583 [2024-07-14 10:44:11.490988] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.583 [2024-07-14 10:44:11.491200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.583 [2024-07-14 10:44:11.491218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.583 [2024-07-14 10:44:11.494787] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.583 [2024-07-14 10:44:11.494993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.583 [2024-07-14 10:44:11.495010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.583 [2024-07-14 10:44:11.498611] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.583 [2024-07-14 10:44:11.498817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.583 [2024-07-14 10:44:11.498836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.583 [2024-07-14 10:44:11.502349] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.583 [2024-07-14 10:44:11.502557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.583 [2024-07-14 10:44:11.502576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.583 [2024-07-14 10:44:11.506100] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.583 [2024-07-14 10:44:11.506306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.583 [2024-07-14 10:44:11.506324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.583 [2024-07-14 10:44:11.510295] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.583 [2024-07-14 10:44:11.510493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.583 [2024-07-14 10:44:11.510510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.583 [2024-07-14 10:44:11.515321] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.583 [2024-07-14 10:44:11.515521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.583 [2024-07-14 10:44:11.515539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.583 [2024-07-14 10:44:11.519725] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.583 [2024-07-14 10:44:11.519916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.583 [2024-07-14 10:44:11.519933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.583 [2024-07-14 10:44:11.523838] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.583 [2024-07-14 10:44:11.524037] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.583 [2024-07-14 10:44:11.524054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.584 [2024-07-14 10:44:11.528262] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.584 [2024-07-14 10:44:11.528464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.584 [2024-07-14 10:44:11.528482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.584 [2024-07-14 10:44:11.532487] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.584 [2024-07-14 10:44:11.532714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.584 [2024-07-14 10:44:11.532733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.584 [2024-07-14 10:44:11.536388] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.584 [2024-07-14 10:44:11.536598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.584 [2024-07-14 10:44:11.536619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.584 [2024-07-14 10:44:11.540112] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.584 [2024-07-14 10:44:11.540334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.584 [2024-07-14 10:44:11.540353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.584 [2024-07-14 10:44:11.543876] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.584 [2024-07-14 10:44:11.544090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.584 [2024-07-14 10:44:11.544108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.584 [2024-07-14 10:44:11.547696] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.584 [2024-07-14 10:44:11.547896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.584 [2024-07-14 10:44:11.547915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.584 [2024-07-14 10:44:11.551782] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.584 
[2024-07-14 10:44:11.551986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.584 [2024-07-14 10:44:11.552003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.584 [2024-07-14 10:44:11.556608] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.584 [2024-07-14 10:44:11.556804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.584 [2024-07-14 10:44:11.556825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.844 [2024-07-14 10:44:11.561212] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.844 [2024-07-14 10:44:11.561417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.844 [2024-07-14 10:44:11.561435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.844 [2024-07-14 10:44:11.565759] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.844 [2024-07-14 10:44:11.566006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.844 [2024-07-14 10:44:11.566026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.844 [2024-07-14 10:44:11.570988] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.844 [2024-07-14 10:44:11.571207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.844 [2024-07-14 10:44:11.571232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.844 [2024-07-14 10:44:11.575702] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.844 [2024-07-14 10:44:11.575911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.844 [2024-07-14 10:44:11.575932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.844 [2024-07-14 10:44:11.579979] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.844 [2024-07-14 10:44:11.580176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.844 [2024-07-14 10:44:11.580194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.844 [2024-07-14 10:44:11.584120] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.844 [2024-07-14 10:44:11.584317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.844 [2024-07-14 10:44:11.584336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.844 [2024-07-14 10:44:11.588221] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.844 [2024-07-14 10:44:11.588442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.845 [2024-07-14 10:44:11.588460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.845 [2024-07-14 10:44:11.592365] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.845 [2024-07-14 10:44:11.592559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.845 [2024-07-14 10:44:11.592577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.845 [2024-07-14 10:44:11.596484] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.845 [2024-07-14 10:44:11.596687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.845 [2024-07-14 10:44:11.596707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.845 [2024-07-14 10:44:11.600634] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.845 [2024-07-14 10:44:11.600834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.845 [2024-07-14 10:44:11.600853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.845 [2024-07-14 10:44:11.604872] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.845 [2024-07-14 10:44:11.605068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.845 [2024-07-14 10:44:11.605086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.845 [2024-07-14 10:44:11.609386] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.845 [2024-07-14 10:44:11.609586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.845 [2024-07-14 10:44:11.609604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.845 [2024-07-14 10:44:11.613868] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.845 [2024-07-14 10:44:11.614073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.845 [2024-07-14 10:44:11.614092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.845 [2024-07-14 10:44:11.618140] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.845 [2024-07-14 10:44:11.618352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.845 [2024-07-14 10:44:11.618372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.845 [2024-07-14 10:44:11.622674] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.845 [2024-07-14 10:44:11.622893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.845 [2024-07-14 10:44:11.622912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.845 [2024-07-14 10:44:11.626821] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.845 [2024-07-14 10:44:11.627043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.845 [2024-07-14 10:44:11.627061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.845 [2024-07-14 10:44:11.630986] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.845 [2024-07-14 10:44:11.631202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.845 [2024-07-14 10:44:11.631221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.845 [2024-07-14 10:44:11.634984] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.845 [2024-07-14 10:44:11.635196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.845 [2024-07-14 10:44:11.635216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.845 [2024-07-14 10:44:11.638826] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.845 [2024-07-14 10:44:11.639051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.845 [2024-07-14 10:44:11.639071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:35:26.845 [2024-07-14 10:44:11.642733] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.845 [2024-07-14 10:44:11.642947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.845 [2024-07-14 10:44:11.642967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.845 [2024-07-14 10:44:11.646616] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.845 [2024-07-14 10:44:11.646820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.845 [2024-07-14 10:44:11.646839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.845 [2024-07-14 10:44:11.650569] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.845 [2024-07-14 10:44:11.650781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.845 [2024-07-14 10:44:11.650800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.845 [2024-07-14 10:44:11.655412] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.845 [2024-07-14 10:44:11.655627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.845 [2024-07-14 10:44:11.655647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.845 [2024-07-14 10:44:11.660694] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.845 [2024-07-14 10:44:11.660926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.845 [2024-07-14 10:44:11.660946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.845 [2024-07-14 10:44:11.665286] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.845 [2024-07-14 10:44:11.665494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.845 [2024-07-14 10:44:11.665514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.845 [2024-07-14 10:44:11.669677] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.845 [2024-07-14 10:44:11.669894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.845 [2024-07-14 10:44:11.669917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.845 [2024-07-14 10:44:11.673635] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.845 [2024-07-14 10:44:11.673844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.845 [2024-07-14 10:44:11.673864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.845 [2024-07-14 10:44:11.677536] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.845 [2024-07-14 10:44:11.677754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.845 [2024-07-14 10:44:11.677774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.845 [2024-07-14 10:44:11.681471] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.845 [2024-07-14 10:44:11.681686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.845 [2024-07-14 10:44:11.681705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.845 [2024-07-14 10:44:11.685340] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.845 [2024-07-14 10:44:11.685538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.845 [2024-07-14 10:44:11.685555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.845 [2024-07-14 10:44:11.689188] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.845 [2024-07-14 10:44:11.689383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.845 [2024-07-14 10:44:11.689400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.845 [2024-07-14 10:44:11.693498] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.845 [2024-07-14 10:44:11.693687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.845 [2024-07-14 10:44:11.693704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.845 [2024-07-14 10:44:11.698176] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.845 [2024-07-14 10:44:11.698389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.845 [2024-07-14 10:44:11.698408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.845 [2024-07-14 10:44:11.702505] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.845 [2024-07-14 10:44:11.702710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.845 [2024-07-14 10:44:11.702729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.845 [2024-07-14 10:44:11.706717] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.845 [2024-07-14 10:44:11.706929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.845 [2024-07-14 10:44:11.706948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.845 [2024-07-14 10:44:11.711098] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.846 [2024-07-14 10:44:11.711312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.846 [2024-07-14 10:44:11.711332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.846 [2024-07-14 10:44:11.715452] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.846 [2024-07-14 10:44:11.715648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.846 [2024-07-14 10:44:11.715667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.846 [2024-07-14 10:44:11.719513] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.846 [2024-07-14 10:44:11.719729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.846 [2024-07-14 10:44:11.719748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.846 [2024-07-14 10:44:11.723595] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.846 [2024-07-14 10:44:11.723798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.846 [2024-07-14 10:44:11.723817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.846 [2024-07-14 10:44:11.727955] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.846 [2024-07-14 10:44:11.728157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.846 [2024-07-14 10:44:11.728177] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.846 [2024-07-14 10:44:11.732017] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.846 [2024-07-14 10:44:11.732218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.846 [2024-07-14 10:44:11.732243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.846 [2024-07-14 10:44:11.736136] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.846 [2024-07-14 10:44:11.736346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.846 [2024-07-14 10:44:11.736364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.846 [2024-07-14 10:44:11.740360] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.846 [2024-07-14 10:44:11.740575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.846 [2024-07-14 10:44:11.740594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.846 [2024-07-14 10:44:11.744510] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.846 [2024-07-14 10:44:11.744729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.846 [2024-07-14 10:44:11.744748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.846 [2024-07-14 10:44:11.748606] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.846 [2024-07-14 10:44:11.748804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.846 [2024-07-14 10:44:11.748821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.846 [2024-07-14 10:44:11.752713] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.846 [2024-07-14 10:44:11.752926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.846 [2024-07-14 10:44:11.752945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.846 [2024-07-14 10:44:11.757009] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.846 [2024-07-14 10:44:11.757223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.846 
[2024-07-14 10:44:11.757248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.846 [2024-07-14 10:44:11.761373] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.846 [2024-07-14 10:44:11.761593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.846 [2024-07-14 10:44:11.761611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.846 [2024-07-14 10:44:11.765444] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.846 [2024-07-14 10:44:11.765647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.846 [2024-07-14 10:44:11.765666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.846 [2024-07-14 10:44:11.769508] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.846 [2024-07-14 10:44:11.769725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.846 [2024-07-14 10:44:11.769744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.846 [2024-07-14 10:44:11.773634] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.846 [2024-07-14 10:44:11.773839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.846 [2024-07-14 10:44:11.773858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.846 [2024-07-14 10:44:11.777730] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.846 [2024-07-14 10:44:11.777949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.846 [2024-07-14 10:44:11.777973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.846 [2024-07-14 10:44:11.781938] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.846 [2024-07-14 10:44:11.782160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.846 [2024-07-14 10:44:11.782179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.846 [2024-07-14 10:44:11.785740] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.846 [2024-07-14 10:44:11.785952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.846 [2024-07-14 10:44:11.785970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.846 [2024-07-14 10:44:11.789535] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.846 [2024-07-14 10:44:11.789750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.846 [2024-07-14 10:44:11.789769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.846 [2024-07-14 10:44:11.793327] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.846 [2024-07-14 10:44:11.793536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.846 [2024-07-14 10:44:11.793555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.846 [2024-07-14 10:44:11.797049] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.846 [2024-07-14 10:44:11.797259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.846 [2024-07-14 10:44:11.797278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.846 [2024-07-14 10:44:11.800808] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.846 [2024-07-14 10:44:11.801024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.846 [2024-07-14 10:44:11.801043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.846 [2024-07-14 10:44:11.804728] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.846 [2024-07-14 10:44:11.804942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.846 [2024-07-14 10:44:11.804961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.846 [2024-07-14 10:44:11.809206] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.846 [2024-07-14 10:44:11.809409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.846 [2024-07-14 10:44:11.809428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.846 [2024-07-14 10:44:11.814097] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.846 [2024-07-14 10:44:11.814319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.846 [2024-07-14 10:44:11.814338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.846 [2024-07-14 10:44:11.818481] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:26.846 [2024-07-14 10:44:11.818691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.846 [2024-07-14 10:44:11.818711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.846 [2024-07-14 10:44:11.822710] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.107 [2024-07-14 10:44:11.822915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.107 [2024-07-14 10:44:11.822937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.107 [2024-07-14 10:44:11.826780] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.107 [2024-07-14 10:44:11.827001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.107 [2024-07-14 10:44:11.827020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:27.107 [2024-07-14 10:44:11.830790] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.108 [2024-07-14 10:44:11.831007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.108 [2024-07-14 10:44:11.831026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.108 [2024-07-14 10:44:11.834856] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.108 [2024-07-14 10:44:11.835052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.108 [2024-07-14 10:44:11.835072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:27.108 [2024-07-14 10:44:11.839522] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.108 [2024-07-14 10:44:11.839719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.108 [2024-07-14 10:44:11.839739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.108 [2024-07-14 10:44:11.844650] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.108 [2024-07-14 10:44:11.844868] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.108 [2024-07-14 10:44:11.844887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:27.108 [2024-07-14 10:44:11.849196] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.108 [2024-07-14 10:44:11.849415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.108 [2024-07-14 10:44:11.849437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.108 [2024-07-14 10:44:11.853349] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.108 [2024-07-14 10:44:11.853551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.108 [2024-07-14 10:44:11.853569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:27.108 [2024-07-14 10:44:11.857772] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.108 [2024-07-14 10:44:11.857990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.108 [2024-07-14 10:44:11.858010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.108 [2024-07-14 10:44:11.862135] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.108 [2024-07-14 10:44:11.862353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.108 [2024-07-14 10:44:11.862371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:27.108 [2024-07-14 10:44:11.866303] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.108 [2024-07-14 10:44:11.866506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.108 [2024-07-14 10:44:11.866524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.108 [2024-07-14 10:44:11.870461] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.108 [2024-07-14 10:44:11.870662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.108 [2024-07-14 10:44:11.870680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:27.108 [2024-07-14 10:44:11.874825] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.108 
[2024-07-14 10:44:11.875026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.108 [2024-07-14 10:44:11.875044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.108 [2024-07-14 10:44:11.878923] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.108 [2024-07-14 10:44:11.879132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.108 [2024-07-14 10:44:11.879150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:27.108 [2024-07-14 10:44:11.883073] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.108 [2024-07-14 10:44:11.883274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.108 [2024-07-14 10:44:11.883292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.108 [2024-07-14 10:44:11.887491] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.108 [2024-07-14 10:44:11.887700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.108 [2024-07-14 10:44:11.887726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:27.108 [2024-07-14 10:44:11.891609] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.108 [2024-07-14 10:44:11.891812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.108 [2024-07-14 10:44:11.891832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.108 [2024-07-14 10:44:11.895759] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.108 [2024-07-14 10:44:11.895963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.108 [2024-07-14 10:44:11.895982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:27.108 [2024-07-14 10:44:11.900010] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.108 [2024-07-14 10:44:11.900219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.108 [2024-07-14 10:44:11.900243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.108 [2024-07-14 10:44:11.904578] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) 
with pdu=0x2000190fef90 00:35:27.108 [2024-07-14 10:44:11.904774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.108 [2024-07-14 10:44:11.904793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:27.108 [2024-07-14 10:44:11.908787] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.108 [2024-07-14 10:44:11.909002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.108 [2024-07-14 10:44:11.909021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.108 [2024-07-14 10:44:11.912969] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.108 [2024-07-14 10:44:11.913189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.108 [2024-07-14 10:44:11.913209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:27.108 [2024-07-14 10:44:11.917493] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.108 [2024-07-14 10:44:11.917692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.108 [2024-07-14 10:44:11.917711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.108 [2024-07-14 10:44:11.922055] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.108 [2024-07-14 10:44:11.922260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.108 [2024-07-14 10:44:11.922277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:27.108 [2024-07-14 10:44:11.926907] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.108 [2024-07-14 10:44:11.927107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.108 [2024-07-14 10:44:11.927126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.108 [2024-07-14 10:44:11.932336] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.108 [2024-07-14 10:44:11.932549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.108 [2024-07-14 10:44:11.932568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:27.108 [2024-07-14 10:44:11.937181] tcp.c:2067:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.108 [2024-07-14 10:44:11.937395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.108 [2024-07-14 10:44:11.937414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.108 [2024-07-14 10:44:11.941455] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.108 [2024-07-14 10:44:11.941666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.108 [2024-07-14 10:44:11.941685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:27.108 [2024-07-14 10:44:11.945369] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.108 [2024-07-14 10:44:11.945585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.108 [2024-07-14 10:44:11.945604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.108 [2024-07-14 10:44:11.949219] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.108 [2024-07-14 10:44:11.949432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.108 [2024-07-14 10:44:11.949451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:27.108 [2024-07-14 10:44:11.953093] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.108 [2024-07-14 10:44:11.953308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.108 [2024-07-14 10:44:11.953326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.108 [2024-07-14 10:44:11.957307] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.109 [2024-07-14 10:44:11.957511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.109 [2024-07-14 10:44:11.957529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:27.109 [2024-07-14 10:44:11.962251] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.109 [2024-07-14 10:44:11.962480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.109 [2024-07-14 10:44:11.962506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.109 [2024-07-14 10:44:11.967048] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.109 [2024-07-14 10:44:11.967251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.109 [2024-07-14 10:44:11.967269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:27.109 [2024-07-14 10:44:11.971215] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.109 [2024-07-14 10:44:11.971437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.109 [2024-07-14 10:44:11.971457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.109 [2024-07-14 10:44:11.975530] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.109 [2024-07-14 10:44:11.975735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.109 [2024-07-14 10:44:11.975753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:27.109 [2024-07-14 10:44:11.979729] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.109 [2024-07-14 10:44:11.979949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.109 [2024-07-14 10:44:11.979967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.109 [2024-07-14 10:44:11.983929] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.109 [2024-07-14 10:44:11.984129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.109 [2024-07-14 10:44:11.984148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:27.109 [2024-07-14 10:44:11.987861] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.109 [2024-07-14 10:44:11.988070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.109 [2024-07-14 10:44:11.988089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.109 [2024-07-14 10:44:11.992164] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.109 [2024-07-14 10:44:11.992374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.109 [2024-07-14 10:44:11.992393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:35:27.109 [2024-07-14 10:44:11.997355] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.109 [2024-07-14 10:44:11.997556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.109 [2024-07-14 10:44:11.997576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.109 [2024-07-14 10:44:12.002023] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.109 [2024-07-14 10:44:12.002230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.109 [2024-07-14 10:44:12.002248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:27.109 [2024-07-14 10:44:12.006089] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.109 [2024-07-14 10:44:12.006293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.109 [2024-07-14 10:44:12.006311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.109 [2024-07-14 10:44:12.010410] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.109 [2024-07-14 10:44:12.010625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.109 [2024-07-14 10:44:12.010644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:27.109 [2024-07-14 10:44:12.014582] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.109 [2024-07-14 10:44:12.014794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.109 [2024-07-14 10:44:12.014814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.109 [2024-07-14 10:44:12.018910] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.109 [2024-07-14 10:44:12.019111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.109 [2024-07-14 10:44:12.019131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:27.109 [2024-07-14 10:44:12.023356] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.109 [2024-07-14 10:44:12.023603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.109 [2024-07-14 10:44:12.023622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.109 [2024-07-14 10:44:12.029204] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.109 [2024-07-14 10:44:12.029543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.109 [2024-07-14 10:44:12.029562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:27.109 [2024-07-14 10:44:12.035146] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.109 [2024-07-14 10:44:12.035387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.109 [2024-07-14 10:44:12.035408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.109 [2024-07-14 10:44:12.042076] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.109 [2024-07-14 10:44:12.042291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.109 [2024-07-14 10:44:12.042311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:27.109 [2024-07-14 10:44:12.049337] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.109 [2024-07-14 10:44:12.049567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.109 [2024-07-14 10:44:12.049587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.109 [2024-07-14 10:44:12.055640] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.109 [2024-07-14 10:44:12.055915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.109 [2024-07-14 10:44:12.055934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:27.109 [2024-07-14 10:44:12.061827] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.109 [2024-07-14 10:44:12.062085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.109 [2024-07-14 10:44:12.062104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.109 [2024-07-14 10:44:12.068538] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.109 [2024-07-14 10:44:12.068772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.109 [2024-07-14 10:44:12.068790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:27.109 [2024-07-14 10:44:12.075360] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.109 [2024-07-14 10:44:12.075639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.109 [2024-07-14 10:44:12.075658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.109 [2024-07-14 10:44:12.082230] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.109 [2024-07-14 10:44:12.082530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.109 [2024-07-14 10:44:12.082549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:27.371 [2024-07-14 10:44:12.088861] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.371 [2024-07-14 10:44:12.089108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.371 [2024-07-14 10:44:12.089128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.371 [2024-07-14 10:44:12.094969] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.371 [2024-07-14 10:44:12.095197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.371 [2024-07-14 10:44:12.095217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:27.371 [2024-07-14 10:44:12.101809] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.371 [2024-07-14 10:44:12.102103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.371 [2024-07-14 10:44:12.102128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.371 [2024-07-14 10:44:12.108634] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.371 [2024-07-14 10:44:12.108907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.371 [2024-07-14 10:44:12.108927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:27.371 [2024-07-14 10:44:12.114832] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.371 [2024-07-14 10:44:12.115131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.371 [2024-07-14 10:44:12.115150] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.371 [2024-07-14 10:44:12.120005] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.371 [2024-07-14 10:44:12.120230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.371 [2024-07-14 10:44:12.120250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:27.371 [2024-07-14 10:44:12.124327] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.371 [2024-07-14 10:44:12.124559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.371 [2024-07-14 10:44:12.124579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.371 [2024-07-14 10:44:12.128641] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.371 [2024-07-14 10:44:12.128849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.371 [2024-07-14 10:44:12.128868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:27.371 [2024-07-14 10:44:12.132514] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.371 [2024-07-14 10:44:12.132734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.371 [2024-07-14 10:44:12.132753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.371 [2024-07-14 10:44:12.136448] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.371 [2024-07-14 10:44:12.136663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.371 [2024-07-14 10:44:12.136682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:27.371 [2024-07-14 10:44:12.140290] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.371 [2024-07-14 10:44:12.140501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.371 [2024-07-14 10:44:12.140520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.371 [2024-07-14 10:44:12.144131] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.371 [2024-07-14 10:44:12.144335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.371 
[2024-07-14 10:44:12.144352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:27.371 [2024-07-14 10:44:12.148012] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.371 [2024-07-14 10:44:12.148234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.371 [2024-07-14 10:44:12.148254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.371 [2024-07-14 10:44:12.151837] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.371 [2024-07-14 10:44:12.152041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.371 [2024-07-14 10:44:12.152060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:27.371 [2024-07-14 10:44:12.155642] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.371 [2024-07-14 10:44:12.155851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.371 [2024-07-14 10:44:12.155870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.371 [2024-07-14 10:44:12.159441] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.371 [2024-07-14 10:44:12.159646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.371 [2024-07-14 10:44:12.159665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:27.371 [2024-07-14 10:44:12.163269] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.371 [2024-07-14 10:44:12.163481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.371 [2024-07-14 10:44:12.163500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.371 [2024-07-14 10:44:12.167086] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.371 [2024-07-14 10:44:12.167296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.371 [2024-07-14 10:44:12.167314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:27.371 [2024-07-14 10:44:12.170922] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.371 [2024-07-14 10:44:12.171148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.371 [2024-07-14 10:44:12.171168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.372 [2024-07-14 10:44:12.174763] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.372 [2024-07-14 10:44:12.174965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.372 [2024-07-14 10:44:12.174982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:27.372 [2024-07-14 10:44:12.178552] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.372 [2024-07-14 10:44:12.178757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.372 [2024-07-14 10:44:12.178776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.372 [2024-07-14 10:44:12.182382] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.372 [2024-07-14 10:44:12.182592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.372 [2024-07-14 10:44:12.182612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:27.372 [2024-07-14 10:44:12.186253] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.372 [2024-07-14 10:44:12.186468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.372 [2024-07-14 10:44:12.186488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.372 [2024-07-14 10:44:12.190093] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.372 [2024-07-14 10:44:12.190309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.372 [2024-07-14 10:44:12.190327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:27.372 [2024-07-14 10:44:12.193957] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.372 [2024-07-14 10:44:12.194181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.372 [2024-07-14 10:44:12.194200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.372 [2024-07-14 10:44:12.197902] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.372 [2024-07-14 10:44:12.198114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.372 [2024-07-14 10:44:12.198133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:27.372 [2024-07-14 10:44:12.201798] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.372 [2024-07-14 10:44:12.201994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.372 [2024-07-14 10:44:12.202022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.372 [2024-07-14 10:44:12.205632] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.372 [2024-07-14 10:44:12.205834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.372 [2024-07-14 10:44:12.205854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:27.372 [2024-07-14 10:44:12.209399] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.372 [2024-07-14 10:44:12.209613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.372 [2024-07-14 10:44:12.209636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.372 [2024-07-14 10:44:12.213168] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.372 [2024-07-14 10:44:12.213376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.372 [2024-07-14 10:44:12.213395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:27.372 [2024-07-14 10:44:12.216960] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.372 [2024-07-14 10:44:12.217183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.372 [2024-07-14 10:44:12.217201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.372 [2024-07-14 10:44:12.220765] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.372 [2024-07-14 10:44:12.220975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.372 [2024-07-14 10:44:12.220994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:27.372 [2024-07-14 10:44:12.224640] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.372 [2024-07-14 10:44:12.224848] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.372 [2024-07-14 10:44:12.224867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.372 [2024-07-14 10:44:12.228740] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.372 [2024-07-14 10:44:12.228946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.372 [2024-07-14 10:44:12.228966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:27.372 [2024-07-14 10:44:12.232594] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.372 [2024-07-14 10:44:12.232786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.372 [2024-07-14 10:44:12.232806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.372 [2024-07-14 10:44:12.236411] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.372 [2024-07-14 10:44:12.236632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.372 [2024-07-14 10:44:12.236652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:27.372 [2024-07-14 10:44:12.240188] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.372 [2024-07-14 10:44:12.240393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.372 [2024-07-14 10:44:12.240411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.372 [2024-07-14 10:44:12.243990] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.372 [2024-07-14 10:44:12.244202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.372 [2024-07-14 10:44:12.244221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:27.372 [2024-07-14 10:44:12.247758] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.372 [2024-07-14 10:44:12.247965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.372 [2024-07-14 10:44:12.247985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.372 [2024-07-14 10:44:12.251739] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.372 
[2024-07-14 10:44:12.251942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.372 [2024-07-14 10:44:12.251961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:27.372 [2024-07-14 10:44:12.256073] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.372 [2024-07-14 10:44:12.256282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.372 [2024-07-14 10:44:12.256302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.372 [2024-07-14 10:44:12.259861] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.372 [2024-07-14 10:44:12.260070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.372 [2024-07-14 10:44:12.260089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:27.372 [2024-07-14 10:44:12.263644] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.372 [2024-07-14 10:44:12.263847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.372 [2024-07-14 10:44:12.263866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.372 [2024-07-14 10:44:12.267389] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.372 [2024-07-14 10:44:12.267591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.372 [2024-07-14 10:44:12.267610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:27.372 [2024-07-14 10:44:12.271250] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.372 [2024-07-14 10:44:12.271453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.372 [2024-07-14 10:44:12.271473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.372 [2024-07-14 10:44:12.275047] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.372 [2024-07-14 10:44:12.275262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.372 [2024-07-14 10:44:12.275284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:27.372 [2024-07-14 10:44:12.278824] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.372 [2024-07-14 10:44:12.279032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.372 [2024-07-14 10:44:12.279051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.372 [2024-07-14 10:44:12.282571] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.372 [2024-07-14 10:44:12.282783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.373 [2024-07-14 10:44:12.282802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:27.373 [2024-07-14 10:44:12.286616] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.373 [2024-07-14 10:44:12.286813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.373 [2024-07-14 10:44:12.286832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.373 [2024-07-14 10:44:12.290796] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.373 [2024-07-14 10:44:12.290999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.373 [2024-07-14 10:44:12.291025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:27.373 [2024-07-14 10:44:12.294949] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.373 [2024-07-14 10:44:12.295149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.373 [2024-07-14 10:44:12.295168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.373 [2024-07-14 10:44:12.298742] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.373 [2024-07-14 10:44:12.298932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.373 [2024-07-14 10:44:12.298949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:27.373 [2024-07-14 10:44:12.302505] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.373 [2024-07-14 10:44:12.302722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.373 [2024-07-14 10:44:12.302742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.373 [2024-07-14 10:44:12.306257] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.373 [2024-07-14 10:44:12.306468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.373 [2024-07-14 10:44:12.306488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:27.373 [2024-07-14 10:44:12.309963] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.373 [2024-07-14 10:44:12.310190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.373 [2024-07-14 10:44:12.310209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.373 [2024-07-14 10:44:12.313688] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.373 [2024-07-14 10:44:12.313903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.373 [2024-07-14 10:44:12.313921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:27.373 [2024-07-14 10:44:12.317419] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.373 [2024-07-14 10:44:12.317627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.373 [2024-07-14 10:44:12.317646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.373 [2024-07-14 10:44:12.321159] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.373 [2024-07-14 10:44:12.321355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.373 [2024-07-14 10:44:12.321372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:27.373 [2024-07-14 10:44:12.324873] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.373 [2024-07-14 10:44:12.325082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.373 [2024-07-14 10:44:12.325102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.373 [2024-07-14 10:44:12.328592] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.373 [2024-07-14 10:44:12.328809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.373 [2024-07-14 10:44:12.328828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:35:27.373 [2024-07-14 10:44:12.332621] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.373 [2024-07-14 10:44:12.332827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.373 [2024-07-14 10:44:12.332846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.373 [2024-07-14 10:44:12.336457] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.373 [2024-07-14 10:44:12.336669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.373 [2024-07-14 10:44:12.336688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:27.373 [2024-07-14 10:44:12.340191] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.373 [2024-07-14 10:44:12.340400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.373 [2024-07-14 10:44:12.340418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.373 [2024-07-14 10:44:12.343944] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.373 [2024-07-14 10:44:12.344137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.373 [2024-07-14 10:44:12.344157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:27.373 [2024-07-14 10:44:12.347725] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.373 [2024-07-14 10:44:12.347935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.373 [2024-07-14 10:44:12.347954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.633 [2024-07-14 10:44:12.351502] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.633 [2024-07-14 10:44:12.351716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.633 [2024-07-14 10:44:12.351736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:27.633 [2024-07-14 10:44:12.355258] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.634 [2024-07-14 10:44:12.355472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.634 [2024-07-14 10:44:12.355492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.634 [2024-07-14 10:44:12.359002] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.634 [2024-07-14 10:44:12.359197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.634 [2024-07-14 10:44:12.359215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:27.634 [2024-07-14 10:44:12.362750] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.634 [2024-07-14 10:44:12.362963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.634 [2024-07-14 10:44:12.362983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.634 [2024-07-14 10:44:12.366516] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.634 [2024-07-14 10:44:12.366730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.634 [2024-07-14 10:44:12.366750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:27.634 [2024-07-14 10:44:12.370286] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.634 [2024-07-14 10:44:12.370502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.634 [2024-07-14 10:44:12.370521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.634 [2024-07-14 10:44:12.374034] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.634 [2024-07-14 10:44:12.374251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.634 [2024-07-14 10:44:12.374274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:27.634 [2024-07-14 10:44:12.377744] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.634 [2024-07-14 10:44:12.377960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.634 [2024-07-14 10:44:12.377979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.634 [2024-07-14 10:44:12.381448] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.634 [2024-07-14 10:44:12.381657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.634 [2024-07-14 10:44:12.381676] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:27.634 [2024-07-14 10:44:12.385187] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.634 [2024-07-14 10:44:12.385413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.634 [2024-07-14 10:44:12.385433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.634 [2024-07-14 10:44:12.388912] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.634 [2024-07-14 10:44:12.389113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.634 [2024-07-14 10:44:12.389131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:27.634 [2024-07-14 10:44:12.392636] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.634 [2024-07-14 10:44:12.392853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.634 [2024-07-14 10:44:12.392873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.634 [2024-07-14 10:44:12.396338] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.634 [2024-07-14 10:44:12.396564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.634 [2024-07-14 10:44:12.396583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:27.634 [2024-07-14 10:44:12.400024] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.634 [2024-07-14 10:44:12.400220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.634 [2024-07-14 10:44:12.400244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.634 [2024-07-14 10:44:12.403781] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.634 [2024-07-14 10:44:12.403994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.634 [2024-07-14 10:44:12.404012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:27.634 [2024-07-14 10:44:12.407517] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.634 [2024-07-14 10:44:12.407739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.634 [2024-07-14 10:44:12.407759] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.634 [2024-07-14 10:44:12.411442] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.634 [2024-07-14 10:44:12.411644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.634 [2024-07-14 10:44:12.411663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:27.634 [2024-07-14 10:44:12.416010] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.634 [2024-07-14 10:44:12.416212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.634 [2024-07-14 10:44:12.416236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.634 [2024-07-14 10:44:12.420873] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.634 [2024-07-14 10:44:12.421094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.634 [2024-07-14 10:44:12.421114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:27.634 [2024-07-14 10:44:12.425559] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.634 [2024-07-14 10:44:12.425768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.634 [2024-07-14 10:44:12.425788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.634 [2024-07-14 10:44:12.430535] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.634 [2024-07-14 10:44:12.430769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.634 [2024-07-14 10:44:12.430788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:27.634 [2024-07-14 10:44:12.435467] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.634 [2024-07-14 10:44:12.435709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.634 [2024-07-14 10:44:12.435729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.634 [2024-07-14 10:44:12.440890] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.634 [2024-07-14 10:44:12.441090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:27.634 [2024-07-14 10:44:12.441108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:27.634 [2024-07-14 10:44:12.446024] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.634 [2024-07-14 10:44:12.446222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.634 [2024-07-14 10:44:12.446246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.634 [2024-07-14 10:44:12.451039] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.634 [2024-07-14 10:44:12.451248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.634 [2024-07-14 10:44:12.451266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:27.634 [2024-07-14 10:44:12.456498] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.634 [2024-07-14 10:44:12.456715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.634 [2024-07-14 10:44:12.456734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.634 [2024-07-14 10:44:12.461319] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.634 [2024-07-14 10:44:12.461531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.634 [2024-07-14 10:44:12.461549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:27.634 [2024-07-14 10:44:12.466677] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.634 [2024-07-14 10:44:12.466874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.634 [2024-07-14 10:44:12.466892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.634 [2024-07-14 10:44:12.471615] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.634 [2024-07-14 10:44:12.471818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.634 [2024-07-14 10:44:12.471835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:27.634 [2024-07-14 10:44:12.477078] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.634 [2024-07-14 10:44:12.477284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.635 [2024-07-14 10:44:12.477303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.635 [2024-07-14 10:44:12.481916] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.635 [2024-07-14 10:44:12.482111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.635 [2024-07-14 10:44:12.482129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:27.635 [2024-07-14 10:44:12.486743] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.635 [2024-07-14 10:44:12.486958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.635 [2024-07-14 10:44:12.486978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.635 [2024-07-14 10:44:12.491743] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.635 [2024-07-14 10:44:12.491949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.635 [2024-07-14 10:44:12.491972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:27.635 [2024-07-14 10:44:12.497164] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.635 [2024-07-14 10:44:12.497383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.635 [2024-07-14 10:44:12.497402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.635 [2024-07-14 10:44:12.502175] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.635 [2024-07-14 10:44:12.502378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.635 [2024-07-14 10:44:12.502398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:27.635 [2024-07-14 10:44:12.507534] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.635 [2024-07-14 10:44:12.507743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.635 [2024-07-14 10:44:12.507762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.635 [2024-07-14 10:44:12.512545] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.635 [2024-07-14 10:44:12.512773] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.635 [2024-07-14 10:44:12.512792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:27.635 [2024-07-14 10:44:12.517769] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.635 [2024-07-14 10:44:12.517971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.635 [2024-07-14 10:44:12.517988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.635 [2024-07-14 10:44:12.522784] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.635 [2024-07-14 10:44:12.522993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.635 [2024-07-14 10:44:12.523012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:27.635 [2024-07-14 10:44:12.526947] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.635 [2024-07-14 10:44:12.527149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.635 [2024-07-14 10:44:12.527166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.635 [2024-07-14 10:44:12.530951] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.635 [2024-07-14 10:44:12.531160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.635 [2024-07-14 10:44:12.531179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:27.635 [2024-07-14 10:44:12.535387] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.635 [2024-07-14 10:44:12.535592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.635 [2024-07-14 10:44:12.535611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.635 [2024-07-14 10:44:12.539488] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.635 [2024-07-14 10:44:12.539694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.635 [2024-07-14 10:44:12.539713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:27.635 [2024-07-14 10:44:12.543333] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.635 [2024-07-14 10:44:12.543543] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.635 [2024-07-14 10:44:12.543562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.635 [2024-07-14 10:44:12.547144] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.635 [2024-07-14 10:44:12.547369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.635 [2024-07-14 10:44:12.547388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:27.635 [2024-07-14 10:44:12.550924] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.635 [2024-07-14 10:44:12.551139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.635 [2024-07-14 10:44:12.551158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.635 [2024-07-14 10:44:12.554752] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.635 [2024-07-14 10:44:12.554953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.635 [2024-07-14 10:44:12.554972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:27.635 [2024-07-14 10:44:12.558492] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.635 [2024-07-14 10:44:12.558722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.635 [2024-07-14 10:44:12.558742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.635 [2024-07-14 10:44:12.562254] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.635 [2024-07-14 10:44:12.562464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.635 [2024-07-14 10:44:12.562482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:27.635 [2024-07-14 10:44:12.566007] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.635 [2024-07-14 10:44:12.566232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.635 [2024-07-14 10:44:12.566255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.635 [2024-07-14 10:44:12.569806] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 
00:35:27.635 [2024-07-14 10:44:12.570042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.635 [2024-07-14 10:44:12.570061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:27.635 [2024-07-14 10:44:12.573560] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.635 [2024-07-14 10:44:12.573770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.635 [2024-07-14 10:44:12.573789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.635 [2024-07-14 10:44:12.577361] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.635 [2024-07-14 10:44:12.577594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.635 [2024-07-14 10:44:12.577613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:27.635 [2024-07-14 10:44:12.581048] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x173cdd0) with pdu=0x2000190fef90 00:35:27.635 [2024-07-14 10:44:12.581153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.635 [2024-07-14 10:44:12.581170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.635 00:35:27.635 Latency(us) 00:35:27.635 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:27.635 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:35:27.635 nvme0n1 : 2.00 6910.26 863.78 0.00 0.00 2311.61 1659.77 12822.26 00:35:27.635 =================================================================================================================== 00:35:27.635 Total : 6910.26 863.78 0.00 0.00 2311.61 1659.77 12822.26 00:35:27.635 0 00:35:27.635 10:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:27.635 10:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:27.635 10:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:27.635 | .driver_specific 00:35:27.635 | .nvme_error 00:35:27.635 | .status_code 00:35:27.635 | .command_transient_transport_error' 00:35:27.635 10:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:27.895 10:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 446 > 0 )) 00:35:27.895 10:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2618784 00:35:27.895 10:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2618784 ']' 00:35:27.895 10:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2618784 00:35:27.895 
10:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:35:27.895 10:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:27.895 10:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2618784 00:35:27.895 10:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:35:27.895 10:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:35:27.895 10:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2618784' 00:35:27.895 killing process with pid 2618784 00:35:27.895 10:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2618784 00:35:27.895 Received shutdown signal, test time was about 2.000000 seconds 00:35:27.895 00:35:27.895 Latency(us) 00:35:27.895 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:27.895 =================================================================================================================== 00:35:27.895 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:27.895 10:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2618784 00:35:28.154 10:44:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2617153 00:35:28.154 10:44:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2617153 ']' 00:35:28.154 10:44:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2617153 00:35:28.154 10:44:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:35:28.154 10:44:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:28.154 10:44:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2617153 00:35:28.154 10:44:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:28.154 10:44:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:28.154 10:44:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2617153' 00:35:28.154 killing process with pid 2617153 00:35:28.154 10:44:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2617153 00:35:28.154 10:44:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2617153 00:35:28.413 00:35:28.413 real 0m13.542s 00:35:28.413 user 0m25.466s 00:35:28.413 sys 0m4.643s 00:35:28.413 10:44:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:28.413 10:44:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:28.413 ************************************ 00:35:28.413 END TEST nvmf_digest_error 00:35:28.413 ************************************ 00:35:28.413 10:44:13 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:35:28.413 10:44:13 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:35:28.413 10:44:13 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:35:28.413 10:44:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 
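Note: the get_transient_errcount trace a few entries above shows how the test reads the per-bdev NVMe error counters over the bperf RPC socket. A minimal standalone sketch of that same query (assuming the bperf instance is still listening on /var/tmp/bperf.sock and the bdev is named nvme0n1, as in this run):

  # Fetch I/O statistics for nvme0n1 from the running bperf app and extract
  # how many completions were recorded as COMMAND TRANSIENT TRANSPORT ERROR.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

The test only asserts that this count is greater than zero (446 in this run), i.e. that the injected data-digest failures surfaced as transient transport errors instead of completing cleanly.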
00:35:28.413 10:44:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:35:28.413 10:44:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:28.413 10:44:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:35:28.413 10:44:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:28.413 10:44:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:28.413 rmmod nvme_tcp 00:35:28.413 rmmod nvme_fabrics 00:35:28.413 rmmod nvme_keyring 00:35:28.413 10:44:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:28.413 10:44:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:35:28.413 10:44:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:35:28.413 10:44:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 2617153 ']' 00:35:28.413 10:44:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 2617153 00:35:28.413 10:44:13 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 2617153 ']' 00:35:28.413 10:44:13 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 2617153 00:35:28.413 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2617153) - No such process 00:35:28.413 10:44:13 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 2617153 is not found' 00:35:28.413 Process with pid 2617153 is not found 00:35:28.413 10:44:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:28.413 10:44:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:28.413 10:44:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:28.413 10:44:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:28.413 10:44:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:28.413 10:44:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:28.413 10:44:13 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:28.413 10:44:13 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:30.950 10:44:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:30.950 00:35:30.950 real 0m36.033s 00:35:30.950 user 0m54.026s 00:35:30.950 sys 0m13.565s 00:35:30.950 10:44:15 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:30.950 10:44:15 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:30.950 ************************************ 00:35:30.950 END TEST nvmf_digest 00:35:30.950 ************************************ 00:35:30.950 10:44:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:35:30.950 10:44:15 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:35:30.950 10:44:15 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:35:30.950 10:44:15 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:35:30.950 10:44:15 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:35:30.950 10:44:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:35:30.950 10:44:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:30.950 10:44:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:30.950 ************************************ 00:35:30.950 START TEST nvmf_bdevperf 
00:35:30.950 ************************************ 00:35:30.950 10:44:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:35:30.950 * Looking for test storage... 00:35:30.950 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:30.950 10:44:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:30.950 10:44:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:35:30.951 10:44:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:30.951 10:44:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:30.951 10:44:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:30.951 10:44:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:30.951 10:44:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:30.951 10:44:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:30.951 10:44:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:30.951 10:44:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:30.951 10:44:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:30.951 10:44:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:30.951 10:44:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:35:30.951 10:44:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:35:30.951 10:44:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:30.951 10:44:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:30.951 10:44:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:30.951 10:44:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:30.951 10:44:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:30.951 10:44:15 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:30.951 10:44:15 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:30.951 10:44:15 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:30.951 10:44:15 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:30.951 10:44:15 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:30.951 10:44:15 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:30.951 10:44:15 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:35:30.951 10:44:15 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:30.951 10:44:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:35:30.951 10:44:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:30.951 10:44:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:30.951 10:44:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:30.951 10:44:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:30.951 10:44:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:30.951 10:44:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:30.951 10:44:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:30.951 10:44:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:30.951 10:44:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:30.951 10:44:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:30.951 10:44:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:35:30.951 10:44:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:30.951 10:44:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:30.951 10:44:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:30.951 10:44:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:30.951 10:44:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:30.951 10:44:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:30.951 10:44:15 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:30.951 10:44:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:30.951 10:44:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:30.951 10:44:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:30.951 10:44:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:35:30.951 10:44:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:36.227 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:36.227 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:35:36.227 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:36.227 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:36.227 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:36.227 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:36.227 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:36.227 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:35:36.227 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:36.227 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:35:36.227 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:35:36.227 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:35:36.227 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:35:36.227 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:35:36.227 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:35:36.227 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:36.227 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:36.227 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:36.227 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:36.227 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:36.227 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:36.227 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:36.227 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:36.227 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:36.227 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:36.228 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:36.228 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:36.228 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:36.228 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:36.228 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:36.228 10:44:21 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:36.228 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:36.228 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:36.228 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:35:36.228 Found 0000:86:00.0 (0x8086 - 0x159b) 00:35:36.228 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:36.228 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:36.228 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:36.228 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:36.228 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:36.228 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:36.228 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:35:36.228 Found 0000:86:00.1 (0x8086 - 0x159b) 00:35:36.228 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:36.228 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:36.228 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:36.228 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:36.228 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:36.228 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:36.228 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:36.228 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:36.228 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:36.228 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:36.228 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:36.228 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:36.228 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:36.228 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:36.228 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:36.228 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:35:36.228 Found net devices under 0000:86:00.0: cvl_0_0 00:35:36.228 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:36.228 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:36.228 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:36.228 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:36.228 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:36.228 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:36.228 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:36.228 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:36.228 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:35:36.228 Found net devices under 0000:86:00.1: cvl_0_1 00:35:36.228 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:36.228 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:36.228 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:35:36.228 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:36.228 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:36.228 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:36.228 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:36.228 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:36.228 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:36.228 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:36.228 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:36.228 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:36.228 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:36.228 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:36.228 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:36.228 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:36.228 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:36.228 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:36.228 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:36.228 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:36.228 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:36.228 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:36.228 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:36.487 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:36.487 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:36.487 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:36.487 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:36.487 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:35:36.487 00:35:36.487 --- 10.0.0.2 ping statistics --- 00:35:36.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:36.487 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:35:36.487 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:36.487 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:36.487 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:35:36.487 00:35:36.487 --- 10.0.0.1 ping statistics --- 00:35:36.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:36.487 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:35:36.487 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:36.487 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:35:36.487 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:36.487 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:36.487 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:36.487 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:36.487 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:36.487 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:36.487 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:36.487 10:44:21 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:35:36.487 10:44:21 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:35:36.487 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:36.487 10:44:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:36.487 10:44:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:36.487 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2622774 00:35:36.487 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2622774 00:35:36.488 10:44:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:35:36.488 10:44:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 2622774 ']' 00:35:36.488 10:44:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:36.488 10:44:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:36.488 10:44:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:36.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:36.488 10:44:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:36.488 10:44:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:36.488 [2024-07-14 10:44:21.363138] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:35:36.488 [2024-07-14 10:44:21.363186] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:36.488 EAL: No free 2048 kB hugepages reported on node 1 00:35:36.488 [2024-07-14 10:44:21.435606] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:36.747 [2024-07-14 10:44:21.477473] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:35:36.747 [2024-07-14 10:44:21.477514] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:36.747 [2024-07-14 10:44:21.477524] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:36.747 [2024-07-14 10:44:21.477530] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:36.747 [2024-07-14 10:44:21.477536] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:36.747 [2024-07-14 10:44:21.477666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:35:36.747 [2024-07-14 10:44:21.477771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:36.747 [2024-07-14 10:44:21.477773] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:35:37.316 10:44:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:37.316 10:44:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:35:37.316 10:44:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:37.316 10:44:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:37.316 10:44:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:37.316 10:44:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:37.316 10:44:22 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:37.316 10:44:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.316 10:44:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:37.316 [2024-07-14 10:44:22.217432] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:37.316 10:44:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.316 10:44:22 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:37.316 10:44:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.316 10:44:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:37.316 Malloc0 00:35:37.316 10:44:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.316 10:44:22 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:37.316 10:44:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.316 10:44:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:37.316 10:44:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.316 10:44:22 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:37.316 10:44:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.316 10:44:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:37.316 10:44:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.316 10:44:22 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:37.316 10:44:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 
00:35:37.316 10:44:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:37.316 [2024-07-14 10:44:22.284615] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:37.316 10:44:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.316 10:44:22 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:35:37.316 10:44:22 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:35:37.316 10:44:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:35:37.316 10:44:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:35:37.316 10:44:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:37.316 10:44:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:37.316 { 00:35:37.316 "params": { 00:35:37.316 "name": "Nvme$subsystem", 00:35:37.316 "trtype": "$TEST_TRANSPORT", 00:35:37.316 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:37.316 "adrfam": "ipv4", 00:35:37.316 "trsvcid": "$NVMF_PORT", 00:35:37.316 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:37.316 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:37.316 "hdgst": ${hdgst:-false}, 00:35:37.316 "ddgst": ${ddgst:-false} 00:35:37.316 }, 00:35:37.316 "method": "bdev_nvme_attach_controller" 00:35:37.316 } 00:35:37.316 EOF 00:35:37.316 )") 00:35:37.316 10:44:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:35:37.575 10:44:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:35:37.575 10:44:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:35:37.575 10:44:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:37.575 "params": { 00:35:37.575 "name": "Nvme1", 00:35:37.575 "trtype": "tcp", 00:35:37.575 "traddr": "10.0.0.2", 00:35:37.575 "adrfam": "ipv4", 00:35:37.575 "trsvcid": "4420", 00:35:37.575 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:37.575 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:37.575 "hdgst": false, 00:35:37.575 "ddgst": false 00:35:37.575 }, 00:35:37.575 "method": "bdev_nvme_attach_controller" 00:35:37.575 }' 00:35:37.575 [2024-07-14 10:44:22.335579] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:35:37.575 [2024-07-14 10:44:22.335622] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2623024 ] 00:35:37.575 EAL: No free 2048 kB hugepages reported on node 1 00:35:37.575 [2024-07-14 10:44:22.402437] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:37.575 [2024-07-14 10:44:22.442367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:37.834 Running I/O for 1 seconds... 
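The resolved JSON printed just above is the entire bdevperf configuration for this run: it attaches a single NVMe-oF controller named Nvme1 over TCP to the listener the target exposed on 10.0.0.2:4420 (subsystem nqn.2016-06.io.spdk:cnode1). For reference only, the same attach expressed with the kernel initiator's nvme-cli would look roughly like the sketch below; the test itself uses bdevperf's userspace initiator rather than nvme-cli, and the hostnqn is simply the value common.sh generated earlier in this log.

# illustrative sketch, not part of the test: kernel NVMe/TCP connect to the same subsystem
nvme connect -t tcp -a 10.0.0.2 -s 4420 \
     -n nqn.2016-06.io.spdk:cnode1 \
     --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
# run I/O against the resulting /dev/nvmeXnY, then tear down with:
nvme disconnect -n nqn.2016-06.io.spdk:cnode1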
00:35:38.768 00:35:38.768 Latency(us) 00:35:38.768 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:38.768 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:38.768 Verification LBA range: start 0x0 length 0x4000 00:35:38.768 Nvme1n1 : 1.00 10948.84 42.77 0.00 0.00 11632.55 961.67 14930.81 00:35:38.768 =================================================================================================================== 00:35:38.768 Total : 10948.84 42.77 0.00 0.00 11632.55 961.67 14930.81 00:35:39.027 10:44:23 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2623257 00:35:39.027 10:44:23 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:35:39.027 10:44:23 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:35:39.027 10:44:23 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:35:39.027 10:44:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:35:39.027 10:44:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:35:39.027 10:44:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:39.027 10:44:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:39.027 { 00:35:39.027 "params": { 00:35:39.027 "name": "Nvme$subsystem", 00:35:39.027 "trtype": "$TEST_TRANSPORT", 00:35:39.027 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:39.027 "adrfam": "ipv4", 00:35:39.027 "trsvcid": "$NVMF_PORT", 00:35:39.027 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:39.027 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:39.027 "hdgst": ${hdgst:-false}, 00:35:39.027 "ddgst": ${ddgst:-false} 00:35:39.027 }, 00:35:39.027 "method": "bdev_nvme_attach_controller" 00:35:39.027 } 00:35:39.027 EOF 00:35:39.027 )") 00:35:39.027 10:44:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:35:39.027 10:44:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:35:39.027 10:44:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:35:39.027 10:44:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:39.027 "params": { 00:35:39.027 "name": "Nvme1", 00:35:39.027 "trtype": "tcp", 00:35:39.027 "traddr": "10.0.0.2", 00:35:39.027 "adrfam": "ipv4", 00:35:39.027 "trsvcid": "4420", 00:35:39.027 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:39.027 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:39.027 "hdgst": false, 00:35:39.027 "ddgst": false 00:35:39.027 }, 00:35:39.027 "method": "bdev_nvme_attach_controller" 00:35:39.027 }' 00:35:39.027 [2024-07-14 10:44:23.861426] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:35:39.027 [2024-07-14 10:44:23.861467] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2623257 ] 00:35:39.027 EAL: No free 2048 kB hugepages reported on node 1 00:35:39.027 [2024-07-14 10:44:23.930169] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:39.027 [2024-07-14 10:44:23.967430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:39.291 Running I/O for 15 seconds... 
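What follows is the disruptive half of bdevperf.sh: while the 15-second run above is still in flight, the script hard-kills the nvmf target and pauses, so every WRITE still queued on the I/O qpair is completed back to bdevperf as ABORTED - SQ DELETION, which is the wall of nvme_qpair messages below. In sketch form, the two commands it issues at this point are (PID taken from the nvmfpid recorded above; anything beyond these two lines is not shown in this log excerpt):

# failover step performed by host/bdevperf.sh at this point in the log
nvmfpid=2622774        # the ip-netns'd nvmf_tgt started earlier
kill -9 "$nvmfpid"     # hard-kill the target mid-run
sleep 3                # give bdevperf time to observe the dead controller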
00:35:41.867 10:44:26 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2622774 00:35:41.867 10:44:26 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:35:41.867 [2024-07-14 10:44:26.830479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:97792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.867 [2024-07-14 10:44:26.830518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.867 [2024-07-14 10:44:26.830538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:97800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.867 [2024-07-14 10:44:26.830547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.867 [2024-07-14 10:44:26.830557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:97808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.867 [2024-07-14 10:44:26.830566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.867 [2024-07-14 10:44:26.830575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:97816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.867 [2024-07-14 10:44:26.830582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.867 [2024-07-14 10:44:26.830592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:97824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.867 [2024-07-14 10:44:26.830600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.867 [2024-07-14 10:44:26.830608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:97832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.867 [2024-07-14 10:44:26.830616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.867 [2024-07-14 10:44:26.830624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:97840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.867 [2024-07-14 10:44:26.830634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.867 [2024-07-14 10:44:26.830643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:97848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.867 [2024-07-14 10:44:26.830651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.867 [2024-07-14 10:44:26.830660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:97856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.867 [2024-07-14 10:44:26.830668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.867 [2024-07-14 10:44:26.830681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:97864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.867 [2024-07-14 10:44:26.830688] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.867 [2024-07-14 10:44:26.830698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:97872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.867 [2024-07-14 10:44:26.830705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.867 [2024-07-14 10:44:26.830718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:97880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.867 [2024-07-14 10:44:26.830725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.867 [2024-07-14 10:44:26.830733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:97888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.867 [2024-07-14 10:44:26.830740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.867 [2024-07-14 10:44:26.830751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:97896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.867 [2024-07-14 10:44:26.830761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.867 [2024-07-14 10:44:26.830771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:97904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.867 [2024-07-14 10:44:26.830780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.867 [2024-07-14 10:44:26.830790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:97912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.867 [2024-07-14 10:44:26.830798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.867 [2024-07-14 10:44:26.830812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:97920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.867 [2024-07-14 10:44:26.830820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.867 [2024-07-14 10:44:26.830835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:97928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.867 [2024-07-14 10:44:26.830844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.867 [2024-07-14 10:44:26.830856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:97936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.867 [2024-07-14 10:44:26.830865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.867 [2024-07-14 10:44:26.830875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:97944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.867 [2024-07-14 10:44:26.830883] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.867 [2024-07-14 10:44:26.830891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:97952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.867 [2024-07-14 10:44:26.830898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.867 [2024-07-14 10:44:26.830907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:97960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.867 [2024-07-14 10:44:26.830914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.867 [2024-07-14 10:44:26.830922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:97968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.867 [2024-07-14 10:44:26.830928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.867 [2024-07-14 10:44:26.830936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:97976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.867 [2024-07-14 10:44:26.830944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.867 [2024-07-14 10:44:26.830953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:97984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.867 [2024-07-14 10:44:26.830959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.867 [2024-07-14 10:44:26.830968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.867 [2024-07-14 10:44:26.830974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.867 [2024-07-14 10:44:26.830981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:98000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.867 [2024-07-14 10:44:26.830988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.867 [2024-07-14 10:44:26.830996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:98008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.867 [2024-07-14 10:44:26.831002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.867 [2024-07-14 10:44:26.831010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:98016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.867 [2024-07-14 10:44:26.831016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.867 [2024-07-14 10:44:26.831024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:98024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.867 [2024-07-14 10:44:26.831030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.867 [2024-07-14 10:44:26.831038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.867 [2024-07-14 10:44:26.831044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.867 [2024-07-14 10:44:26.831052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:98040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.867 [2024-07-14 10:44:26.831058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.867 [2024-07-14 10:44:26.831066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:98048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.867 [2024-07-14 10:44:26.831072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.867 [2024-07-14 10:44:26.831088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.868 [2024-07-14 10:44:26.831094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.868 [2024-07-14 10:44:26.831102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:98064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.868 [2024-07-14 10:44:26.831108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.868 [2024-07-14 10:44:26.831117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:98072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.868 [2024-07-14 10:44:26.831123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.868 [2024-07-14 10:44:26.831131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:98080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.868 [2024-07-14 10:44:26.831139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.868 [2024-07-14 10:44:26.831148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:98088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.868 [2024-07-14 10:44:26.831154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.868 [2024-07-14 10:44:26.831162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:98096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.868 [2024-07-14 10:44:26.831168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.868 [2024-07-14 10:44:26.831177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:98104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.868 [2024-07-14 10:44:26.831184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.868 [2024-07-14 
10:44:26.831192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:98112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.868 [2024-07-14 10:44:26.831199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.868 [2024-07-14 10:44:26.831207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:98120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.868 [2024-07-14 10:44:26.831213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.868 [2024-07-14 10:44:26.831221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:98128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.868 [2024-07-14 10:44:26.831232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.868 [2024-07-14 10:44:26.831240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:98136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.868 [2024-07-14 10:44:26.831246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.868 [2024-07-14 10:44:26.831254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:98144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.868 [2024-07-14 10:44:26.831261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.868 [2024-07-14 10:44:26.831268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:98152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.868 [2024-07-14 10:44:26.831275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.868 [2024-07-14 10:44:26.831283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:98160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.868 [2024-07-14 10:44:26.831290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.868 [2024-07-14 10:44:26.831298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:98168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.868 [2024-07-14 10:44:26.831304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.868 [2024-07-14 10:44:26.831312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:98176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.868 [2024-07-14 10:44:26.831318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.868 [2024-07-14 10:44:26.831329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:98184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.868 [2024-07-14 10:44:26.831336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.868 [2024-07-14 10:44:26.831344] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:98192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.868 [2024-07-14 10:44:26.831351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.868 [2024-07-14 10:44:26.831359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:98200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.868 [2024-07-14 10:44:26.831365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.868 [2024-07-14 10:44:26.831373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:98208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.868 [2024-07-14 10:44:26.831379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.868 [2024-07-14 10:44:26.831388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:98216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.868 [2024-07-14 10:44:26.831394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.868 [2024-07-14 10:44:26.831402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:98224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.868 [2024-07-14 10:44:26.831408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.868 [2024-07-14 10:44:26.831416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:98232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.868 [2024-07-14 10:44:26.831422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.868 [2024-07-14 10:44:26.831430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:98240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.868 [2024-07-14 10:44:26.831436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.868 [2024-07-14 10:44:26.831445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:98248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.868 [2024-07-14 10:44:26.831452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.868 [2024-07-14 10:44:26.831459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:98256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.868 [2024-07-14 10:44:26.831466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.868 [2024-07-14 10:44:26.831473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:98264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.868 [2024-07-14 10:44:26.831479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.868 [2024-07-14 10:44:26.831487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:82 nsid:1 lba:98272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.868 [2024-07-14 10:44:26.831493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.868 [2024-07-14 10:44:26.831502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:98280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.868 [2024-07-14 10:44:26.831510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.868 [2024-07-14 10:44:26.831517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.868 [2024-07-14 10:44:26.831524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.868 [2024-07-14 10:44:26.831531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:98296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.868 [2024-07-14 10:44:26.831538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.868 [2024-07-14 10:44:26.831546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:98304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.868 [2024-07-14 10:44:26.831552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.868 [2024-07-14 10:44:26.831561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:98312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.868 [2024-07-14 10:44:26.831568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.868 [2024-07-14 10:44:26.831576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:98320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.868 [2024-07-14 10:44:26.831582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.868 [2024-07-14 10:44:26.831590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:98328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.868 [2024-07-14 10:44:26.831596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.868 [2024-07-14 10:44:26.831603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:98336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.868 [2024-07-14 10:44:26.831611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.868 [2024-07-14 10:44:26.831619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:98344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.868 [2024-07-14 10:44:26.831625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.868 [2024-07-14 10:44:26.831634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:98352 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:35:41.868 [2024-07-14 10:44:26.831640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.868 [2024-07-14 10:44:26.831648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:98360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.868 [2024-07-14 10:44:26.831654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.868 [2024-07-14 10:44:26.831662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:98368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.868 [2024-07-14 10:44:26.831669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.868 [2024-07-14 10:44:26.831676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:98376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.868 [2024-07-14 10:44:26.831683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.868 [2024-07-14 10:44:26.831693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:98384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.868 [2024-07-14 10:44:26.831699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.868 [2024-07-14 10:44:26.831707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:98392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.868 [2024-07-14 10:44:26.831713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.869 [2024-07-14 10:44:26.831722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:98400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.869 [2024-07-14 10:44:26.831728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.869 [2024-07-14 10:44:26.831736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:98408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.869 [2024-07-14 10:44:26.831742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.869 [2024-07-14 10:44:26.831750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:98416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.869 [2024-07-14 10:44:26.831756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.869 [2024-07-14 10:44:26.831764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:98424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.869 [2024-07-14 10:44:26.831771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.869 [2024-07-14 10:44:26.831779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:98432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.869 [2024-07-14 
10:44:26.831786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.869 [2024-07-14 10:44:26.831794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:98440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.869 [2024-07-14 10:44:26.831801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.869 [2024-07-14 10:44:26.831808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:98448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.869 [2024-07-14 10:44:26.831815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.869 [2024-07-14 10:44:26.831824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:98456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.869 [2024-07-14 10:44:26.831830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.869 [2024-07-14 10:44:26.831839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:98464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.869 [2024-07-14 10:44:26.831845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.869 [2024-07-14 10:44:26.831853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:98472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.869 [2024-07-14 10:44:26.831859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.869 [2024-07-14 10:44:26.831867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:98480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.869 [2024-07-14 10:44:26.831873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.869 [2024-07-14 10:44:26.831883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:98488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.869 [2024-07-14 10:44:26.831890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.869 [2024-07-14 10:44:26.831897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:98496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.869 [2024-07-14 10:44:26.831904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.869 [2024-07-14 10:44:26.831911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:98504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.869 [2024-07-14 10:44:26.831918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.869 [2024-07-14 10:44:26.831925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:98512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.869 [2024-07-14 10:44:26.831933] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.869 [2024-07-14 10:44:26.831941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:98520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.869 [2024-07-14 10:44:26.831947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.869 [2024-07-14 10:44:26.831955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:98528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.869 [2024-07-14 10:44:26.831963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.869 [2024-07-14 10:44:26.831971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:98536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.869 [2024-07-14 10:44:26.831977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.869 [2024-07-14 10:44:26.831985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:98544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.869 [2024-07-14 10:44:26.831992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.869 [2024-07-14 10:44:26.831999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:98552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.869 [2024-07-14 10:44:26.832006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.869 [2024-07-14 10:44:26.832013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:98560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.869 [2024-07-14 10:44:26.832020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.869 [2024-07-14 10:44:26.832029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:98568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.869 [2024-07-14 10:44:26.832036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.869 [2024-07-14 10:44:26.832044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:98576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.869 [2024-07-14 10:44:26.832051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.869 [2024-07-14 10:44:26.832058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:98584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.869 [2024-07-14 10:44:26.832068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.869 [2024-07-14 10:44:26.832076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:98592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.869 [2024-07-14 10:44:26.832082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.869 [2024-07-14 10:44:26.832090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:98600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.869 [2024-07-14 10:44:26.832097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.869 [2024-07-14 10:44:26.832106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:98608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.869 [2024-07-14 10:44:26.832112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.869 [2024-07-14 10:44:26.832120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:98616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.869 [2024-07-14 10:44:26.832126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.869 [2024-07-14 10:44:26.832134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:98624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.869 [2024-07-14 10:44:26.832140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.869 [2024-07-14 10:44:26.832148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:98632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.869 [2024-07-14 10:44:26.832155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.869 [2024-07-14 10:44:26.832163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:98640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.869 [2024-07-14 10:44:26.832169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.869 [2024-07-14 10:44:26.832177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:98648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.869 [2024-07-14 10:44:26.832184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.869 [2024-07-14 10:44:26.832191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:98656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.869 [2024-07-14 10:44:26.832200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.869 [2024-07-14 10:44:26.832208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:98664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.869 [2024-07-14 10:44:26.832215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.869 [2024-07-14 10:44:26.832222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:98672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.869 [2024-07-14 10:44:26.832346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:35:41.869 [2024-07-14 10:44:26.832355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:97664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.869 [2024-07-14 10:44:26.832361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.869 [2024-07-14 10:44:26.832372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:97672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.869 [2024-07-14 10:44:26.832378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.869 [2024-07-14 10:44:26.832387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:97680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.869 [2024-07-14 10:44:26.832393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.869 [2024-07-14 10:44:26.832402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:97688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.869 [2024-07-14 10:44:26.832409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.869 [2024-07-14 10:44:26.832417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:97696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.869 [2024-07-14 10:44:26.832423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.869 [2024-07-14 10:44:26.832431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:97704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.869 [2024-07-14 10:44:26.832437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.869 [2024-07-14 10:44:26.832445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:97712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.869 [2024-07-14 10:44:26.832451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.869 [2024-07-14 10:44:26.832460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:97720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.869 [2024-07-14 10:44:26.832466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.870 [2024-07-14 10:44:26.832474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:97728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.870 [2024-07-14 10:44:26.832480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.870 [2024-07-14 10:44:26.832488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:97736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.870 [2024-07-14 10:44:26.832494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.870 [2024-07-14 
10:44:26.832502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:97744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.870 [2024-07-14 10:44:26.832509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.870 [2024-07-14 10:44:26.832517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:97752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.870 [2024-07-14 10:44:26.832524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.870 [2024-07-14 10:44:26.832531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:97760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.870 [2024-07-14 10:44:26.832537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.870 [2024-07-14 10:44:26.832545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:97768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.870 [2024-07-14 10:44:26.832556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.870 [2024-07-14 10:44:26.832565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:97776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.870 [2024-07-14 10:44:26.832572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.870 [2024-07-14 10:44:26.832579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:98680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:41.870 [2024-07-14 10:44:26.832585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.870 [2024-07-14 10:44:26.832592] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f84d0 is same with the state(5) to be set 00:35:41.870 [2024-07-14 10:44:26.832601] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:41.870 [2024-07-14 10:44:26.832607] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:41.870 [2024-07-14 10:44:26.832613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97784 len:8 PRP1 0x0 PRP2 0x0 00:35:41.870 [2024-07-14 10:44:26.832620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:41.870 [2024-07-14 10:44:26.832663] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20f84d0 was disconnected and freed. reset controller. 
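The completions dumped above all carry the status printed as "ABORTED - SQ DELETION (00/08)". Per the NVMe base specification that pair is Status Code Type 0 (generic command status) and Status Code 0x08, "Command Aborted due to SQ Deletion": the queued reads and writes were failed back because their submission queue was torn down while the controller was being reset, after which the qpair is freed and the reset begins. A minimal stand-alone decoder for that 16-bit completion status halfword (a sketch written against the spec's bit layout, not SPDK's own headers) looks like this:

#include <stdint.h>
#include <stdio.h>

/* Upper halfword of CQE dword 3 (NVMe base spec): bit 0 = phase tag,
 * bits 8:1 = status code (SC), bits 11:9 = status code type (SCT),
 * bit 14 = more (M), bit 15 = do not retry (DNR). */
static void decode_status(uint16_t sf)
{
    unsigned sc  = (sf >> 1) & 0xff;
    unsigned sct = (sf >> 9) & 0x7;
    unsigned dnr = (sf >> 15) & 0x1;
    printf("sct=%02x sc=%02x dnr=%u\n", sct, sc, dnr);
    if (sct == 0x0 && sc == 0x08)
        printf("generic status: command aborted due to SQ deletion\n");
}

int main(void)
{
    decode_status(0x0010); /* sct=0, sc=0x08 -> the "(00/08)" seen in the log */
    return 0;
}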
00:35:41.870 [2024-07-14 10:44:26.835486] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.870 [2024-07-14 10:44:26.835537] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:41.870 [2024-07-14 10:44:26.836145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.870 [2024-07-14 10:44:26.836160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:41.870 [2024-07-14 10:44:26.836168] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:41.870 [2024-07-14 10:44:26.836354] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:41.870 [2024-07-14 10:44:26.836533] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.870 [2024-07-14 10:44:26.836541] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.870 [2024-07-14 10:44:26.836549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.870 [2024-07-14 10:44:26.839381] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.131 [2024-07-14 10:44:26.848904] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.131 [2024-07-14 10:44:26.849343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.131 [2024-07-14 10:44:26.849388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.131 [2024-07-14 10:44:26.849409] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.131 [2024-07-14 10:44:26.849988] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.131 [2024-07-14 10:44:26.850462] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.131 [2024-07-14 10:44:26.850472] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.131 [2024-07-14 10:44:26.850480] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.131 [2024-07-14 10:44:26.853315] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
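Every failed reset cycle from here on follows the same shape: the controller is disconnected, the host tries to re-open the TCP queue pair to 10.0.0.2 port 4420, connect() fails with errno = 111, and the reconnect poller gives up with "controller reinitialization failed" before the next attempt. On Linux, errno 111 is ECONNREFUSED, which is what a TCP connect() returns when nothing is listening on the target port (here presumably because the test has stopped or is restarting the NVMe-oF target). A minimal sketch that reproduces the same errno outside SPDK (loopback address and port 4420 chosen only on the assumption that nothing is listening there):

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr); /* assumed: no listener on 4420 */
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        /* expected: errno = 111 (Connection refused), matching the log above */
    close(fd);
    return 0;
}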
00:35:42.131 [2024-07-14 10:44:26.861710] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.131 [2024-07-14 10:44:26.862154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.131 [2024-07-14 10:44:26.862197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.131 [2024-07-14 10:44:26.862219] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.131 [2024-07-14 10:44:26.862614] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.131 [2024-07-14 10:44:26.862788] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.131 [2024-07-14 10:44:26.862797] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.131 [2024-07-14 10:44:26.862804] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.132 [2024-07-14 10:44:26.865538] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.132 [2024-07-14 10:44:26.874514] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.132 [2024-07-14 10:44:26.874861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.132 [2024-07-14 10:44:26.874878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.132 [2024-07-14 10:44:26.874885] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.132 [2024-07-14 10:44:26.875048] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.132 [2024-07-14 10:44:26.875211] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.132 [2024-07-14 10:44:26.875221] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.132 [2024-07-14 10:44:26.875233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.132 [2024-07-14 10:44:26.877915] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.132 [2024-07-14 10:44:26.887394] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.132 [2024-07-14 10:44:26.887823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.132 [2024-07-14 10:44:26.887859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.132 [2024-07-14 10:44:26.887882] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.132 [2024-07-14 10:44:26.888434] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.132 [2024-07-14 10:44:26.888609] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.132 [2024-07-14 10:44:26.888619] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.132 [2024-07-14 10:44:26.888625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.132 [2024-07-14 10:44:26.891277] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.132 [2024-07-14 10:44:26.900246] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.132 [2024-07-14 10:44:26.900652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.132 [2024-07-14 10:44:26.900668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.132 [2024-07-14 10:44:26.900678] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.132 [2024-07-14 10:44:26.900840] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.132 [2024-07-14 10:44:26.901003] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.132 [2024-07-14 10:44:26.901012] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.132 [2024-07-14 10:44:26.901018] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.132 [2024-07-14 10:44:26.903708] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.132 [2024-07-14 10:44:26.913075] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.132 [2024-07-14 10:44:26.913523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.132 [2024-07-14 10:44:26.913566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.132 [2024-07-14 10:44:26.913589] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.132 [2024-07-14 10:44:26.914172] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.132 [2024-07-14 10:44:26.914351] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.132 [2024-07-14 10:44:26.914361] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.132 [2024-07-14 10:44:26.914367] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.132 [2024-07-14 10:44:26.917028] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.132 [2024-07-14 10:44:26.925988] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.132 [2024-07-14 10:44:26.926415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.132 [2024-07-14 10:44:26.926431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.132 [2024-07-14 10:44:26.926438] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.132 [2024-07-14 10:44:26.926601] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.132 [2024-07-14 10:44:26.926764] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.132 [2024-07-14 10:44:26.926773] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.132 [2024-07-14 10:44:26.926779] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.132 [2024-07-14 10:44:26.929520] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.132 [2024-07-14 10:44:26.938859] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.132 [2024-07-14 10:44:26.939288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.132 [2024-07-14 10:44:26.939304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.132 [2024-07-14 10:44:26.939311] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.132 [2024-07-14 10:44:26.939473] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.132 [2024-07-14 10:44:26.939636] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.132 [2024-07-14 10:44:26.939648] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.132 [2024-07-14 10:44:26.939654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.132 [2024-07-14 10:44:26.942348] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.132 [2024-07-14 10:44:26.951778] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.132 [2024-07-14 10:44:26.952200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.132 [2024-07-14 10:44:26.952216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.132 [2024-07-14 10:44:26.952223] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.132 [2024-07-14 10:44:26.952414] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.132 [2024-07-14 10:44:26.952587] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.132 [2024-07-14 10:44:26.952596] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.132 [2024-07-14 10:44:26.952603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.132 [2024-07-14 10:44:26.955257] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.132 [2024-07-14 10:44:26.964685] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.132 [2024-07-14 10:44:26.965029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.132 [2024-07-14 10:44:26.965072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.132 [2024-07-14 10:44:26.965094] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.132 [2024-07-14 10:44:26.965599] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.132 [2024-07-14 10:44:26.965772] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.132 [2024-07-14 10:44:26.965782] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.132 [2024-07-14 10:44:26.965788] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.132 [2024-07-14 10:44:26.968537] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.133 [2024-07-14 10:44:26.977636] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.133 [2024-07-14 10:44:26.978083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.133 [2024-07-14 10:44:26.978125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.133 [2024-07-14 10:44:26.978147] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.133 [2024-07-14 10:44:26.978679] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.133 [2024-07-14 10:44:26.978854] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.133 [2024-07-14 10:44:26.978863] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.133 [2024-07-14 10:44:26.978869] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.133 [2024-07-14 10:44:26.981507] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.133 [2024-07-14 10:44:26.990421] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.133 [2024-07-14 10:44:26.990843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.133 [2024-07-14 10:44:26.990860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.133 [2024-07-14 10:44:26.990866] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.133 [2024-07-14 10:44:26.991029] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.133 [2024-07-14 10:44:26.991192] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.133 [2024-07-14 10:44:26.991200] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.133 [2024-07-14 10:44:26.991207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.133 [2024-07-14 10:44:26.993893] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.133 [2024-07-14 10:44:27.003315] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.133 [2024-07-14 10:44:27.003743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.133 [2024-07-14 10:44:27.003785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.133 [2024-07-14 10:44:27.003807] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.133 [2024-07-14 10:44:27.004398] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.133 [2024-07-14 10:44:27.004883] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.133 [2024-07-14 10:44:27.004892] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.133 [2024-07-14 10:44:27.004898] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.133 [2024-07-14 10:44:27.007534] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.133 [2024-07-14 10:44:27.016103] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.133 [2024-07-14 10:44:27.016535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.133 [2024-07-14 10:44:27.016552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.133 [2024-07-14 10:44:27.016558] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.133 [2024-07-14 10:44:27.016720] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.133 [2024-07-14 10:44:27.016883] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.133 [2024-07-14 10:44:27.016892] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.133 [2024-07-14 10:44:27.016898] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.133 [2024-07-14 10:44:27.019589] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.133 [2024-07-14 10:44:27.028909] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.133 [2024-07-14 10:44:27.029333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.133 [2024-07-14 10:44:27.029350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.133 [2024-07-14 10:44:27.029356] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.133 [2024-07-14 10:44:27.029522] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.133 [2024-07-14 10:44:27.029686] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.133 [2024-07-14 10:44:27.029695] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.133 [2024-07-14 10:44:27.029701] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.133 [2024-07-14 10:44:27.032388] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.133 [2024-07-14 10:44:27.041746] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.133 [2024-07-14 10:44:27.042094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.133 [2024-07-14 10:44:27.042112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.133 [2024-07-14 10:44:27.042118] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.133 [2024-07-14 10:44:27.042302] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.133 [2024-07-14 10:44:27.042476] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.133 [2024-07-14 10:44:27.042486] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.133 [2024-07-14 10:44:27.042492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.133 [2024-07-14 10:44:27.045151] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.133 [2024-07-14 10:44:27.054578] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.133 [2024-07-14 10:44:27.054908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.133 [2024-07-14 10:44:27.054924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.133 [2024-07-14 10:44:27.054932] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.133 [2024-07-14 10:44:27.055094] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.133 [2024-07-14 10:44:27.055262] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.133 [2024-07-14 10:44:27.055272] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.133 [2024-07-14 10:44:27.055294] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.133 [2024-07-14 10:44:27.058028] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.133 [2024-07-14 10:44:27.067471] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.133 [2024-07-14 10:44:27.067816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.133 [2024-07-14 10:44:27.067832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.133 [2024-07-14 10:44:27.067838] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.133 [2024-07-14 10:44:27.068000] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.133 [2024-07-14 10:44:27.068163] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.133 [2024-07-14 10:44:27.068172] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.133 [2024-07-14 10:44:27.068181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.133 [2024-07-14 10:44:27.070869] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.133 [2024-07-14 10:44:27.080292] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.133 [2024-07-14 10:44:27.080702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.133 [2024-07-14 10:44:27.080718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.133 [2024-07-14 10:44:27.080725] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.133 [2024-07-14 10:44:27.080887] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.133 [2024-07-14 10:44:27.081050] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.134 [2024-07-14 10:44:27.081059] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.134 [2024-07-14 10:44:27.081065] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.134 [2024-07-14 10:44:27.083818] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.134 [2024-07-14 10:44:27.093397] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.134 [2024-07-14 10:44:27.093874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.134 [2024-07-14 10:44:27.093916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.134 [2024-07-14 10:44:27.093938] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.134 [2024-07-14 10:44:27.094530] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.134 [2024-07-14 10:44:27.094737] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.134 [2024-07-14 10:44:27.094747] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.134 [2024-07-14 10:44:27.094753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.134 [2024-07-14 10:44:27.097616] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.134 [2024-07-14 10:44:27.106335] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.134 [2024-07-14 10:44:27.106777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.134 [2024-07-14 10:44:27.106794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.134 [2024-07-14 10:44:27.106802] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.134 [2024-07-14 10:44:27.106973] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.134 [2024-07-14 10:44:27.107148] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.134 [2024-07-14 10:44:27.107157] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.134 [2024-07-14 10:44:27.107164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.395 [2024-07-14 10:44:27.109905] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.395 [2024-07-14 10:44:27.119266] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.395 [2024-07-14 10:44:27.119699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.395 [2024-07-14 10:44:27.119741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.395 [2024-07-14 10:44:27.119763] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.395 [2024-07-14 10:44:27.120192] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.395 [2024-07-14 10:44:27.120363] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.395 [2024-07-14 10:44:27.120373] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.395 [2024-07-14 10:44:27.120379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.395 [2024-07-14 10:44:27.123139] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.395 [2024-07-14 10:44:27.132197] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.395 [2024-07-14 10:44:27.132645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.395 [2024-07-14 10:44:27.132688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.395 [2024-07-14 10:44:27.132711] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.395 [2024-07-14 10:44:27.133304] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.395 [2024-07-14 10:44:27.133887] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.395 [2024-07-14 10:44:27.133925] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.395 [2024-07-14 10:44:27.133932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.395 [2024-07-14 10:44:27.136649] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.395 [2024-07-14 10:44:27.145005] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.395 [2024-07-14 10:44:27.145431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.395 [2024-07-14 10:44:27.145447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.395 [2024-07-14 10:44:27.145454] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.395 [2024-07-14 10:44:27.145616] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.395 [2024-07-14 10:44:27.145778] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.395 [2024-07-14 10:44:27.145787] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.395 [2024-07-14 10:44:27.145793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.395 [2024-07-14 10:44:27.148486] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.395 [2024-07-14 10:44:27.157961] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.395 [2024-07-14 10:44:27.158322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.395 [2024-07-14 10:44:27.158338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.395 [2024-07-14 10:44:27.158345] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.395 [2024-07-14 10:44:27.158529] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.395 [2024-07-14 10:44:27.158694] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.395 [2024-07-14 10:44:27.158703] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.395 [2024-07-14 10:44:27.158708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.395 [2024-07-14 10:44:27.161397] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.395 [2024-07-14 10:44:27.170880] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.395 [2024-07-14 10:44:27.171297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.395 [2024-07-14 10:44:27.171313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.395 [2024-07-14 10:44:27.171320] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.395 [2024-07-14 10:44:27.171483] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.395 [2024-07-14 10:44:27.171646] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.395 [2024-07-14 10:44:27.171655] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.395 [2024-07-14 10:44:27.171662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.395 [2024-07-14 10:44:27.174349] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.395 [2024-07-14 10:44:27.183772] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.395 [2024-07-14 10:44:27.184197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.395 [2024-07-14 10:44:27.184260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.396 [2024-07-14 10:44:27.184282] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.396 [2024-07-14 10:44:27.184861] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.396 [2024-07-14 10:44:27.185068] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.396 [2024-07-14 10:44:27.185077] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.396 [2024-07-14 10:44:27.185083] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.396 [2024-07-14 10:44:27.187773] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.396 [2024-07-14 10:44:27.196699] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.396 [2024-07-14 10:44:27.197151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.396 [2024-07-14 10:44:27.197193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.396 [2024-07-14 10:44:27.197215] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.396 [2024-07-14 10:44:27.197807] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.396 [2024-07-14 10:44:27.198306] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.396 [2024-07-14 10:44:27.198316] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.396 [2024-07-14 10:44:27.198325] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.396 [2024-07-14 10:44:27.200937] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.396 [2024-07-14 10:44:27.209600] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.396 [2024-07-14 10:44:27.210024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.396 [2024-07-14 10:44:27.210079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.396 [2024-07-14 10:44:27.210101] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.396 [2024-07-14 10:44:27.210694] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.396 [2024-07-14 10:44:27.211283] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.396 [2024-07-14 10:44:27.211301] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.396 [2024-07-14 10:44:27.211316] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.396 [2024-07-14 10:44:27.217547] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.396 [2024-07-14 10:44:27.224546] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.396 [2024-07-14 10:44:27.224997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.396 [2024-07-14 10:44:27.225040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.396 [2024-07-14 10:44:27.225062] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.396 [2024-07-14 10:44:27.225655] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.396 [2024-07-14 10:44:27.226243] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.396 [2024-07-14 10:44:27.226257] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.396 [2024-07-14 10:44:27.226266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.396 [2024-07-14 10:44:27.230318] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.396 [2024-07-14 10:44:27.237424] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.396 [2024-07-14 10:44:27.237862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.396 [2024-07-14 10:44:27.237904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.396 [2024-07-14 10:44:27.237926] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.396 [2024-07-14 10:44:27.238329] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.396 [2024-07-14 10:44:27.238503] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.396 [2024-07-14 10:44:27.238512] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.396 [2024-07-14 10:44:27.238519] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.396 [2024-07-14 10:44:27.241262] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.396 [2024-07-14 10:44:27.250313] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.396 [2024-07-14 10:44:27.250732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.396 [2024-07-14 10:44:27.250751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.396 [2024-07-14 10:44:27.250758] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.396 [2024-07-14 10:44:27.250920] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.396 [2024-07-14 10:44:27.251083] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.396 [2024-07-14 10:44:27.251092] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.396 [2024-07-14 10:44:27.251098] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.396 [2024-07-14 10:44:27.253981] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.396 [2024-07-14 10:44:27.263163] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.396 [2024-07-14 10:44:27.263589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.396 [2024-07-14 10:44:27.263624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.396 [2024-07-14 10:44:27.263647] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.396 [2024-07-14 10:44:27.264220] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.396 [2024-07-14 10:44:27.264415] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.396 [2024-07-14 10:44:27.264424] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.396 [2024-07-14 10:44:27.264431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.396 [2024-07-14 10:44:27.267093] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.396 [2024-07-14 10:44:27.276064] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.396 [2024-07-14 10:44:27.276485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.396 [2024-07-14 10:44:27.276528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.396 [2024-07-14 10:44:27.276551] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.396 [2024-07-14 10:44:27.277099] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.396 [2024-07-14 10:44:27.277279] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.396 [2024-07-14 10:44:27.277289] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.396 [2024-07-14 10:44:27.277296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.396 [2024-07-14 10:44:27.279961] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.396 [2024-07-14 10:44:27.288917] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.397 [2024-07-14 10:44:27.289265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.397 [2024-07-14 10:44:27.289282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.397 [2024-07-14 10:44:27.289290] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.397 [2024-07-14 10:44:27.289452] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.397 [2024-07-14 10:44:27.289618] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.397 [2024-07-14 10:44:27.289628] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.397 [2024-07-14 10:44:27.289634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.397 [2024-07-14 10:44:27.292222] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.397 [2024-07-14 10:44:27.301752] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.397 [2024-07-14 10:44:27.302098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.397 [2024-07-14 10:44:27.302114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.397 [2024-07-14 10:44:27.302121] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.397 [2024-07-14 10:44:27.302308] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.397 [2024-07-14 10:44:27.302481] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.397 [2024-07-14 10:44:27.302490] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.397 [2024-07-14 10:44:27.302496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.397 [2024-07-14 10:44:27.305148] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.397 [2024-07-14 10:44:27.314610] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.397 [2024-07-14 10:44:27.314942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.397 [2024-07-14 10:44:27.314959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.397 [2024-07-14 10:44:27.314965] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.397 [2024-07-14 10:44:27.315128] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.397 [2024-07-14 10:44:27.315314] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.397 [2024-07-14 10:44:27.315324] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.397 [2024-07-14 10:44:27.315331] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.397 [2024-07-14 10:44:27.317992] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.397 [2024-07-14 10:44:27.327519] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.397 [2024-07-14 10:44:27.327859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.397 [2024-07-14 10:44:27.327877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.397 [2024-07-14 10:44:27.327884] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.397 [2024-07-14 10:44:27.328056] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.397 [2024-07-14 10:44:27.328234] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.397 [2024-07-14 10:44:27.328244] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.397 [2024-07-14 10:44:27.328250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.397 [2024-07-14 10:44:27.330870] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.397 [2024-07-14 10:44:27.340568] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.397 [2024-07-14 10:44:27.340866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.397 [2024-07-14 10:44:27.340882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.397 [2024-07-14 10:44:27.340889] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.397 [2024-07-14 10:44:27.341066] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.397 [2024-07-14 10:44:27.341251] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.397 [2024-07-14 10:44:27.341262] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.397 [2024-07-14 10:44:27.341269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.397 [2024-07-14 10:44:27.344067] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.397 [2024-07-14 10:44:27.353758] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.397 [2024-07-14 10:44:27.354174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.397 [2024-07-14 10:44:27.354191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.397 [2024-07-14 10:44:27.354198] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.397 [2024-07-14 10:44:27.354381] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.397 [2024-07-14 10:44:27.354559] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.397 [2024-07-14 10:44:27.354568] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.397 [2024-07-14 10:44:27.354575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.397 [2024-07-14 10:44:27.357423] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.397 [2024-07-14 10:44:27.366954] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.397 [2024-07-14 10:44:27.367314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.397 [2024-07-14 10:44:27.367331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.397 [2024-07-14 10:44:27.367339] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.397 [2024-07-14 10:44:27.367515] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.397 [2024-07-14 10:44:27.367693] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.397 [2024-07-14 10:44:27.367702] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.397 [2024-07-14 10:44:27.367709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.397 [2024-07-14 10:44:27.370547] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.658 [2024-07-14 10:44:27.380069] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.658 [2024-07-14 10:44:27.380405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.658 [2024-07-14 10:44:27.380423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.658 [2024-07-14 10:44:27.380434] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.658 [2024-07-14 10:44:27.380611] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.658 [2024-07-14 10:44:27.380790] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.658 [2024-07-14 10:44:27.380800] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.658 [2024-07-14 10:44:27.380806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.658 [2024-07-14 10:44:27.383637] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.658 [2024-07-14 10:44:27.393185] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.658 [2024-07-14 10:44:27.393610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.658 [2024-07-14 10:44:27.393627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.658 [2024-07-14 10:44:27.393634] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.658 [2024-07-14 10:44:27.393812] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.658 [2024-07-14 10:44:27.393990] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.658 [2024-07-14 10:44:27.393999] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.658 [2024-07-14 10:44:27.394006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.658 [2024-07-14 10:44:27.396837] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.658 [2024-07-14 10:44:27.406360] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.658 [2024-07-14 10:44:27.406798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.658 [2024-07-14 10:44:27.406815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.658 [2024-07-14 10:44:27.406823] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.658 [2024-07-14 10:44:27.407000] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.658 [2024-07-14 10:44:27.407177] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.658 [2024-07-14 10:44:27.407187] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.658 [2024-07-14 10:44:27.407193] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.658 [2024-07-14 10:44:27.410029] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.658 [2024-07-14 10:44:27.419556] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.658 [2024-07-14 10:44:27.419998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.658 [2024-07-14 10:44:27.420015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.658 [2024-07-14 10:44:27.420022] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.658 [2024-07-14 10:44:27.420200] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.658 [2024-07-14 10:44:27.420384] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.658 [2024-07-14 10:44:27.420397] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.658 [2024-07-14 10:44:27.420404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.658 [2024-07-14 10:44:27.423236] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.658 [2024-07-14 10:44:27.432743] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.658 [2024-07-14 10:44:27.433181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.658 [2024-07-14 10:44:27.433197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.658 [2024-07-14 10:44:27.433205] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.658 [2024-07-14 10:44:27.433387] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.658 [2024-07-14 10:44:27.433566] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.658 [2024-07-14 10:44:27.433575] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.658 [2024-07-14 10:44:27.433582] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.658 [2024-07-14 10:44:27.436411] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.658 [2024-07-14 10:44:27.445934] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.658 [2024-07-14 10:44:27.446369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.658 [2024-07-14 10:44:27.446387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.658 [2024-07-14 10:44:27.446394] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.658 [2024-07-14 10:44:27.446572] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.658 [2024-07-14 10:44:27.446750] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.658 [2024-07-14 10:44:27.446759] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.658 [2024-07-14 10:44:27.446766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.658 [2024-07-14 10:44:27.449598] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.658 [2024-07-14 10:44:27.459108] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.658 [2024-07-14 10:44:27.459548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.658 [2024-07-14 10:44:27.459565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.658 [2024-07-14 10:44:27.459572] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.658 [2024-07-14 10:44:27.459749] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.659 [2024-07-14 10:44:27.459928] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.659 [2024-07-14 10:44:27.459938] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.659 [2024-07-14 10:44:27.459944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.659 [2024-07-14 10:44:27.462775] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.659 [2024-07-14 10:44:27.472299] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.659 [2024-07-14 10:44:27.472735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.659 [2024-07-14 10:44:27.472752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.659 [2024-07-14 10:44:27.472760] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.659 [2024-07-14 10:44:27.472937] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.659 [2024-07-14 10:44:27.473116] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.659 [2024-07-14 10:44:27.473125] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.659 [2024-07-14 10:44:27.473131] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.659 [2024-07-14 10:44:27.475963] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.659 [2024-07-14 10:44:27.485478] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.659 [2024-07-14 10:44:27.485915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.659 [2024-07-14 10:44:27.485932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.659 [2024-07-14 10:44:27.485939] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.659 [2024-07-14 10:44:27.486116] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.659 [2024-07-14 10:44:27.486300] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.659 [2024-07-14 10:44:27.486310] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.659 [2024-07-14 10:44:27.486316] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.659 [2024-07-14 10:44:27.489187] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.659 [2024-07-14 10:44:27.498557] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.659 [2024-07-14 10:44:27.498971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.659 [2024-07-14 10:44:27.498989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.659 [2024-07-14 10:44:27.498996] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.659 [2024-07-14 10:44:27.499173] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.659 [2024-07-14 10:44:27.499358] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.659 [2024-07-14 10:44:27.499368] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.659 [2024-07-14 10:44:27.499374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.659 [2024-07-14 10:44:27.502203] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.659 [2024-07-14 10:44:27.511656] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.659 [2024-07-14 10:44:27.512090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.659 [2024-07-14 10:44:27.512107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.659 [2024-07-14 10:44:27.512114] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.659 [2024-07-14 10:44:27.512299] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.659 [2024-07-14 10:44:27.512478] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.659 [2024-07-14 10:44:27.512487] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.659 [2024-07-14 10:44:27.512493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.659 [2024-07-14 10:44:27.515325] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.659 [2024-07-14 10:44:27.524844] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.659 [2024-07-14 10:44:27.525277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.659 [2024-07-14 10:44:27.525295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.659 [2024-07-14 10:44:27.525302] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.659 [2024-07-14 10:44:27.525478] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.659 [2024-07-14 10:44:27.525656] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.659 [2024-07-14 10:44:27.525666] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.659 [2024-07-14 10:44:27.525674] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.659 [2024-07-14 10:44:27.528503] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.659 [2024-07-14 10:44:27.538017] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.659 [2024-07-14 10:44:27.538461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.659 [2024-07-14 10:44:27.538478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.659 [2024-07-14 10:44:27.538486] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.659 [2024-07-14 10:44:27.538663] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.659 [2024-07-14 10:44:27.538841] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.659 [2024-07-14 10:44:27.538851] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.659 [2024-07-14 10:44:27.538857] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.659 [2024-07-14 10:44:27.541690] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.659 [2024-07-14 10:44:27.551209] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.659 [2024-07-14 10:44:27.551582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.659 [2024-07-14 10:44:27.551599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.659 [2024-07-14 10:44:27.551606] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.659 [2024-07-14 10:44:27.551783] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.659 [2024-07-14 10:44:27.551962] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.659 [2024-07-14 10:44:27.551972] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.659 [2024-07-14 10:44:27.551982] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.659 [2024-07-14 10:44:27.554815] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.659 [2024-07-14 10:44:27.564336] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.659 [2024-07-14 10:44:27.564779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.659 [2024-07-14 10:44:27.564796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.659 [2024-07-14 10:44:27.564803] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.659 [2024-07-14 10:44:27.564980] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.659 [2024-07-14 10:44:27.565158] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.659 [2024-07-14 10:44:27.565168] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.659 [2024-07-14 10:44:27.565174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.659 [2024-07-14 10:44:27.568002] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.659 [2024-07-14 10:44:27.577520] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.659 [2024-07-14 10:44:27.577960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.659 [2024-07-14 10:44:27.577977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.659 [2024-07-14 10:44:27.577984] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.659 [2024-07-14 10:44:27.578160] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.659 [2024-07-14 10:44:27.578344] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.659 [2024-07-14 10:44:27.578354] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.659 [2024-07-14 10:44:27.578361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.659 [2024-07-14 10:44:27.581187] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.659 [2024-07-14 10:44:27.590665] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.659 [2024-07-14 10:44:27.591110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.659 [2024-07-14 10:44:27.591128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.659 [2024-07-14 10:44:27.591135] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.659 [2024-07-14 10:44:27.591324] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.659 [2024-07-14 10:44:27.591517] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.659 [2024-07-14 10:44:27.591527] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.659 [2024-07-14 10:44:27.591533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.659 [2024-07-14 10:44:27.594359] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.659 [2024-07-14 10:44:27.603704] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.659 [2024-07-14 10:44:27.604052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.659 [2024-07-14 10:44:27.604068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.659 [2024-07-14 10:44:27.604076] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.659 [2024-07-14 10:44:27.604259] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.660 [2024-07-14 10:44:27.604437] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.660 [2024-07-14 10:44:27.604446] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.660 [2024-07-14 10:44:27.604452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.660 [2024-07-14 10:44:27.607276] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.660 [2024-07-14 10:44:27.616793] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.660 [2024-07-14 10:44:27.617201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.660 [2024-07-14 10:44:27.617218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.660 [2024-07-14 10:44:27.617231] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.660 [2024-07-14 10:44:27.617409] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.660 [2024-07-14 10:44:27.617586] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.660 [2024-07-14 10:44:27.617596] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.660 [2024-07-14 10:44:27.617603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.660 [2024-07-14 10:44:27.620427] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.660 [2024-07-14 10:44:27.630032] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.660 [2024-07-14 10:44:27.630393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.660 [2024-07-14 10:44:27.630410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.660 [2024-07-14 10:44:27.630429] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.660 [2024-07-14 10:44:27.630607] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.660 [2024-07-14 10:44:27.630786] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.660 [2024-07-14 10:44:27.630795] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.660 [2024-07-14 10:44:27.630803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.660 [2024-07-14 10:44:27.633712] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.921 [2024-07-14 10:44:27.643212] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.921 [2024-07-14 10:44:27.643654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.921 [2024-07-14 10:44:27.643670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.921 [2024-07-14 10:44:27.643678] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.921 [2024-07-14 10:44:27.643855] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.921 [2024-07-14 10:44:27.644038] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.921 [2024-07-14 10:44:27.644048] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.921 [2024-07-14 10:44:27.644055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.921 [2024-07-14 10:44:27.646880] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.921 [2024-07-14 10:44:27.656392] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.921 [2024-07-14 10:44:27.656804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.921 [2024-07-14 10:44:27.656822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.921 [2024-07-14 10:44:27.656829] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.921 [2024-07-14 10:44:27.657005] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.921 [2024-07-14 10:44:27.657183] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.921 [2024-07-14 10:44:27.657193] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.921 [2024-07-14 10:44:27.657199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.921 [2024-07-14 10:44:27.660039] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.921 [2024-07-14 10:44:27.669567] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.921 [2024-07-14 10:44:27.669979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.921 [2024-07-14 10:44:27.669996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.921 [2024-07-14 10:44:27.670003] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.921 [2024-07-14 10:44:27.670180] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.921 [2024-07-14 10:44:27.670365] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.921 [2024-07-14 10:44:27.670375] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.921 [2024-07-14 10:44:27.670381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.921 [2024-07-14 10:44:27.673211] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.921 [2024-07-14 10:44:27.682743] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.921 [2024-07-14 10:44:27.683176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.921 [2024-07-14 10:44:27.683194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.921 [2024-07-14 10:44:27.683201] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.921 [2024-07-14 10:44:27.683386] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.921 [2024-07-14 10:44:27.683564] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.921 [2024-07-14 10:44:27.683574] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.921 [2024-07-14 10:44:27.683580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.921 [2024-07-14 10:44:27.686415] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.921 [2024-07-14 10:44:27.695936] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.921 [2024-07-14 10:44:27.696369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.921 [2024-07-14 10:44:27.696387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.921 [2024-07-14 10:44:27.696394] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.921 [2024-07-14 10:44:27.696571] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.921 [2024-07-14 10:44:27.696749] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.921 [2024-07-14 10:44:27.696759] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.921 [2024-07-14 10:44:27.696765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.921 [2024-07-14 10:44:27.699595] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.921 [2024-07-14 10:44:27.709133] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.921 [2024-07-14 10:44:27.709589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.921 [2024-07-14 10:44:27.709606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.921 [2024-07-14 10:44:27.709613] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.921 [2024-07-14 10:44:27.709791] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.921 [2024-07-14 10:44:27.709968] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.921 [2024-07-14 10:44:27.709978] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.921 [2024-07-14 10:44:27.709984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.921 [2024-07-14 10:44:27.712848] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.921 [2024-07-14 10:44:27.722322] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.921 [2024-07-14 10:44:27.722757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.921 [2024-07-14 10:44:27.722775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.921 [2024-07-14 10:44:27.722782] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.921 [2024-07-14 10:44:27.722959] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.921 [2024-07-14 10:44:27.723138] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.921 [2024-07-14 10:44:27.723147] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.921 [2024-07-14 10:44:27.723154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.921 [2024-07-14 10:44:27.725989] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.921 [2024-07-14 10:44:27.735501] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.922 [2024-07-14 10:44:27.735921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.922 [2024-07-14 10:44:27.735939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.922 [2024-07-14 10:44:27.735949] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.922 [2024-07-14 10:44:27.736125] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.922 [2024-07-14 10:44:27.736309] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.922 [2024-07-14 10:44:27.736320] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.922 [2024-07-14 10:44:27.736327] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.922 [2024-07-14 10:44:27.739152] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.922 [2024-07-14 10:44:27.748674] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.922 [2024-07-14 10:44:27.749104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.922 [2024-07-14 10:44:27.749122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.922 [2024-07-14 10:44:27.749129] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.922 [2024-07-14 10:44:27.749313] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.922 [2024-07-14 10:44:27.749492] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.922 [2024-07-14 10:44:27.749502] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.922 [2024-07-14 10:44:27.749509] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.922 [2024-07-14 10:44:27.752333] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.922 [2024-07-14 10:44:27.761729] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.922 [2024-07-14 10:44:27.762171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.922 [2024-07-14 10:44:27.762212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.922 [2024-07-14 10:44:27.762251] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.922 [2024-07-14 10:44:27.762650] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.922 [2024-07-14 10:44:27.762825] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.922 [2024-07-14 10:44:27.762835] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.922 [2024-07-14 10:44:27.762841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.922 [2024-07-14 10:44:27.765589] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.922 [2024-07-14 10:44:27.774665] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.922 [2024-07-14 10:44:27.775089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.922 [2024-07-14 10:44:27.775106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.922 [2024-07-14 10:44:27.775113] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.922 [2024-07-14 10:44:27.775283] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.922 [2024-07-14 10:44:27.775450] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.922 [2024-07-14 10:44:27.775459] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.922 [2024-07-14 10:44:27.775465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.922 [2024-07-14 10:44:27.778118] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.922 [2024-07-14 10:44:27.787736] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.922 [2024-07-14 10:44:27.788160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.922 [2024-07-14 10:44:27.788177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.922 [2024-07-14 10:44:27.788184] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.922 [2024-07-14 10:44:27.788353] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.922 [2024-07-14 10:44:27.788516] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.922 [2024-07-14 10:44:27.788525] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.922 [2024-07-14 10:44:27.788531] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.922 [2024-07-14 10:44:27.791188] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.922 [2024-07-14 10:44:27.800781] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.922 [2024-07-14 10:44:27.801198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.922 [2024-07-14 10:44:27.801252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.922 [2024-07-14 10:44:27.801275] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.922 [2024-07-14 10:44:27.801772] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.922 [2024-07-14 10:44:27.801937] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.922 [2024-07-14 10:44:27.801946] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.922 [2024-07-14 10:44:27.801952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.922 [2024-07-14 10:44:27.804546] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.922 [2024-07-14 10:44:27.813631] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.922 [2024-07-14 10:44:27.814085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.922 [2024-07-14 10:44:27.814128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.922 [2024-07-14 10:44:27.814149] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.922 [2024-07-14 10:44:27.814644] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.922 [2024-07-14 10:44:27.814818] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.922 [2024-07-14 10:44:27.814827] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.922 [2024-07-14 10:44:27.814834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.922 [2024-07-14 10:44:27.817474] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.922 [2024-07-14 10:44:27.826582] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.922 [2024-07-14 10:44:27.826943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.922 [2024-07-14 10:44:27.826984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.922 [2024-07-14 10:44:27.827006] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.922 [2024-07-14 10:44:27.827471] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.922 [2024-07-14 10:44:27.827649] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.922 [2024-07-14 10:44:27.827659] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.922 [2024-07-14 10:44:27.827667] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.922 [2024-07-14 10:44:27.830285] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.922 [2024-07-14 10:44:27.839604] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.922 [2024-07-14 10:44:27.840076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.922 [2024-07-14 10:44:27.840118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.922 [2024-07-14 10:44:27.840139] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.922 [2024-07-14 10:44:27.840690] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.922 [2024-07-14 10:44:27.840855] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.922 [2024-07-14 10:44:27.840865] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.922 [2024-07-14 10:44:27.840871] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.922 [2024-07-14 10:44:27.843740] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.922 [2024-07-14 10:44:27.852668] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.922 [2024-07-14 10:44:27.853109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.922 [2024-07-14 10:44:27.853126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.922 [2024-07-14 10:44:27.853134] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.922 [2024-07-14 10:44:27.853312] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.922 [2024-07-14 10:44:27.853496] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.922 [2024-07-14 10:44:27.853505] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.922 [2024-07-14 10:44:27.853512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.922 [2024-07-14 10:44:27.856236] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.922 [2024-07-14 10:44:27.865630] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.922 [2024-07-14 10:44:27.866055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.922 [2024-07-14 10:44:27.866092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.922 [2024-07-14 10:44:27.866124] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.922 [2024-07-14 10:44:27.866651] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.922 [2024-07-14 10:44:27.866816] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.922 [2024-07-14 10:44:27.866825] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.922 [2024-07-14 10:44:27.866831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.922 [2024-07-14 10:44:27.869525] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.923 [2024-07-14 10:44:27.878579] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.923 [2024-07-14 10:44:27.879040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.923 [2024-07-14 10:44:27.879082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.923 [2024-07-14 10:44:27.879104] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.923 [2024-07-14 10:44:27.879622] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.923 [2024-07-14 10:44:27.879797] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.923 [2024-07-14 10:44:27.879806] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.923 [2024-07-14 10:44:27.879813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.923 [2024-07-14 10:44:27.882451] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.923 [2024-07-14 10:44:27.891424] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.923 [2024-07-14 10:44:27.891827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.923 [2024-07-14 10:44:27.891842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:42.923 [2024-07-14 10:44:27.891851] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:42.923 [2024-07-14 10:44:27.892014] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:42.923 [2024-07-14 10:44:27.892177] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.923 [2024-07-14 10:44:27.892187] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.923 [2024-07-14 10:44:27.892192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.923 [2024-07-14 10:44:27.894955] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.184 [2024-07-14 10:44:27.904391] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.184 [2024-07-14 10:44:27.904790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.184 [2024-07-14 10:44:27.904807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.184 [2024-07-14 10:44:27.904815] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.184 [2024-07-14 10:44:27.904988] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.184 [2024-07-14 10:44:27.905160] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.184 [2024-07-14 10:44:27.905173] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.184 [2024-07-14 10:44:27.905180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.184 [2024-07-14 10:44:27.907914] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.184 [2024-07-14 10:44:27.917185] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.184 [2024-07-14 10:44:27.917609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.184 [2024-07-14 10:44:27.917652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.184 [2024-07-14 10:44:27.917675] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.184 [2024-07-14 10:44:27.918265] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.184 [2024-07-14 10:44:27.918846] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.184 [2024-07-14 10:44:27.918871] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.184 [2024-07-14 10:44:27.918892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.184 [2024-07-14 10:44:27.921565] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.184 [2024-07-14 10:44:27.930044] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.184 [2024-07-14 10:44:27.930457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.184 [2024-07-14 10:44:27.930501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.184 [2024-07-14 10:44:27.930522] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.184 [2024-07-14 10:44:27.931032] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.184 [2024-07-14 10:44:27.931196] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.184 [2024-07-14 10:44:27.931204] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.184 [2024-07-14 10:44:27.931209] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.184 [2024-07-14 10:44:27.933896] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.184 [2024-07-14 10:44:27.942940] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.184 [2024-07-14 10:44:27.943363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.184 [2024-07-14 10:44:27.943379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.184 [2024-07-14 10:44:27.943387] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.184 [2024-07-14 10:44:27.943572] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.184 [2024-07-14 10:44:27.943745] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.184 [2024-07-14 10:44:27.943755] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.184 [2024-07-14 10:44:27.943763] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.184 [2024-07-14 10:44:27.946467] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.184 [2024-07-14 10:44:27.955796] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.184 [2024-07-14 10:44:27.956152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.184 [2024-07-14 10:44:27.956168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.184 [2024-07-14 10:44:27.956174] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.184 [2024-07-14 10:44:27.956343] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.184 [2024-07-14 10:44:27.956506] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.184 [2024-07-14 10:44:27.956515] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.184 [2024-07-14 10:44:27.956521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.184 [2024-07-14 10:44:27.959217] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.184 [2024-07-14 10:44:27.968720] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.184 [2024-07-14 10:44:27.969141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.184 [2024-07-14 10:44:27.969183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.184 [2024-07-14 10:44:27.969203] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.184 [2024-07-14 10:44:27.969739] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.184 [2024-07-14 10:44:27.969914] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.184 [2024-07-14 10:44:27.969924] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.184 [2024-07-14 10:44:27.969930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.184 [2024-07-14 10:44:27.972562] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.184 [2024-07-14 10:44:27.981525] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.184 [2024-07-14 10:44:27.981957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.184 [2024-07-14 10:44:27.982000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.184 [2024-07-14 10:44:27.982023] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.184 [2024-07-14 10:44:27.982614] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.184 [2024-07-14 10:44:27.983099] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.184 [2024-07-14 10:44:27.983108] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.184 [2024-07-14 10:44:27.983115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.184 [2024-07-14 10:44:27.985736] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.184 [2024-07-14 10:44:27.994333] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.184 [2024-07-14 10:44:27.994750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.184 [2024-07-14 10:44:27.994800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.184 [2024-07-14 10:44:27.994822] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.184 [2024-07-14 10:44:27.995430] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.184 [2024-07-14 10:44:27.996012] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.184 [2024-07-14 10:44:27.996037] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.184 [2024-07-14 10:44:27.996058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.184 [2024-07-14 10:44:27.998686] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.184 [2024-07-14 10:44:28.007120] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.184 [2024-07-14 10:44:28.007525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.185 [2024-07-14 10:44:28.007541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.185 [2024-07-14 10:44:28.007548] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.185 [2024-07-14 10:44:28.007710] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.185 [2024-07-14 10:44:28.007873] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.185 [2024-07-14 10:44:28.007882] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.185 [2024-07-14 10:44:28.007888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.185 [2024-07-14 10:44:28.010580] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.185 [2024-07-14 10:44:28.019943] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.185 [2024-07-14 10:44:28.020363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.185 [2024-07-14 10:44:28.020406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.185 [2024-07-14 10:44:28.020428] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.185 [2024-07-14 10:44:28.020983] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.185 [2024-07-14 10:44:28.021148] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.185 [2024-07-14 10:44:28.021157] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.185 [2024-07-14 10:44:28.021163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.185 [2024-07-14 10:44:28.023872] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.185 [2024-07-14 10:44:28.032832] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.185 [2024-07-14 10:44:28.033269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.185 [2024-07-14 10:44:28.033311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.185 [2024-07-14 10:44:28.033333] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.185 [2024-07-14 10:44:28.033911] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.185 [2024-07-14 10:44:28.034152] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.185 [2024-07-14 10:44:28.034161] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.185 [2024-07-14 10:44:28.034170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.185 [2024-07-14 10:44:28.036855] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.185 [2024-07-14 10:44:28.045721] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.185 [2024-07-14 10:44:28.046142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.185 [2024-07-14 10:44:28.046158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.185 [2024-07-14 10:44:28.046166] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.185 [2024-07-14 10:44:28.046354] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.185 [2024-07-14 10:44:28.046529] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.185 [2024-07-14 10:44:28.046539] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.185 [2024-07-14 10:44:28.046545] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.185 [2024-07-14 10:44:28.049201] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.185 [2024-07-14 10:44:28.058628] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.185 [2024-07-14 10:44:28.059032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.185 [2024-07-14 10:44:28.059074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.185 [2024-07-14 10:44:28.059096] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.185 [2024-07-14 10:44:28.059689] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.185 [2024-07-14 10:44:28.060281] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.185 [2024-07-14 10:44:28.060306] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.185 [2024-07-14 10:44:28.060313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.185 [2024-07-14 10:44:28.062919] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.185 [2024-07-14 10:44:28.071679] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.185 [2024-07-14 10:44:28.072118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.185 [2024-07-14 10:44:28.072134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.185 [2024-07-14 10:44:28.072141] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.185 [2024-07-14 10:44:28.072338] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.185 [2024-07-14 10:44:28.072518] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.185 [2024-07-14 10:44:28.072528] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.185 [2024-07-14 10:44:28.072534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.185 [2024-07-14 10:44:28.075236] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.185 [2024-07-14 10:44:28.084496] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.185 [2024-07-14 10:44:28.084918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.185 [2024-07-14 10:44:28.084979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.185 [2024-07-14 10:44:28.085001] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.185 [2024-07-14 10:44:28.085515] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.185 [2024-07-14 10:44:28.085690] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.185 [2024-07-14 10:44:28.085699] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.185 [2024-07-14 10:44:28.085706] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.185 [2024-07-14 10:44:28.088354] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.185 [2024-07-14 10:44:28.097323] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.185 [2024-07-14 10:44:28.097749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.185 [2024-07-14 10:44:28.097766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.185 [2024-07-14 10:44:28.097774] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.185 [2024-07-14 10:44:28.097937] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.185 [2024-07-14 10:44:28.098100] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.185 [2024-07-14 10:44:28.098110] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.185 [2024-07-14 10:44:28.098117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.185 [2024-07-14 10:44:28.100953] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.185 [2024-07-14 10:44:28.110333] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.185 [2024-07-14 10:44:28.110762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.185 [2024-07-14 10:44:28.110778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.185 [2024-07-14 10:44:28.110785] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.185 [2024-07-14 10:44:28.110957] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.185 [2024-07-14 10:44:28.111131] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.185 [2024-07-14 10:44:28.111140] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.185 [2024-07-14 10:44:28.111147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.185 [2024-07-14 10:44:28.113863] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.185 [2024-07-14 10:44:28.123343] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.185 [2024-07-14 10:44:28.123700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.185 [2024-07-14 10:44:28.123742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.185 [2024-07-14 10:44:28.123764] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.185 [2024-07-14 10:44:28.124290] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.185 [2024-07-14 10:44:28.124468] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.185 [2024-07-14 10:44:28.124476] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.185 [2024-07-14 10:44:28.124482] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.185 [2024-07-14 10:44:28.127134] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.185 [2024-07-14 10:44:28.136251] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.185 [2024-07-14 10:44:28.136671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.185 [2024-07-14 10:44:28.136688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.185 [2024-07-14 10:44:28.136695] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.185 [2024-07-14 10:44:28.136857] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.185 [2024-07-14 10:44:28.137020] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.185 [2024-07-14 10:44:28.137029] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.185 [2024-07-14 10:44:28.137035] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.185 [2024-07-14 10:44:28.139666] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.186 [2024-07-14 10:44:28.149078] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.186 [2024-07-14 10:44:28.149450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.186 [2024-07-14 10:44:28.149492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.186 [2024-07-14 10:44:28.149514] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.186 [2024-07-14 10:44:28.150087] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.186 [2024-07-14 10:44:28.150485] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.186 [2024-07-14 10:44:28.150504] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.186 [2024-07-14 10:44:28.150517] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.186 [2024-07-14 10:44:28.156748] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.447 [2024-07-14 10:44:28.163954] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.447 [2024-07-14 10:44:28.164453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.447 [2024-07-14 10:44:28.164474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.447 [2024-07-14 10:44:28.164484] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.447 [2024-07-14 10:44:28.164738] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.447 [2024-07-14 10:44:28.164993] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.447 [2024-07-14 10:44:28.165006] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.447 [2024-07-14 10:44:28.165016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.447 [2024-07-14 10:44:28.169089] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.447 [2024-07-14 10:44:28.177015] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.447 [2024-07-14 10:44:28.177452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.447 [2024-07-14 10:44:28.177496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.447 [2024-07-14 10:44:28.177518] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.447 [2024-07-14 10:44:28.178097] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.447 [2024-07-14 10:44:28.178693] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.447 [2024-07-14 10:44:28.178719] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.447 [2024-07-14 10:44:28.178741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.447 [2024-07-14 10:44:28.181483] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.447 [2024-07-14 10:44:28.189836] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.447 [2024-07-14 10:44:28.190268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.447 [2024-07-14 10:44:28.190310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.447 [2024-07-14 10:44:28.190332] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.447 [2024-07-14 10:44:28.190784] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.448 [2024-07-14 10:44:28.190948] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.448 [2024-07-14 10:44:28.190957] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.448 [2024-07-14 10:44:28.190964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.448 [2024-07-14 10:44:28.193653] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.448 [2024-07-14 10:44:28.202705] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.448 [2024-07-14 10:44:28.203055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.448 [2024-07-14 10:44:28.203071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.448 [2024-07-14 10:44:28.203077] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.448 [2024-07-14 10:44:28.203246] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.448 [2024-07-14 10:44:28.203433] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.448 [2024-07-14 10:44:28.203443] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.448 [2024-07-14 10:44:28.203450] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.448 [2024-07-14 10:44:28.206109] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.448 [2024-07-14 10:44:28.215528] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.448 [2024-07-14 10:44:28.215951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.448 [2024-07-14 10:44:28.215968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.448 [2024-07-14 10:44:28.215977] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.448 [2024-07-14 10:44:28.216140] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.448 [2024-07-14 10:44:28.216328] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.448 [2024-07-14 10:44:28.216339] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.448 [2024-07-14 10:44:28.216351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.448 [2024-07-14 10:44:28.219017] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.448 [2024-07-14 10:44:28.228381] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.448 [2024-07-14 10:44:28.228805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.448 [2024-07-14 10:44:28.228821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.448 [2024-07-14 10:44:28.228828] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.448 [2024-07-14 10:44:28.228990] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.448 [2024-07-14 10:44:28.229152] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.448 [2024-07-14 10:44:28.229161] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.448 [2024-07-14 10:44:28.229168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.448 [2024-07-14 10:44:28.231855] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.448 [2024-07-14 10:44:28.241279] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.448 [2024-07-14 10:44:28.241622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.448 [2024-07-14 10:44:28.241665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.448 [2024-07-14 10:44:28.241687] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.448 [2024-07-14 10:44:28.242278] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.448 [2024-07-14 10:44:28.242786] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.448 [2024-07-14 10:44:28.242795] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.448 [2024-07-14 10:44:28.242801] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.448 [2024-07-14 10:44:28.245434] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.448 [2024-07-14 10:44:28.254190] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.448 [2024-07-14 10:44:28.254634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.448 [2024-07-14 10:44:28.254679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.448 [2024-07-14 10:44:28.254702] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.448 [2024-07-14 10:44:28.255296] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.448 [2024-07-14 10:44:28.255741] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.448 [2024-07-14 10:44:28.255753] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.448 [2024-07-14 10:44:28.255759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.448 [2024-07-14 10:44:28.258347] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.448 [2024-07-14 10:44:28.267121] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.448 [2024-07-14 10:44:28.267549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.448 [2024-07-14 10:44:28.267606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.448 [2024-07-14 10:44:28.267628] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.448 [2024-07-14 10:44:28.268207] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.448 [2024-07-14 10:44:28.268751] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.448 [2024-07-14 10:44:28.268761] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.448 [2024-07-14 10:44:28.268767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.448 [2024-07-14 10:44:28.271412] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.448 [2024-07-14 10:44:28.279923] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.448 [2024-07-14 10:44:28.280354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.448 [2024-07-14 10:44:28.280397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.448 [2024-07-14 10:44:28.280419] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.448 [2024-07-14 10:44:28.280914] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.448 [2024-07-14 10:44:28.281079] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.448 [2024-07-14 10:44:28.281088] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.448 [2024-07-14 10:44:28.281095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.448 [2024-07-14 10:44:28.283783] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.448 [2024-07-14 10:44:28.292802] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.448 [2024-07-14 10:44:28.293220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.448 [2024-07-14 10:44:28.293241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.448 [2024-07-14 10:44:28.293248] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.448 [2024-07-14 10:44:28.293410] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.448 [2024-07-14 10:44:28.293573] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.448 [2024-07-14 10:44:28.293582] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.448 [2024-07-14 10:44:28.293587] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.448 [2024-07-14 10:44:28.296272] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.448 [2024-07-14 10:44:28.305697] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.448 [2024-07-14 10:44:28.306111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.448 [2024-07-14 10:44:28.306128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.448 [2024-07-14 10:44:28.306134] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.448 [2024-07-14 10:44:28.306323] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.448 [2024-07-14 10:44:28.306497] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.448 [2024-07-14 10:44:28.306507] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.448 [2024-07-14 10:44:28.306513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.448 [2024-07-14 10:44:28.309168] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.448 [2024-07-14 10:44:28.318500] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.448 [2024-07-14 10:44:28.318921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.448 [2024-07-14 10:44:28.318964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.448 [2024-07-14 10:44:28.318986] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.448 [2024-07-14 10:44:28.319577] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.448 [2024-07-14 10:44:28.320109] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.448 [2024-07-14 10:44:28.320118] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.448 [2024-07-14 10:44:28.320124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.448 [2024-07-14 10:44:28.322854] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.448 [2024-07-14 10:44:28.331403] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.448 [2024-07-14 10:44:28.331820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.448 [2024-07-14 10:44:28.331837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.449 [2024-07-14 10:44:28.331844] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.449 [2024-07-14 10:44:28.332006] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.449 [2024-07-14 10:44:28.332169] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.449 [2024-07-14 10:44:28.332178] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.449 [2024-07-14 10:44:28.332184] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.449 [2024-07-14 10:44:28.334871] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.449 [2024-07-14 10:44:28.344193] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.449 [2024-07-14 10:44:28.344619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.449 [2024-07-14 10:44:28.344635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.449 [2024-07-14 10:44:28.344645] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.449 [2024-07-14 10:44:28.344808] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.449 [2024-07-14 10:44:28.344970] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.449 [2024-07-14 10:44:28.344979] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.449 [2024-07-14 10:44:28.344985] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.449 [2024-07-14 10:44:28.347580] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.449 [2024-07-14 10:44:28.357340] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.449 [2024-07-14 10:44:28.357791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.449 [2024-07-14 10:44:28.357832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.449 [2024-07-14 10:44:28.357854] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.449 [2024-07-14 10:44:28.358445] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.449 [2024-07-14 10:44:28.358986] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.449 [2024-07-14 10:44:28.358996] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.449 [2024-07-14 10:44:28.359003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.449 [2024-07-14 10:44:28.361712] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.449 [2024-07-14 10:44:28.370283] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.449 [2024-07-14 10:44:28.370619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.449 [2024-07-14 10:44:28.370662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.449 [2024-07-14 10:44:28.370683] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.449 [2024-07-14 10:44:28.371148] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.449 [2024-07-14 10:44:28.371317] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.449 [2024-07-14 10:44:28.371326] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.449 [2024-07-14 10:44:28.371333] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.449 [2024-07-14 10:44:28.373923] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.449 [2024-07-14 10:44:28.383075] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.449 [2024-07-14 10:44:28.383521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.449 [2024-07-14 10:44:28.383563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.449 [2024-07-14 10:44:28.383585] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.449 [2024-07-14 10:44:28.384132] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.449 [2024-07-14 10:44:28.384321] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.449 [2024-07-14 10:44:28.384333] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.449 [2024-07-14 10:44:28.384339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.449 [2024-07-14 10:44:28.387071] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.449 [2024-07-14 10:44:28.395878] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.449 [2024-07-14 10:44:28.396294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.449 [2024-07-14 10:44:28.396310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.449 [2024-07-14 10:44:28.396317] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.449 [2024-07-14 10:44:28.396479] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.449 [2024-07-14 10:44:28.396643] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.449 [2024-07-14 10:44:28.396652] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.449 [2024-07-14 10:44:28.396658] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.449 [2024-07-14 10:44:28.399345] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.449 [2024-07-14 10:44:28.408749] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.449 [2024-07-14 10:44:28.409184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.449 [2024-07-14 10:44:28.409240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.449 [2024-07-14 10:44:28.409264] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.449 [2024-07-14 10:44:28.409843] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.449 [2024-07-14 10:44:28.410049] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.449 [2024-07-14 10:44:28.410058] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.449 [2024-07-14 10:44:28.410064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.449 [2024-07-14 10:44:28.412692] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.449 [2024-07-14 10:44:28.421695] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.449 [2024-07-14 10:44:28.422096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.449 [2024-07-14 10:44:28.422112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.449 [2024-07-14 10:44:28.422120] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.449 [2024-07-14 10:44:28.422317] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.449 [2024-07-14 10:44:28.422495] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.449 [2024-07-14 10:44:28.422505] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.449 [2024-07-14 10:44:28.422511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.449 [2024-07-14 10:44:28.425293] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.710 [2024-07-14 10:44:28.434734] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.710 [2024-07-14 10:44:28.435155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.710 [2024-07-14 10:44:28.435194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.710 [2024-07-14 10:44:28.435217] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.710 [2024-07-14 10:44:28.435812] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.710 [2024-07-14 10:44:28.436054] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.710 [2024-07-14 10:44:28.436064] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.710 [2024-07-14 10:44:28.436070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.710 [2024-07-14 10:44:28.438692] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.710 [2024-07-14 10:44:28.447602] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.710 [2024-07-14 10:44:28.447965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.710 [2024-07-14 10:44:28.447981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.710 [2024-07-14 10:44:28.447988] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.710 [2024-07-14 10:44:28.448150] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.710 [2024-07-14 10:44:28.448340] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.710 [2024-07-14 10:44:28.448351] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.710 [2024-07-14 10:44:28.448357] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.710 [2024-07-14 10:44:28.451017] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.710 [2024-07-14 10:44:28.460447] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.710 [2024-07-14 10:44:28.460882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.710 [2024-07-14 10:44:28.460924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.710 [2024-07-14 10:44:28.460945] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.710 [2024-07-14 10:44:28.461535] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.710 [2024-07-14 10:44:28.462117] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.710 [2024-07-14 10:44:28.462141] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.710 [2024-07-14 10:44:28.462161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.710 [2024-07-14 10:44:28.464802] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.710 [2024-07-14 10:44:28.473320] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.710 [2024-07-14 10:44:28.473683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.710 [2024-07-14 10:44:28.473699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.710 [2024-07-14 10:44:28.473705] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.710 [2024-07-14 10:44:28.473871] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.710 [2024-07-14 10:44:28.474034] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.710 [2024-07-14 10:44:28.474043] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.710 [2024-07-14 10:44:28.474049] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.710 [2024-07-14 10:44:28.476739] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.710 [2024-07-14 10:44:28.486125] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.710 [2024-07-14 10:44:28.486558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.710 [2024-07-14 10:44:28.486601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.710 [2024-07-14 10:44:28.486622] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.710 [2024-07-14 10:44:28.487200] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.710 [2024-07-14 10:44:28.487424] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.710 [2024-07-14 10:44:28.487434] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.710 [2024-07-14 10:44:28.487440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.710 [2024-07-14 10:44:28.490099] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.710 [2024-07-14 10:44:28.498907] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.710 [2024-07-14 10:44:28.499253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.710 [2024-07-14 10:44:28.499269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.710 [2024-07-14 10:44:28.499276] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.710 [2024-07-14 10:44:28.499438] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.710 [2024-07-14 10:44:28.499601] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.710 [2024-07-14 10:44:28.499610] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.710 [2024-07-14 10:44:28.499616] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.710 [2024-07-14 10:44:28.502302] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.710 [2024-07-14 10:44:28.511719] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.710 [2024-07-14 10:44:28.512149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.710 [2024-07-14 10:44:28.512191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.710 [2024-07-14 10:44:28.512213] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.710 [2024-07-14 10:44:28.512805] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.710 [2024-07-14 10:44:28.513396] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.710 [2024-07-14 10:44:28.513406] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.710 [2024-07-14 10:44:28.513416] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.710 [2024-07-14 10:44:28.516157] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.710 [2024-07-14 10:44:28.524750] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.710 [2024-07-14 10:44:28.525183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.710 [2024-07-14 10:44:28.525200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.710 [2024-07-14 10:44:28.525207] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.710 [2024-07-14 10:44:28.525387] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.710 [2024-07-14 10:44:28.525560] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.710 [2024-07-14 10:44:28.525570] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.710 [2024-07-14 10:44:28.525576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.710 [2024-07-14 10:44:28.528294] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.710 [2024-07-14 10:44:28.537595] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.710 [2024-07-14 10:44:28.538015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.710 [2024-07-14 10:44:28.538031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.710 [2024-07-14 10:44:28.538038] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.710 [2024-07-14 10:44:28.538200] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.710 [2024-07-14 10:44:28.538369] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.710 [2024-07-14 10:44:28.538379] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.710 [2024-07-14 10:44:28.538385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.710 [2024-07-14 10:44:28.541014] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.710 [2024-07-14 10:44:28.550452] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.710 [2024-07-14 10:44:28.550888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.710 [2024-07-14 10:44:28.550930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.710 [2024-07-14 10:44:28.550953] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.710 [2024-07-14 10:44:28.551454] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.710 [2024-07-14 10:44:28.551628] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.710 [2024-07-14 10:44:28.551636] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.710 [2024-07-14 10:44:28.551643] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.710 [2024-07-14 10:44:28.554293] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.710 [2024-07-14 10:44:28.563496] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.710 [2024-07-14 10:44:28.563926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.710 [2024-07-14 10:44:28.563945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.710 [2024-07-14 10:44:28.563952] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.710 [2024-07-14 10:44:28.564124] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.710 [2024-07-14 10:44:28.564305] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.710 [2024-07-14 10:44:28.564315] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.710 [2024-07-14 10:44:28.564322] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.710 [2024-07-14 10:44:28.567062] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.710 [2024-07-14 10:44:28.576412] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.710 [2024-07-14 10:44:28.576819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.710 [2024-07-14 10:44:28.576835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.710 [2024-07-14 10:44:28.576842] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.710 [2024-07-14 10:44:28.577014] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.710 [2024-07-14 10:44:28.577188] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.710 [2024-07-14 10:44:28.577198] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.710 [2024-07-14 10:44:28.577204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.710 [2024-07-14 10:44:28.579883] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.710 [2024-07-14 10:44:28.589313] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.710 [2024-07-14 10:44:28.589708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.710 [2024-07-14 10:44:28.589725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.710 [2024-07-14 10:44:28.589732] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.710 [2024-07-14 10:44:28.589894] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.710 [2024-07-14 10:44:28.590057] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.710 [2024-07-14 10:44:28.590066] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.710 [2024-07-14 10:44:28.590072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.711 [2024-07-14 10:44:28.592744] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.711 [2024-07-14 10:44:28.602216] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.711 [2024-07-14 10:44:28.602627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.711 [2024-07-14 10:44:28.602670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.711 [2024-07-14 10:44:28.602691] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.711 [2024-07-14 10:44:28.603248] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.711 [2024-07-14 10:44:28.603441] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.711 [2024-07-14 10:44:28.603451] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.711 [2024-07-14 10:44:28.603458] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.711 [2024-07-14 10:44:28.606307] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.711 [2024-07-14 10:44:28.615160] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.711 [2024-07-14 10:44:28.615533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.711 [2024-07-14 10:44:28.615549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.711 [2024-07-14 10:44:28.615556] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.711 [2024-07-14 10:44:28.615727] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.711 [2024-07-14 10:44:28.615899] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.711 [2024-07-14 10:44:28.615909] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.711 [2024-07-14 10:44:28.615915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.711 [2024-07-14 10:44:28.618554] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.711 [2024-07-14 10:44:28.627980] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.711 [2024-07-14 10:44:28.628392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.711 [2024-07-14 10:44:28.628408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.711 [2024-07-14 10:44:28.628416] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.711 [2024-07-14 10:44:28.628579] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.711 [2024-07-14 10:44:28.628742] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.711 [2024-07-14 10:44:28.628751] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.711 [2024-07-14 10:44:28.628757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.711 [2024-07-14 10:44:28.631354] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.711 [2024-07-14 10:44:28.640809] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.711 [2024-07-14 10:44:28.641206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.711 [2024-07-14 10:44:28.641222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.711 [2024-07-14 10:44:28.641235] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.711 [2024-07-14 10:44:28.641422] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.711 [2024-07-14 10:44:28.641595] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.711 [2024-07-14 10:44:28.641605] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.711 [2024-07-14 10:44:28.641612] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.711 [2024-07-14 10:44:28.644319] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.711 [2024-07-14 10:44:28.653735] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.711 [2024-07-14 10:44:28.654076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.711 [2024-07-14 10:44:28.654092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.711 [2024-07-14 10:44:28.654099] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.711 [2024-07-14 10:44:28.654267] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.711 [2024-07-14 10:44:28.654431] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.711 [2024-07-14 10:44:28.654441] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.711 [2024-07-14 10:44:28.654447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.711 [2024-07-14 10:44:28.657070] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.711 [2024-07-14 10:44:28.666561] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.711 [2024-07-14 10:44:28.666972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.711 [2024-07-14 10:44:28.666989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.711 [2024-07-14 10:44:28.666995] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.711 [2024-07-14 10:44:28.667157] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.711 [2024-07-14 10:44:28.667346] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.711 [2024-07-14 10:44:28.667357] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.711 [2024-07-14 10:44:28.667363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.711 [2024-07-14 10:44:28.670031] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.711 [2024-07-14 10:44:28.679470] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.711 [2024-07-14 10:44:28.679925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.711 [2024-07-14 10:44:28.679967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.711 [2024-07-14 10:44:28.679988] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.711 [2024-07-14 10:44:28.680504] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.711 [2024-07-14 10:44:28.680679] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.711 [2024-07-14 10:44:28.680689] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.711 [2024-07-14 10:44:28.680696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.711 [2024-07-14 10:44:28.683345] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.971 [2024-07-14 10:44:28.692541] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.971 [2024-07-14 10:44:28.692911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.971 [2024-07-14 10:44:28.692928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.971 [2024-07-14 10:44:28.692939] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.971 [2024-07-14 10:44:28.693111] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.971 [2024-07-14 10:44:28.693287] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.971 [2024-07-14 10:44:28.693297] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.971 [2024-07-14 10:44:28.693303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.971 [2024-07-14 10:44:28.696003] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.971 [2024-07-14 10:44:28.705467] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.971 [2024-07-14 10:44:28.705836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.971 [2024-07-14 10:44:28.705879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.971 [2024-07-14 10:44:28.705902] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.971 [2024-07-14 10:44:28.706493] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.971 [2024-07-14 10:44:28.706712] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.971 [2024-07-14 10:44:28.706722] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.971 [2024-07-14 10:44:28.706728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.971 [2024-07-14 10:44:28.709399] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.971 [2024-07-14 10:44:28.718503] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.971 [2024-07-14 10:44:28.718888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.971 [2024-07-14 10:44:28.718906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.971 [2024-07-14 10:44:28.718912] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.971 [2024-07-14 10:44:28.719084] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.971 [2024-07-14 10:44:28.719263] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.971 [2024-07-14 10:44:28.719273] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.971 [2024-07-14 10:44:28.719279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.971 [2024-07-14 10:44:28.721896] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.971 [2024-07-14 10:44:28.731387] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.971 [2024-07-14 10:44:28.731697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.971 [2024-07-14 10:44:28.731714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.971 [2024-07-14 10:44:28.731721] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.971 [2024-07-14 10:44:28.731893] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.971 [2024-07-14 10:44:28.732065] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.971 [2024-07-14 10:44:28.732078] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.971 [2024-07-14 10:44:28.732085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.971 [2024-07-14 10:44:28.734718] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.971 [2024-07-14 10:44:28.744582] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.971 [2024-07-14 10:44:28.745002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.971 [2024-07-14 10:44:28.745019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.971 [2024-07-14 10:44:28.745026] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.971 [2024-07-14 10:44:28.745202] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.971 [2024-07-14 10:44:28.745391] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.971 [2024-07-14 10:44:28.745404] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.971 [2024-07-14 10:44:28.745411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.971 [2024-07-14 10:44:28.748245] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.971 [2024-07-14 10:44:28.757758] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.971 [2024-07-14 10:44:28.758195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.971 [2024-07-14 10:44:28.758212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.971 [2024-07-14 10:44:28.758219] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.971 [2024-07-14 10:44:28.758402] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.971 [2024-07-14 10:44:28.758580] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.971 [2024-07-14 10:44:28.758590] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.971 [2024-07-14 10:44:28.758596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.971 [2024-07-14 10:44:28.761427] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.971 [2024-07-14 10:44:28.770992] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.971 [2024-07-14 10:44:28.771437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.971 [2024-07-14 10:44:28.771454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.971 [2024-07-14 10:44:28.771462] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.971 [2024-07-14 10:44:28.771640] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.971 [2024-07-14 10:44:28.771819] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.971 [2024-07-14 10:44:28.771830] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.971 [2024-07-14 10:44:28.771836] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.971 [2024-07-14 10:44:28.774665] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.971 [2024-07-14 10:44:28.784188] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.971 [2024-07-14 10:44:28.784633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.971 [2024-07-14 10:44:28.784651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.971 [2024-07-14 10:44:28.784658] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.971 [2024-07-14 10:44:28.784835] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.971 [2024-07-14 10:44:28.785014] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.971 [2024-07-14 10:44:28.785024] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.971 [2024-07-14 10:44:28.785030] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.971 [2024-07-14 10:44:28.787860] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.972 [2024-07-14 10:44:28.797373] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.972 [2024-07-14 10:44:28.797809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.972 [2024-07-14 10:44:28.797826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.972 [2024-07-14 10:44:28.797834] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.972 [2024-07-14 10:44:28.798012] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.972 [2024-07-14 10:44:28.798190] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.972 [2024-07-14 10:44:28.798199] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.972 [2024-07-14 10:44:28.798206] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.972 [2024-07-14 10:44:28.801066] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.972 [2024-07-14 10:44:28.810409] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.972 [2024-07-14 10:44:28.810845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.972 [2024-07-14 10:44:28.810862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.972 [2024-07-14 10:44:28.810869] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.972 [2024-07-14 10:44:28.811046] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.972 [2024-07-14 10:44:28.811232] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.972 [2024-07-14 10:44:28.811242] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.972 [2024-07-14 10:44:28.811249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.972 [2024-07-14 10:44:28.814077] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.972 [2024-07-14 10:44:28.823594] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.972 [2024-07-14 10:44:28.824031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.972 [2024-07-14 10:44:28.824048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.972 [2024-07-14 10:44:28.824055] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.972 [2024-07-14 10:44:28.824242] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.972 [2024-07-14 10:44:28.824422] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.972 [2024-07-14 10:44:28.824432] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.972 [2024-07-14 10:44:28.824438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.972 [2024-07-14 10:44:28.827271] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.972 [2024-07-14 10:44:28.836799] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.972 [2024-07-14 10:44:28.837236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.972 [2024-07-14 10:44:28.837254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.972 [2024-07-14 10:44:28.837261] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.972 [2024-07-14 10:44:28.837438] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.972 [2024-07-14 10:44:28.837617] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.972 [2024-07-14 10:44:28.837628] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.972 [2024-07-14 10:44:28.837634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.972 [2024-07-14 10:44:28.840496] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.972 [2024-07-14 10:44:28.849909] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.972 [2024-07-14 10:44:28.850208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.972 [2024-07-14 10:44:28.850234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.972 [2024-07-14 10:44:28.850242] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.972 [2024-07-14 10:44:28.850419] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.972 [2024-07-14 10:44:28.850598] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.972 [2024-07-14 10:44:28.850609] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.972 [2024-07-14 10:44:28.850617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.972 [2024-07-14 10:44:28.853451] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.972 [2024-07-14 10:44:28.863093] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.972 [2024-07-14 10:44:28.863517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.972 [2024-07-14 10:44:28.863535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.972 [2024-07-14 10:44:28.863543] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.972 [2024-07-14 10:44:28.863721] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.972 [2024-07-14 10:44:28.863921] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.972 [2024-07-14 10:44:28.863931] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.972 [2024-07-14 10:44:28.863942] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.972 [2024-07-14 10:44:28.866785] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.972 [2024-07-14 10:44:28.876261] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.972 [2024-07-14 10:44:28.876609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.972 [2024-07-14 10:44:28.876626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.972 [2024-07-14 10:44:28.876633] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.972 [2024-07-14 10:44:28.876810] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.972 [2024-07-14 10:44:28.876989] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.972 [2024-07-14 10:44:28.876999] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.972 [2024-07-14 10:44:28.877005] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.972 [2024-07-14 10:44:28.879845] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.972 [2024-07-14 10:44:28.889381] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.972 [2024-07-14 10:44:28.889801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.972 [2024-07-14 10:44:28.889818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.972 [2024-07-14 10:44:28.889826] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.972 [2024-07-14 10:44:28.890003] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.972 [2024-07-14 10:44:28.890181] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.972 [2024-07-14 10:44:28.890191] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.972 [2024-07-14 10:44:28.890197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.972 [2024-07-14 10:44:28.893103] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.972 [2024-07-14 10:44:28.902513] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.972 [2024-07-14 10:44:28.902897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.972 [2024-07-14 10:44:28.902914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.972 [2024-07-14 10:44:28.902921] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.972 [2024-07-14 10:44:28.903098] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.972 [2024-07-14 10:44:28.903282] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.972 [2024-07-14 10:44:28.903292] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.972 [2024-07-14 10:44:28.903299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.972 [2024-07-14 10:44:28.906126] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.972 [2024-07-14 10:44:28.915690] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.972 [2024-07-14 10:44:28.916130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.972 [2024-07-14 10:44:28.916146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.972 [2024-07-14 10:44:28.916153] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.972 [2024-07-14 10:44:28.916339] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.972 [2024-07-14 10:44:28.916518] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.972 [2024-07-14 10:44:28.916528] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.972 [2024-07-14 10:44:28.916535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.972 [2024-07-14 10:44:28.919363] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.972 [2024-07-14 10:44:28.928880] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.972 [2024-07-14 10:44:28.929298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.972 [2024-07-14 10:44:28.929316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.972 [2024-07-14 10:44:28.929323] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.972 [2024-07-14 10:44:28.929500] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.972 [2024-07-14 10:44:28.929679] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.972 [2024-07-14 10:44:28.929690] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.973 [2024-07-14 10:44:28.929696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.973 [2024-07-14 10:44:28.932526] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.973 [2024-07-14 10:44:28.942034] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.973 [2024-07-14 10:44:28.942456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.973 [2024-07-14 10:44:28.942473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:43.973 [2024-07-14 10:44:28.942480] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:43.973 [2024-07-14 10:44:28.942656] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:43.973 [2024-07-14 10:44:28.942834] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.973 [2024-07-14 10:44:28.942843] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.973 [2024-07-14 10:44:28.942849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.973 [2024-07-14 10:44:28.945678] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.232 [2024-07-14 10:44:28.955219] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.232 [2024-07-14 10:44:28.955644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.232 [2024-07-14 10:44:28.955661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:44.232 [2024-07-14 10:44:28.955669] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:44.232 [2024-07-14 10:44:28.955850] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:44.232 [2024-07-14 10:44:28.956030] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.232 [2024-07-14 10:44:28.956040] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.232 [2024-07-14 10:44:28.956047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.232 [2024-07-14 10:44:28.958876] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.232 [2024-07-14 10:44:28.968412] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.232 [2024-07-14 10:44:28.968844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.232 [2024-07-14 10:44:28.968861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:44.232 [2024-07-14 10:44:28.968869] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:44.232 [2024-07-14 10:44:28.969045] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:44.232 [2024-07-14 10:44:28.969231] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.232 [2024-07-14 10:44:28.969241] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.232 [2024-07-14 10:44:28.969247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.232 [2024-07-14 10:44:28.972071] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.232 [2024-07-14 10:44:28.981597] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.232 [2024-07-14 10:44:28.982046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.232 [2024-07-14 10:44:28.982063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:44.232 [2024-07-14 10:44:28.982071] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:44.232 [2024-07-14 10:44:28.982255] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:44.232 [2024-07-14 10:44:28.982433] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.232 [2024-07-14 10:44:28.982443] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.232 [2024-07-14 10:44:28.982449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.232 [2024-07-14 10:44:28.985276] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.232 [2024-07-14 10:44:28.994794] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.232 [2024-07-14 10:44:28.995138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.232 [2024-07-14 10:44:28.995156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:44.232 [2024-07-14 10:44:28.995163] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:44.232 [2024-07-14 10:44:28.995345] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:44.232 [2024-07-14 10:44:28.995523] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.232 [2024-07-14 10:44:28.995533] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.232 [2024-07-14 10:44:28.995544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.232 [2024-07-14 10:44:28.998376] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.232 [2024-07-14 10:44:29.007890] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.232 [2024-07-14 10:44:29.008352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.232 [2024-07-14 10:44:29.008372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:44.232 [2024-07-14 10:44:29.008379] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:44.232 [2024-07-14 10:44:29.008555] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:44.232 [2024-07-14 10:44:29.008732] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.232 [2024-07-14 10:44:29.008741] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.232 [2024-07-14 10:44:29.008747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.232 [2024-07-14 10:44:29.011582] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.232 [2024-07-14 10:44:29.020948] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.232 [2024-07-14 10:44:29.021327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.232 [2024-07-14 10:44:29.021371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:44.233 [2024-07-14 10:44:29.021393] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:44.233 [2024-07-14 10:44:29.021823] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:44.233 [2024-07-14 10:44:29.022002] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.233 [2024-07-14 10:44:29.022013] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.233 [2024-07-14 10:44:29.022019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.233 [2024-07-14 10:44:29.024871] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.233 [2024-07-14 10:44:29.033951] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.233 [2024-07-14 10:44:29.034315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.233 [2024-07-14 10:44:29.034359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:44.233 [2024-07-14 10:44:29.034380] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:44.233 [2024-07-14 10:44:29.034829] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:44.233 [2024-07-14 10:44:29.035004] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.233 [2024-07-14 10:44:29.035014] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.233 [2024-07-14 10:44:29.035022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.233 [2024-07-14 10:44:29.037767] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.233 [2024-07-14 10:44:29.047010] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.233 [2024-07-14 10:44:29.047297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.233 [2024-07-14 10:44:29.047355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:44.233 [2024-07-14 10:44:29.047378] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:44.233 [2024-07-14 10:44:29.047876] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:44.233 [2024-07-14 10:44:29.048040] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.233 [2024-07-14 10:44:29.048050] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.233 [2024-07-14 10:44:29.048055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.233 [2024-07-14 10:44:29.050814] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.233 [2024-07-14 10:44:29.059964] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.233 [2024-07-14 10:44:29.060302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.233 [2024-07-14 10:44:29.060319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:44.233 [2024-07-14 10:44:29.060327] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:44.233 [2024-07-14 10:44:29.060498] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:44.233 [2024-07-14 10:44:29.060671] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.233 [2024-07-14 10:44:29.060680] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.233 [2024-07-14 10:44:29.060686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.233 [2024-07-14 10:44:29.063399] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.233 [2024-07-14 10:44:29.072891] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.233 [2024-07-14 10:44:29.073173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.233 [2024-07-14 10:44:29.073189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:44.233 [2024-07-14 10:44:29.073196] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:44.233 [2024-07-14 10:44:29.073387] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:44.233 [2024-07-14 10:44:29.073562] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.233 [2024-07-14 10:44:29.073571] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.233 [2024-07-14 10:44:29.073578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.233 [2024-07-14 10:44:29.076239] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.233 [2024-07-14 10:44:29.085770] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.233 [2024-07-14 10:44:29.086167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.233 [2024-07-14 10:44:29.086183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:44.233 [2024-07-14 10:44:29.086190] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:44.233 [2024-07-14 10:44:29.086382] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:44.233 [2024-07-14 10:44:29.086561] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.233 [2024-07-14 10:44:29.086570] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.233 [2024-07-14 10:44:29.086577] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.233 [2024-07-14 10:44:29.089236] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.233 [2024-07-14 10:44:29.098783] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.233 [2024-07-14 10:44:29.099077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.233 [2024-07-14 10:44:29.099093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:44.233 [2024-07-14 10:44:29.099100] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:44.233 [2024-07-14 10:44:29.099280] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:44.233 [2024-07-14 10:44:29.099452] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.233 [2024-07-14 10:44:29.099462] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.233 [2024-07-14 10:44:29.099468] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.233 [2024-07-14 10:44:29.102214] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.233 [2024-07-14 10:44:29.111707] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.233 [2024-07-14 10:44:29.112159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.233 [2024-07-14 10:44:29.112201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:44.233 [2024-07-14 10:44:29.112223] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:44.233 [2024-07-14 10:44:29.112740] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:44.233 [2024-07-14 10:44:29.112931] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.233 [2024-07-14 10:44:29.112941] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.233 [2024-07-14 10:44:29.112947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.233 [2024-07-14 10:44:29.115779] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.233 [2024-07-14 10:44:29.124814] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.233 [2024-07-14 10:44:29.125181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.233 [2024-07-14 10:44:29.125198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:44.233 [2024-07-14 10:44:29.125205] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:44.233 [2024-07-14 10:44:29.125391] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:44.233 [2024-07-14 10:44:29.125571] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.233 [2024-07-14 10:44:29.125581] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.233 [2024-07-14 10:44:29.125588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.233 [2024-07-14 10:44:29.128419] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.233 [2024-07-14 10:44:29.137944] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.233 [2024-07-14 10:44:29.138314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.233 [2024-07-14 10:44:29.138331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:44.233 [2024-07-14 10:44:29.138339] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:44.233 [2024-07-14 10:44:29.138516] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:44.233 [2024-07-14 10:44:29.138693] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.233 [2024-07-14 10:44:29.138703] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.233 [2024-07-14 10:44:29.138709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.233 [2024-07-14 10:44:29.141631] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.233 [2024-07-14 10:44:29.150987] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.233 [2024-07-14 10:44:29.151407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.233 [2024-07-14 10:44:29.151424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:44.233 [2024-07-14 10:44:29.151431] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:44.233 [2024-07-14 10:44:29.151608] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:44.233 [2024-07-14 10:44:29.151786] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.233 [2024-07-14 10:44:29.151795] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.233 [2024-07-14 10:44:29.151803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.233 [2024-07-14 10:44:29.154634] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.234 [2024-07-14 10:44:29.164141] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.234 [2024-07-14 10:44:29.164560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.234 [2024-07-14 10:44:29.164577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:44.234 [2024-07-14 10:44:29.164585] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:44.234 [2024-07-14 10:44:29.164762] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:44.234 [2024-07-14 10:44:29.164941] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.234 [2024-07-14 10:44:29.164951] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.234 [2024-07-14 10:44:29.164958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.234 [2024-07-14 10:44:29.167786] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.234 [2024-07-14 10:44:29.177301] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.234 [2024-07-14 10:44:29.177742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.234 [2024-07-14 10:44:29.177758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:44.234 [2024-07-14 10:44:29.177769] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:44.234 [2024-07-14 10:44:29.177947] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:44.234 [2024-07-14 10:44:29.178125] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.234 [2024-07-14 10:44:29.178134] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.234 [2024-07-14 10:44:29.178142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.234 [2024-07-14 10:44:29.180969] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.234 [2024-07-14 10:44:29.190485] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.234 [2024-07-14 10:44:29.190941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.234 [2024-07-14 10:44:29.190983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:44.234 [2024-07-14 10:44:29.191005] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:44.234 [2024-07-14 10:44:29.191565] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:44.234 [2024-07-14 10:44:29.191956] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.234 [2024-07-14 10:44:29.191974] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.234 [2024-07-14 10:44:29.191988] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.234 [2024-07-14 10:44:29.198217] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.234 [2024-07-14 10:44:29.205672] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.234 [2024-07-14 10:44:29.206196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.234 [2024-07-14 10:44:29.206217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:44.234 [2024-07-14 10:44:29.206233] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:44.234 [2024-07-14 10:44:29.206487] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:44.234 [2024-07-14 10:44:29.206742] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.234 [2024-07-14 10:44:29.206754] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.234 [2024-07-14 10:44:29.206763] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.234 [2024-07-14 10:44:29.210824] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.493 [2024-07-14 10:44:29.218776] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.493 [2024-07-14 10:44:29.219124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.493 [2024-07-14 10:44:29.219167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:44.493 [2024-07-14 10:44:29.219189] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:44.493 [2024-07-14 10:44:29.219784] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:44.493 [2024-07-14 10:44:29.220309] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.493 [2024-07-14 10:44:29.220322] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.493 [2024-07-14 10:44:29.220328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.493 [2024-07-14 10:44:29.223001] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
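Every record block above follows the same reset pattern: disconnect the controller, attempt the TCP reconnect, fail with ECONNREFUSED, leave the controller in the failed state, and report the reset as failed before the next attempt starts. The stand-alone sketch below only illustrates that retry pattern under stated assumptions; try_connect(), max_attempts, and the printed messages are hypothetical stand-ins for the example and are not SPDK functions, settings, or exact log text.

/*
 * Hypothetical illustration of the retry pattern visible in this log:
 * each attempt tries to reconnect, is refused, and is reported as a
 * failed reset. try_connect() is a stand-in for the transport connect
 * step and always refuses in this sketch.
 */
#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

static int try_connect(const char *addr, int port)
{
        (void)addr;
        (void)port;
        return -ECONNREFUSED;   /* errno 111, as seen throughout the log */
}

int main(void)
{
        const int max_attempts = 3;   /* assumption for the example */
        bool failed = false;

        for (int attempt = 1; attempt <= max_attempts; attempt++) {
                printf("resetting controller (attempt %d)\n", attempt);

                int rc = try_connect("10.0.0.2", 4420);
                if (rc == 0) {
                        failed = false;
                        break;
                }

                /* Reconnect refused: controller stays in the failed state. */
                printf("controller reinitialization failed (rc=%d)\n", rc);
                failed = true;
        }

        if (failed)
                printf("Resetting controller failed.\n");

        return failed ? 1 : 0;
}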
00:35:44.493 [2024-07-14 10:44:29.231676] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.493 [2024-07-14 10:44:29.232093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.493 [2024-07-14 10:44:29.232139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:44.493 [2024-07-14 10:44:29.232161] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:44.493 [2024-07-14 10:44:29.232757] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:44.493 [2024-07-14 10:44:29.232966] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.493 [2024-07-14 10:44:29.232976] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.493 [2024-07-14 10:44:29.232983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.493 [2024-07-14 10:44:29.235611] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.493 [2024-07-14 10:44:29.244570] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.493 [2024-07-14 10:44:29.244976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.493 [2024-07-14 10:44:29.244992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:44.493 [2024-07-14 10:44:29.245000] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:44.493 [2024-07-14 10:44:29.245162] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:44.493 [2024-07-14 10:44:29.245350] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.493 [2024-07-14 10:44:29.245360] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.493 [2024-07-14 10:44:29.245366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.493 [2024-07-14 10:44:29.248028] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.493 [2024-07-14 10:44:29.257432] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.493 [2024-07-14 10:44:29.257800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.493 [2024-07-14 10:44:29.257816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:44.493 [2024-07-14 10:44:29.257824] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:44.493 [2024-07-14 10:44:29.257987] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:44.493 [2024-07-14 10:44:29.258151] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.493 [2024-07-14 10:44:29.258160] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.493 [2024-07-14 10:44:29.258166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.493 [2024-07-14 10:44:29.260917] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.493 [2024-07-14 10:44:29.270332] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.493 [2024-07-14 10:44:29.270750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.493 [2024-07-14 10:44:29.270792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:44.493 [2024-07-14 10:44:29.270814] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:44.493 [2024-07-14 10:44:29.271240] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:44.493 [2024-07-14 10:44:29.271428] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.493 [2024-07-14 10:44:29.271439] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.493 [2024-07-14 10:44:29.271445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.493 [2024-07-14 10:44:29.274099] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.493 [2024-07-14 10:44:29.283218] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.493 [2024-07-14 10:44:29.283656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.493 [2024-07-14 10:44:29.283699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:44.493 [2024-07-14 10:44:29.283721] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:44.493 [2024-07-14 10:44:29.284260] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:44.493 [2024-07-14 10:44:29.284449] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.493 [2024-07-14 10:44:29.284459] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.493 [2024-07-14 10:44:29.284465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.493 [2024-07-14 10:44:29.287120] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.493 [2024-07-14 10:44:29.296089] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.493 [2024-07-14 10:44:29.296441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.493 [2024-07-14 10:44:29.296457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:44.493 [2024-07-14 10:44:29.296463] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:44.493 [2024-07-14 10:44:29.296625] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:44.493 [2024-07-14 10:44:29.296788] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.494 [2024-07-14 10:44:29.296797] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.494 [2024-07-14 10:44:29.296803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.494 [2024-07-14 10:44:29.299494] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.494 [2024-07-14 10:44:29.308971] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.494 [2024-07-14 10:44:29.309399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.494 [2024-07-14 10:44:29.309442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:44.494 [2024-07-14 10:44:29.309464] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:44.494 [2024-07-14 10:44:29.310056] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:44.494 [2024-07-14 10:44:29.310478] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.494 [2024-07-14 10:44:29.310488] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.494 [2024-07-14 10:44:29.310494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.494 [2024-07-14 10:44:29.313077] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.494 [2024-07-14 10:44:29.321895] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.494 [2024-07-14 10:44:29.322314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.494 [2024-07-14 10:44:29.322330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:44.494 [2024-07-14 10:44:29.322337] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:44.494 [2024-07-14 10:44:29.322499] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:44.494 [2024-07-14 10:44:29.322662] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.494 [2024-07-14 10:44:29.322671] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.494 [2024-07-14 10:44:29.322677] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.494 [2024-07-14 10:44:29.325380] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.494 [2024-07-14 10:44:29.334797] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.494 [2024-07-14 10:44:29.335208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.494 [2024-07-14 10:44:29.335262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:44.494 [2024-07-14 10:44:29.335284] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:44.494 [2024-07-14 10:44:29.335862] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:44.494 [2024-07-14 10:44:29.336451] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.494 [2024-07-14 10:44:29.336461] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.494 [2024-07-14 10:44:29.336467] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.494 [2024-07-14 10:44:29.339121] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.494 [2024-07-14 10:44:29.347622] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.494 [2024-07-14 10:44:29.348024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.494 [2024-07-14 10:44:29.348039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:44.494 [2024-07-14 10:44:29.348047] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:44.494 [2024-07-14 10:44:29.348209] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:44.494 [2024-07-14 10:44:29.348400] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.494 [2024-07-14 10:44:29.348410] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.494 [2024-07-14 10:44:29.348420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.494 [2024-07-14 10:44:29.351082] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.494 [2024-07-14 10:44:29.360543] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.494 [2024-07-14 10:44:29.360990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.494 [2024-07-14 10:44:29.361032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:44.494 [2024-07-14 10:44:29.361054] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:44.494 [2024-07-14 10:44:29.361485] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:44.494 [2024-07-14 10:44:29.361659] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.494 [2024-07-14 10:44:29.361669] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.494 [2024-07-14 10:44:29.361676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.494 [2024-07-14 10:44:29.364370] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.494 [2024-07-14 10:44:29.373699] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.494 [2024-07-14 10:44:29.374123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.494 [2024-07-14 10:44:29.374165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:44.494 [2024-07-14 10:44:29.374186] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:44.494 [2024-07-14 10:44:29.374691] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:44.494 [2024-07-14 10:44:29.374855] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.494 [2024-07-14 10:44:29.374864] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.494 [2024-07-14 10:44:29.374871] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.494 [2024-07-14 10:44:29.377560] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.494 [2024-07-14 10:44:29.386553] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.494 [2024-07-14 10:44:29.386981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.494 [2024-07-14 10:44:29.386997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:44.494 [2024-07-14 10:44:29.387004] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:44.494 [2024-07-14 10:44:29.387165] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:44.494 [2024-07-14 10:44:29.387354] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.494 [2024-07-14 10:44:29.387364] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.494 [2024-07-14 10:44:29.387370] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.494 [2024-07-14 10:44:29.390083] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.494 [2024-07-14 10:44:29.399354] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.494 [2024-07-14 10:44:29.399707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.494 [2024-07-14 10:44:29.399722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:44.494 [2024-07-14 10:44:29.399729] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:44.494 [2024-07-14 10:44:29.399890] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:44.494 [2024-07-14 10:44:29.400053] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.494 [2024-07-14 10:44:29.400062] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.494 [2024-07-14 10:44:29.400068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.494 [2024-07-14 10:44:29.402758] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.494 [2024-07-14 10:44:29.412218] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.494 [2024-07-14 10:44:29.412579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.494 [2024-07-14 10:44:29.412621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:44.494 [2024-07-14 10:44:29.412642] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:44.494 [2024-07-14 10:44:29.413220] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:44.494 [2024-07-14 10:44:29.413605] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.494 [2024-07-14 10:44:29.413615] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.494 [2024-07-14 10:44:29.413621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.494 [2024-07-14 10:44:29.416208] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.494 [2024-07-14 10:44:29.425135] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.494 [2024-07-14 10:44:29.425594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.494 [2024-07-14 10:44:29.425637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:44.494 [2024-07-14 10:44:29.425660] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:44.494 [2024-07-14 10:44:29.426253] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:44.494 [2024-07-14 10:44:29.426493] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.494 [2024-07-14 10:44:29.426504] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.494 [2024-07-14 10:44:29.426510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.494 [2024-07-14 10:44:29.429165] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.494 [2024-07-14 10:44:29.437986] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.494 [2024-07-14 10:44:29.438413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.494 [2024-07-14 10:44:29.438429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:44.494 [2024-07-14 10:44:29.438436] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:44.494 [2024-07-14 10:44:29.438599] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:44.495 [2024-07-14 10:44:29.438765] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.495 [2024-07-14 10:44:29.438774] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.495 [2024-07-14 10:44:29.438780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.495 [2024-07-14 10:44:29.441467] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.495 [2024-07-14 10:44:29.450900] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.495 [2024-07-14 10:44:29.451334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.495 [2024-07-14 10:44:29.451376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:44.495 [2024-07-14 10:44:29.451399] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:44.495 [2024-07-14 10:44:29.451624] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:44.495 [2024-07-14 10:44:29.451789] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.495 [2024-07-14 10:44:29.451798] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.495 [2024-07-14 10:44:29.451804] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.495 [2024-07-14 10:44:29.454490] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.495 [2024-07-14 10:44:29.463812] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.495 [2024-07-14 10:44:29.464141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.495 [2024-07-14 10:44:29.464157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:44.495 [2024-07-14 10:44:29.464164] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:44.495 [2024-07-14 10:44:29.464349] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:44.495 [2024-07-14 10:44:29.464528] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.495 [2024-07-14 10:44:29.464539] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.495 [2024-07-14 10:44:29.464545] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.495 [2024-07-14 10:44:29.467202] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.754 [2024-07-14 10:44:29.476729] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.754 [2024-07-14 10:44:29.477150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.754 [2024-07-14 10:44:29.477167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:44.754 [2024-07-14 10:44:29.477175] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:44.754 [2024-07-14 10:44:29.477383] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:44.754 [2024-07-14 10:44:29.477557] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.754 [2024-07-14 10:44:29.477567] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.754 [2024-07-14 10:44:29.477575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.754 [2024-07-14 10:44:29.480234] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.754 [2024-07-14 10:44:29.489648] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.754 [2024-07-14 10:44:29.490008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.754 [2024-07-14 10:44:29.490049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:44.754 [2024-07-14 10:44:29.490070] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:44.754 [2024-07-14 10:44:29.490534] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:44.754 [2024-07-14 10:44:29.490709] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.754 [2024-07-14 10:44:29.490718] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.754 [2024-07-14 10:44:29.490725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.754 [2024-07-14 10:44:29.493420] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.754 [2024-07-14 10:44:29.502435] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.754 [2024-07-14 10:44:29.502854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.754 [2024-07-14 10:44:29.502907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:44.754 [2024-07-14 10:44:29.502929] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:44.754 [2024-07-14 10:44:29.503501] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:44.754 [2024-07-14 10:44:29.503675] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.754 [2024-07-14 10:44:29.503685] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.754 [2024-07-14 10:44:29.503691] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.754 [2024-07-14 10:44:29.506336] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.754 [2024-07-14 10:44:29.515356] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.754 [2024-07-14 10:44:29.515793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.754 [2024-07-14 10:44:29.515835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:44.754 [2024-07-14 10:44:29.515856] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:44.754 [2024-07-14 10:44:29.516451] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:44.754 [2024-07-14 10:44:29.516976] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.754 [2024-07-14 10:44:29.516985] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.754 [2024-07-14 10:44:29.516991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.754 [2024-07-14 10:44:29.519581] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.754 [2024-07-14 10:44:29.528149] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.754 [2024-07-14 10:44:29.528585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.754 [2024-07-14 10:44:29.528635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:44.754 [2024-07-14 10:44:29.528657] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:44.754 [2024-07-14 10:44:29.529071] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:44.754 [2024-07-14 10:44:29.529241] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.754 [2024-07-14 10:44:29.529267] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.754 [2024-07-14 10:44:29.529273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.754 [2024-07-14 10:44:29.531935] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.754 [2024-07-14 10:44:29.541018] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.754 [2024-07-14 10:44:29.541453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.754 [2024-07-14 10:44:29.541495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:44.754 [2024-07-14 10:44:29.541517] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:44.754 [2024-07-14 10:44:29.541724] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:44.754 [2024-07-14 10:44:29.541898] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.754 [2024-07-14 10:44:29.541907] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.754 [2024-07-14 10:44:29.541914] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.754 [2024-07-14 10:44:29.544660] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.754 [2024-07-14 10:44:29.553936] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.754 [2024-07-14 10:44:29.554380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.754 [2024-07-14 10:44:29.554396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:44.754 [2024-07-14 10:44:29.554403] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:44.754 [2024-07-14 10:44:29.554565] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:44.754 [2024-07-14 10:44:29.554727] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.754 [2024-07-14 10:44:29.554736] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.754 [2024-07-14 10:44:29.554743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.754 [2024-07-14 10:44:29.557438] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.754 [2024-07-14 10:44:29.566782] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.754 [2024-07-14 10:44:29.567211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.754 [2024-07-14 10:44:29.567265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:44.754 [2024-07-14 10:44:29.567288] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:44.754 [2024-07-14 10:44:29.567698] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:44.754 [2024-07-14 10:44:29.567867] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.754 [2024-07-14 10:44:29.567876] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.754 [2024-07-14 10:44:29.567882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.754 [2024-07-14 10:44:29.570574] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.754 [2024-07-14 10:44:29.579686] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.754 [2024-07-14 10:44:29.580026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.754 [2024-07-14 10:44:29.580041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:44.754 [2024-07-14 10:44:29.580048] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:44.754 [2024-07-14 10:44:29.580210] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:44.754 [2024-07-14 10:44:29.580401] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.754 [2024-07-14 10:44:29.580411] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.754 [2024-07-14 10:44:29.580417] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.754 [2024-07-14 10:44:29.583075] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.754 [2024-07-14 10:44:29.592498] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.754 [2024-07-14 10:44:29.592844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.754 [2024-07-14 10:44:29.592860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:44.754 [2024-07-14 10:44:29.592867] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:44.754 [2024-07-14 10:44:29.593030] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:44.754 [2024-07-14 10:44:29.593193] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.754 [2024-07-14 10:44:29.593202] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.754 [2024-07-14 10:44:29.593208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.754 [2024-07-14 10:44:29.595899] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.754 [2024-07-14 10:44:29.605326] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.754 [2024-07-14 10:44:29.605741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.754 [2024-07-14 10:44:29.605758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:44.754 [2024-07-14 10:44:29.605765] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:44.755 [2024-07-14 10:44:29.605927] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:44.755 [2024-07-14 10:44:29.606090] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.755 [2024-07-14 10:44:29.606099] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.755 [2024-07-14 10:44:29.606105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.755 [2024-07-14 10:44:29.608794] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.755 [2024-07-14 10:44:29.618161] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.755 [2024-07-14 10:44:29.618504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.755 [2024-07-14 10:44:29.618521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:44.755 [2024-07-14 10:44:29.618528] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:44.755 [2024-07-14 10:44:29.618690] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:44.755 [2024-07-14 10:44:29.618852] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.755 [2024-07-14 10:44:29.618861] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.755 [2024-07-14 10:44:29.618867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.755 [2024-07-14 10:44:29.621690] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.755 [2024-07-14 10:44:29.631194] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.755 [2024-07-14 10:44:29.631638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.755 [2024-07-14 10:44:29.631655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:44.755 [2024-07-14 10:44:29.631662] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:44.755 [2024-07-14 10:44:29.631835] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:44.755 [2024-07-14 10:44:29.632010] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.755 [2024-07-14 10:44:29.632019] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.755 [2024-07-14 10:44:29.632026] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.755 [2024-07-14 10:44:29.634814] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.755 [2024-07-14 10:44:29.644189] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.755 [2024-07-14 10:44:29.644622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.755 [2024-07-14 10:44:29.644640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:44.755 [2024-07-14 10:44:29.644647] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:44.755 [2024-07-14 10:44:29.644822] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:44.755 [2024-07-14 10:44:29.644985] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.755 [2024-07-14 10:44:29.644994] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.755 [2024-07-14 10:44:29.645000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.755 [2024-07-14 10:44:29.647689] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.755 [2024-07-14 10:44:29.657049] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.755 [2024-07-14 10:44:29.657402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.755 [2024-07-14 10:44:29.657418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:44.755 [2024-07-14 10:44:29.657428] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:44.755 [2024-07-14 10:44:29.657591] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:44.755 [2024-07-14 10:44:29.657755] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.755 [2024-07-14 10:44:29.657764] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.755 [2024-07-14 10:44:29.657770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.755 [2024-07-14 10:44:29.660451] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.755 [2024-07-14 10:44:29.669941] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.755 [2024-07-14 10:44:29.670353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.755 [2024-07-14 10:44:29.670396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:44.755 [2024-07-14 10:44:29.670419] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:44.755 [2024-07-14 10:44:29.670997] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:44.755 [2024-07-14 10:44:29.671184] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.755 [2024-07-14 10:44:29.671193] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.755 [2024-07-14 10:44:29.671199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.755 [2024-07-14 10:44:29.673943] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.755 [2024-07-14 10:44:29.682863] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.755 [2024-07-14 10:44:29.683263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.755 [2024-07-14 10:44:29.683279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:44.755 [2024-07-14 10:44:29.683287] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:44.755 [2024-07-14 10:44:29.683449] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:44.755 [2024-07-14 10:44:29.683611] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.755 [2024-07-14 10:44:29.683620] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.755 [2024-07-14 10:44:29.683626] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.755 [2024-07-14 10:44:29.686319] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.755 [2024-07-14 10:44:29.695678] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.755 [2024-07-14 10:44:29.696115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.755 [2024-07-14 10:44:29.696157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:44.755 [2024-07-14 10:44:29.696179] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:44.755 [2024-07-14 10:44:29.696761] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:44.755 [2024-07-14 10:44:29.697151] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.755 [2024-07-14 10:44:29.697174] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.755 [2024-07-14 10:44:29.697187] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.755 [2024-07-14 10:44:29.703423] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.755 [2024-07-14 10:44:29.710756] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.755 [2024-07-14 10:44:29.711277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.755 [2024-07-14 10:44:29.711325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:44.755 [2024-07-14 10:44:29.711347] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:44.755 [2024-07-14 10:44:29.711925] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:44.755 [2024-07-14 10:44:29.712436] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.755 [2024-07-14 10:44:29.712450] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.755 [2024-07-14 10:44:29.712459] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.755 [2024-07-14 10:44:29.716513] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.755 [2024-07-14 10:44:29.723699] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.755 [2024-07-14 10:44:29.724067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.755 [2024-07-14 10:44:29.724108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:44.755 [2024-07-14 10:44:29.724130] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:44.755 [2024-07-14 10:44:29.724724] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:44.755 [2024-07-14 10:44:29.725315] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.755 [2024-07-14 10:44:29.725348] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.755 [2024-07-14 10:44:29.725355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.755 [2024-07-14 10:44:29.728017] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.016 [2024-07-14 10:44:29.736656] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.016 [2024-07-14 10:44:29.737081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.016 [2024-07-14 10:44:29.737098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.016 [2024-07-14 10:44:29.737105] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.016 [2024-07-14 10:44:29.737284] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.016 [2024-07-14 10:44:29.737457] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.016 [2024-07-14 10:44:29.737467] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.016 [2024-07-14 10:44:29.737474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.016 [2024-07-14 10:44:29.740179] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:45.016 [2024-07-14 10:44:29.749601] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.016 [2024-07-14 10:44:29.750039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.016 [2024-07-14 10:44:29.750081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.016 [2024-07-14 10:44:29.750103] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.016 [2024-07-14 10:44:29.750693] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.016 [2024-07-14 10:44:29.751288] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.016 [2024-07-14 10:44:29.751315] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.016 [2024-07-14 10:44:29.751335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.016 [2024-07-14 10:44:29.753992] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.016 [2024-07-14 10:44:29.762564] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.016 [2024-07-14 10:44:29.762980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.016 [2024-07-14 10:44:29.763021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.016 [2024-07-14 10:44:29.763044] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.016 [2024-07-14 10:44:29.763546] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.016 [2024-07-14 10:44:29.763711] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.016 [2024-07-14 10:44:29.763720] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.016 [2024-07-14 10:44:29.763726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.016 [2024-07-14 10:44:29.766341] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:45.016 [2024-07-14 10:44:29.775491] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.016 [2024-07-14 10:44:29.775923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.016 [2024-07-14 10:44:29.775966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.016 [2024-07-14 10:44:29.775988] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.016 [2024-07-14 10:44:29.776469] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.016 [2024-07-14 10:44:29.776649] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.016 [2024-07-14 10:44:29.776658] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.016 [2024-07-14 10:44:29.776665] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.016 [2024-07-14 10:44:29.779394] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.016 [2024-07-14 10:44:29.788477] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.016 [2024-07-14 10:44:29.788903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.016 [2024-07-14 10:44:29.788919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.016 [2024-07-14 10:44:29.788926] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.016 [2024-07-14 10:44:29.789091] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.016 [2024-07-14 10:44:29.789259] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.016 [2024-07-14 10:44:29.789269] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.016 [2024-07-14 10:44:29.789275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.016 [2024-07-14 10:44:29.791957] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:45.016 [2024-07-14 10:44:29.801340] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.016 [2024-07-14 10:44:29.801673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.016 [2024-07-14 10:44:29.801716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.016 [2024-07-14 10:44:29.801738] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.016 [2024-07-14 10:44:29.802314] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.016 [2024-07-14 10:44:29.802488] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.016 [2024-07-14 10:44:29.802498] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.016 [2024-07-14 10:44:29.802504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.016 [2024-07-14 10:44:29.805167] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.016 [2024-07-14 10:44:29.814302] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.016 [2024-07-14 10:44:29.814710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.016 [2024-07-14 10:44:29.814727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.016 [2024-07-14 10:44:29.814734] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.016 [2024-07-14 10:44:29.814906] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.016 [2024-07-14 10:44:29.815081] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.016 [2024-07-14 10:44:29.815090] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.016 [2024-07-14 10:44:29.815096] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.016 [2024-07-14 10:44:29.817843] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:45.016 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2622774 Killed "${NVMF_APP[@]}" "$@" 00:35:45.016 10:44:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:35:45.016 10:44:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:35:45.016 10:44:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:45.016 10:44:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:45.016 [2024-07-14 10:44:29.827472] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.016 10:44:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:45.016 [2024-07-14 10:44:29.827755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.016 [2024-07-14 10:44:29.827773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.016 [2024-07-14 10:44:29.827784] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.016 [2024-07-14 10:44:29.827960] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.017 [2024-07-14 10:44:29.828139] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.017 [2024-07-14 10:44:29.828149] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.017 [2024-07-14 10:44:29.828155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.017 [2024-07-14 10:44:29.830991] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.017 10:44:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2624181 00:35:45.017 10:44:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2624181 00:35:45.017 10:44:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:35:45.017 10:44:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 2624181 ']' 00:35:45.017 10:44:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:45.017 10:44:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:45.017 10:44:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:45.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:35:45.017 10:44:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:45.017 10:44:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:45.017 [2024-07-14 10:44:29.840513] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.017 [2024-07-14 10:44:29.840950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.017 [2024-07-14 10:44:29.840967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.017 [2024-07-14 10:44:29.840975] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.017 [2024-07-14 10:44:29.841151] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.017 [2024-07-14 10:44:29.841332] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.017 [2024-07-14 10:44:29.841343] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.017 [2024-07-14 10:44:29.841350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.017 [2024-07-14 10:44:29.844177] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.017 [2024-07-14 10:44:29.853694] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.017 [2024-07-14 10:44:29.854131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.017 [2024-07-14 10:44:29.854148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.017 [2024-07-14 10:44:29.854155] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.017 [2024-07-14 10:44:29.854363] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.017 [2024-07-14 10:44:29.854542] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.017 [2024-07-14 10:44:29.854552] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.017 [2024-07-14 10:44:29.854562] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.017 [2024-07-14 10:44:29.857388] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
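The xtrace output above shows bdevperf.sh killing the old nvmf_tgt (line 35 of the script) and then calling tgt_init/nvmfappstart, which relaunches the target inside the cvl_0_0_ns_spdk network namespace and waits for its RPC socket. A minimal manual sketch of that same sequence, assuming the command line shown in the log and the default /var/tmp/spdk.sock socket (the real waitforlisten helper is not shown here, so the polling loop below is only an approximation), could look like:

  # Sketch only: restart the target in the test netns with the flags seen in the log.
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  # Poll the RPC socket until the app is up before issuing any further RPCs.
  until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done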
00:35:45.017 [2024-07-14 10:44:29.866777] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.017 [2024-07-14 10:44:29.867231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.017 [2024-07-14 10:44:29.867250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.017 [2024-07-14 10:44:29.867257] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.017 [2024-07-14 10:44:29.867436] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.017 [2024-07-14 10:44:29.867622] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.017 [2024-07-14 10:44:29.867631] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.017 [2024-07-14 10:44:29.867638] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.017 [2024-07-14 10:44:29.870417] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.017 [2024-07-14 10:44:29.879960] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.017 [2024-07-14 10:44:29.880326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.017 [2024-07-14 10:44:29.880349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.017 [2024-07-14 10:44:29.880357] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.017 [2024-07-14 10:44:29.880535] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.017 [2024-07-14 10:44:29.880679] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:35:45.017 [2024-07-14 10:44:29.880714] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.017 [2024-07-14 10:44:29.880721] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:45.017 [2024-07-14 10:44:29.880722] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.017 [2024-07-14 10:44:29.880733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.017 [2024-07-14 10:44:29.883514] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:45.017 [2024-07-14 10:44:29.892946] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.017 [2024-07-14 10:44:29.893366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.017 [2024-07-14 10:44:29.893383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.017 [2024-07-14 10:44:29.893391] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.017 [2024-07-14 10:44:29.893576] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.017 [2024-07-14 10:44:29.893749] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.017 [2024-07-14 10:44:29.893758] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.017 [2024-07-14 10:44:29.893765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.017 [2024-07-14 10:44:29.896713] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.017 [2024-07-14 10:44:29.906109] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.017 [2024-07-14 10:44:29.906556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.017 [2024-07-14 10:44:29.906573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.017 [2024-07-14 10:44:29.906581] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.017 [2024-07-14 10:44:29.906752] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.017 [2024-07-14 10:44:29.906924] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.017 [2024-07-14 10:44:29.906933] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.017 [2024-07-14 10:44:29.906940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.017 EAL: No free 2048 kB hugepages reported on node 1 00:35:45.017 [2024-07-14 10:44:29.909746] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:45.017 [2024-07-14 10:44:29.919253] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.017 [2024-07-14 10:44:29.919659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.017 [2024-07-14 10:44:29.919676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.017 [2024-07-14 10:44:29.919684] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.017 [2024-07-14 10:44:29.919861] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.017 [2024-07-14 10:44:29.920040] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.017 [2024-07-14 10:44:29.920050] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.017 [2024-07-14 10:44:29.920056] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.017 [2024-07-14 10:44:29.922927] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.017 [2024-07-14 10:44:29.932348] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.017 [2024-07-14 10:44:29.932710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.017 [2024-07-14 10:44:29.932728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.017 [2024-07-14 10:44:29.932735] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.017 [2024-07-14 10:44:29.932908] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.017 [2024-07-14 10:44:29.933080] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.017 [2024-07-14 10:44:29.933088] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.017 [2024-07-14 10:44:29.933095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.017 [2024-07-14 10:44:29.935929] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:45.017 [2024-07-14 10:44:29.945478] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.017 [2024-07-14 10:44:29.945841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.017 [2024-07-14 10:44:29.945858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.017 [2024-07-14 10:44:29.945871] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.017 [2024-07-14 10:44:29.946049] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.017 [2024-07-14 10:44:29.946232] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.017 [2024-07-14 10:44:29.946242] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.017 [2024-07-14 10:44:29.946249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.018 [2024-07-14 10:44:29.949033] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.018 [2024-07-14 10:44:29.954083] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:45.018 [2024-07-14 10:44:29.958614] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.018 [2024-07-14 10:44:29.959049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.018 [2024-07-14 10:44:29.959067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.018 [2024-07-14 10:44:29.959075] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.018 [2024-07-14 10:44:29.959252] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.018 [2024-07-14 10:44:29.959425] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.018 [2024-07-14 10:44:29.959434] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.018 [2024-07-14 10:44:29.959441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.018 [2024-07-14 10:44:29.962266] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:45.018 [2024-07-14 10:44:29.971649] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.018 [2024-07-14 10:44:29.971992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.018 [2024-07-14 10:44:29.972010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.018 [2024-07-14 10:44:29.972017] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.018 [2024-07-14 10:44:29.972190] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.018 [2024-07-14 10:44:29.972389] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.018 [2024-07-14 10:44:29.972399] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.018 [2024-07-14 10:44:29.972406] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.018 [2024-07-14 10:44:29.975221] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.018 [2024-07-14 10:44:29.984816] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.018 [2024-07-14 10:44:29.985295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.018 [2024-07-14 10:44:29.985318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.018 [2024-07-14 10:44:29.985327] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.018 [2024-07-14 10:44:29.985508] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.018 [2024-07-14 10:44:29.985692] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.018 [2024-07-14 10:44:29.985702] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.018 [2024-07-14 10:44:29.985709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.018 [2024-07-14 10:44:29.988487] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.278 [2024-07-14 10:44:29.994733] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:45.278 [2024-07-14 10:44:29.994764] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:45.278 [2024-07-14 10:44:29.994771] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:45.278 [2024-07-14 10:44:29.994778] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:45.278 [2024-07-14 10:44:29.994784] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
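The app_setup_trace notices above name the commands for pulling the tracepoint data out of this run. Assuming the same instance id (-i 0) and that spdk_trace is run from the SPDK build used here, a snapshot could be taken roughly as follows (a sketch of the hints printed by the app, not part of the test itself):

  # Capture a snapshot of the nvmf tracepoints from the running app (shm instance id 0).
  spdk_trace -s nvmf -i 0 > nvmf_trace_snapshot.txt
  # Or keep the raw shared-memory trace file for offline analysis/debug.
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0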
00:35:45.278 [2024-07-14 10:44:29.994829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:35:45.278 [2024-07-14 10:44:29.994938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:45.278 [2024-07-14 10:44:29.994939] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:35:45.278 [2024-07-14 10:44:29.998019] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.278 [2024-07-14 10:44:29.998491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.278 [2024-07-14 10:44:29.998511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.278 [2024-07-14 10:44:29.998520] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.278 [2024-07-14 10:44:29.998699] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.278 [2024-07-14 10:44:29.998881] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.278 [2024-07-14 10:44:29.998892] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.278 [2024-07-14 10:44:29.998901] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.278 [2024-07-14 10:44:30.001740] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.278 [2024-07-14 10:44:30.011392] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.278 [2024-07-14 10:44:30.012140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.278 [2024-07-14 10:44:30.012169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.278 [2024-07-14 10:44:30.012180] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.278 [2024-07-14 10:44:30.012454] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.278 [2024-07-14 10:44:30.012847] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.278 [2024-07-14 10:44:30.012878] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.278 [2024-07-14 10:44:30.012915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.278 [2024-07-14 10:44:30.016085] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:45.278 [2024-07-14 10:44:30.024493] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.278 [2024-07-14 10:44:30.024882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.278 [2024-07-14 10:44:30.024908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.278 [2024-07-14 10:44:30.024918] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.278 [2024-07-14 10:44:30.025100] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.278 [2024-07-14 10:44:30.025286] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.278 [2024-07-14 10:44:30.025297] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.278 [2024-07-14 10:44:30.025304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.278 [2024-07-14 10:44:30.028133] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.278 [2024-07-14 10:44:30.037657] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.278 [2024-07-14 10:44:30.038107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.278 [2024-07-14 10:44:30.038128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.278 [2024-07-14 10:44:30.038137] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.278 [2024-07-14 10:44:30.038324] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.278 [2024-07-14 10:44:30.038503] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.278 [2024-07-14 10:44:30.038513] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.278 [2024-07-14 10:44:30.038521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.278 [2024-07-14 10:44:30.041353] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:45.279 [2024-07-14 10:44:30.050705] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.279 [2024-07-14 10:44:30.051172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.279 [2024-07-14 10:44:30.051193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.279 [2024-07-14 10:44:30.051202] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.279 [2024-07-14 10:44:30.051389] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.279 [2024-07-14 10:44:30.051570] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.279 [2024-07-14 10:44:30.051580] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.279 [2024-07-14 10:44:30.051587] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.279 [2024-07-14 10:44:30.054417] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.279 [2024-07-14 10:44:30.063768] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.279 [2024-07-14 10:44:30.064127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.279 [2024-07-14 10:44:30.064145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.279 [2024-07-14 10:44:30.064154] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.279 [2024-07-14 10:44:30.064338] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.279 [2024-07-14 10:44:30.064523] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.279 [2024-07-14 10:44:30.064534] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.279 [2024-07-14 10:44:30.064541] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.279 [2024-07-14 10:44:30.067370] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:45.279 [2024-07-14 10:44:30.077512] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.279 [2024-07-14 10:44:30.077902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.279 [2024-07-14 10:44:30.077921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.279 [2024-07-14 10:44:30.077930] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.279 [2024-07-14 10:44:30.078183] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.279 [2024-07-14 10:44:30.078657] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.279 [2024-07-14 10:44:30.078718] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.279 [2024-07-14 10:44:30.078777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.279 [2024-07-14 10:44:30.082886] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.279 [2024-07-14 10:44:30.090564] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.279 [2024-07-14 10:44:30.090991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.279 [2024-07-14 10:44:30.091009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.279 [2024-07-14 10:44:30.091017] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.279 [2024-07-14 10:44:30.091195] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.279 [2024-07-14 10:44:30.091377] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.279 [2024-07-14 10:44:30.091387] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.279 [2024-07-14 10:44:30.091394] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.279 [2024-07-14 10:44:30.094221] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:45.279 [2024-07-14 10:44:30.103738] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.279 [2024-07-14 10:44:30.104105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.279 [2024-07-14 10:44:30.104123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.279 [2024-07-14 10:44:30.104130] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.279 [2024-07-14 10:44:30.104312] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.279 [2024-07-14 10:44:30.104491] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.279 [2024-07-14 10:44:30.104501] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.279 [2024-07-14 10:44:30.104507] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.279 [2024-07-14 10:44:30.107334] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.279 [2024-07-14 10:44:30.116844] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.279 [2024-07-14 10:44:30.117280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.279 [2024-07-14 10:44:30.117298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.279 [2024-07-14 10:44:30.117306] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.279 [2024-07-14 10:44:30.117483] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.279 [2024-07-14 10:44:30.117662] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.279 [2024-07-14 10:44:30.117671] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.279 [2024-07-14 10:44:30.117677] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.279 [2024-07-14 10:44:30.120505] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:45.279 [2024-07-14 10:44:30.130020] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.279 [2024-07-14 10:44:30.130394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.279 [2024-07-14 10:44:30.130412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.279 [2024-07-14 10:44:30.130419] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.279 [2024-07-14 10:44:30.130597] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.279 [2024-07-14 10:44:30.130775] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.279 [2024-07-14 10:44:30.130785] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.279 [2024-07-14 10:44:30.130792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.279 [2024-07-14 10:44:30.133623] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.279 [2024-07-14 10:44:30.143134] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.279 [2024-07-14 10:44:30.143506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.279 [2024-07-14 10:44:30.143523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.279 [2024-07-14 10:44:30.143531] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.279 [2024-07-14 10:44:30.143708] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.279 [2024-07-14 10:44:30.143887] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.279 [2024-07-14 10:44:30.143896] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.279 [2024-07-14 10:44:30.143903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.279 [2024-07-14 10:44:30.146731] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:45.279 [2024-07-14 10:44:30.156236] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.279 [2024-07-14 10:44:30.156679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.279 [2024-07-14 10:44:30.156696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.279 [2024-07-14 10:44:30.156707] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.279 [2024-07-14 10:44:30.156884] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.279 [2024-07-14 10:44:30.157063] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.279 [2024-07-14 10:44:30.157072] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.279 [2024-07-14 10:44:30.157079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.279 [2024-07-14 10:44:30.159909] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.279 [2024-07-14 10:44:30.169425] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.279 [2024-07-14 10:44:30.169870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.279 [2024-07-14 10:44:30.169886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.279 [2024-07-14 10:44:30.169894] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.279 [2024-07-14 10:44:30.170072] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.279 [2024-07-14 10:44:30.170253] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.279 [2024-07-14 10:44:30.170264] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.279 [2024-07-14 10:44:30.170270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.279 [2024-07-14 10:44:30.173097] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:45.279 [2024-07-14 10:44:30.182611] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.279 [2024-07-14 10:44:30.182964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.279 [2024-07-14 10:44:30.182982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.279 [2024-07-14 10:44:30.182989] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.279 [2024-07-14 10:44:30.183166] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.279 [2024-07-14 10:44:30.183347] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.279 [2024-07-14 10:44:30.183357] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.280 [2024-07-14 10:44:30.183364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.280 [2024-07-14 10:44:30.186191] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.280 [2024-07-14 10:44:30.195716] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.280 [2024-07-14 10:44:30.196155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.280 [2024-07-14 10:44:30.196172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.280 [2024-07-14 10:44:30.196180] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.280 [2024-07-14 10:44:30.196360] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.280 [2024-07-14 10:44:30.196538] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.280 [2024-07-14 10:44:30.196551] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.280 [2024-07-14 10:44:30.196558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.280 [2024-07-14 10:44:30.199386] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:45.280 [2024-07-14 10:44:30.208891] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.280 [2024-07-14 10:44:30.209362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.280 [2024-07-14 10:44:30.209382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.280 [2024-07-14 10:44:30.209389] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.280 [2024-07-14 10:44:30.209567] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.280 [2024-07-14 10:44:30.209746] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.280 [2024-07-14 10:44:30.209756] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.280 [2024-07-14 10:44:30.209763] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.280 [2024-07-14 10:44:30.212590] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.280 [2024-07-14 10:44:30.221947] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.280 [2024-07-14 10:44:30.222378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.280 [2024-07-14 10:44:30.222396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.280 [2024-07-14 10:44:30.222404] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.280 [2024-07-14 10:44:30.222583] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.280 [2024-07-14 10:44:30.222765] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.280 [2024-07-14 10:44:30.222776] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.280 [2024-07-14 10:44:30.222784] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.280 [2024-07-14 10:44:30.225611] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:45.280 [2024-07-14 10:44:30.235120] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.280 [2024-07-14 10:44:30.235568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.280 [2024-07-14 10:44:30.235585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.280 [2024-07-14 10:44:30.235593] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.280 [2024-07-14 10:44:30.235770] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.280 [2024-07-14 10:44:30.235949] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.280 [2024-07-14 10:44:30.235959] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.280 [2024-07-14 10:44:30.235966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.280 [2024-07-14 10:44:30.238795] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.280 [2024-07-14 10:44:30.248185] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.280 [2024-07-14 10:44:30.248614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.280 [2024-07-14 10:44:30.248634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.280 [2024-07-14 10:44:30.248641] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.280 [2024-07-14 10:44:30.248818] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.280 [2024-07-14 10:44:30.248997] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.280 [2024-07-14 10:44:30.249007] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.280 [2024-07-14 10:44:30.249013] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.280 [2024-07-14 10:44:30.252051] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:45.541 [2024-07-14 10:44:30.261244] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.541 [2024-07-14 10:44:30.261594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.541 [2024-07-14 10:44:30.261612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.541 [2024-07-14 10:44:30.261620] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.541 [2024-07-14 10:44:30.261798] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.541 [2024-07-14 10:44:30.261977] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.541 [2024-07-14 10:44:30.261987] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.541 [2024-07-14 10:44:30.261995] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.541 [2024-07-14 10:44:30.264831] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.541 [2024-07-14 10:44:30.274370] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.541 [2024-07-14 10:44:30.274748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.541 [2024-07-14 10:44:30.274765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.541 [2024-07-14 10:44:30.274773] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.541 [2024-07-14 10:44:30.274951] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.541 [2024-07-14 10:44:30.275130] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.541 [2024-07-14 10:44:30.275141] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.541 [2024-07-14 10:44:30.275148] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.541 [2024-07-14 10:44:30.277977] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:45.541 [2024-07-14 10:44:30.287500] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.541 [2024-07-14 10:44:30.287887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.541 [2024-07-14 10:44:30.287904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.541 [2024-07-14 10:44:30.287915] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.541 [2024-07-14 10:44:30.288093] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.541 [2024-07-14 10:44:30.288277] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.541 [2024-07-14 10:44:30.288287] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.541 [2024-07-14 10:44:30.288295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.541 [2024-07-14 10:44:30.291123] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.541 [2024-07-14 10:44:30.300643] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.541 [2024-07-14 10:44:30.301090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.541 [2024-07-14 10:44:30.301107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.541 [2024-07-14 10:44:30.301114] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.541 [2024-07-14 10:44:30.301297] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.541 [2024-07-14 10:44:30.301475] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.541 [2024-07-14 10:44:30.301485] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.541 [2024-07-14 10:44:30.301491] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.541 [2024-07-14 10:44:30.304322] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:45.541 [2024-07-14 10:44:30.313712] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.541 [2024-07-14 10:44:30.314132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.541 [2024-07-14 10:44:30.314150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.541 [2024-07-14 10:44:30.314157] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.541 [2024-07-14 10:44:30.314339] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.541 [2024-07-14 10:44:30.314516] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.541 [2024-07-14 10:44:30.314526] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.541 [2024-07-14 10:44:30.314533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.541 [2024-07-14 10:44:30.317364] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.541 [2024-07-14 10:44:30.326887] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.541 [2024-07-14 10:44:30.327357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.541 [2024-07-14 10:44:30.327375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.541 [2024-07-14 10:44:30.327383] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.541 [2024-07-14 10:44:30.327560] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.541 [2024-07-14 10:44:30.327739] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.541 [2024-07-14 10:44:30.327749] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.541 [2024-07-14 10:44:30.327759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.541 [2024-07-14 10:44:30.330587] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:45.541 [2024-07-14 10:44:30.339940] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.541 [2024-07-14 10:44:30.340366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.541 [2024-07-14 10:44:30.340384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.541 [2024-07-14 10:44:30.340391] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.541 [2024-07-14 10:44:30.340568] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.541 [2024-07-14 10:44:30.340745] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.541 [2024-07-14 10:44:30.340755] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.541 [2024-07-14 10:44:30.340761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.542 [2024-07-14 10:44:30.343592] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.542 [2024-07-14 10:44:30.353112] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.542 [2024-07-14 10:44:30.353543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.542 [2024-07-14 10:44:30.353560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.542 [2024-07-14 10:44:30.353567] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.542 [2024-07-14 10:44:30.353743] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.542 [2024-07-14 10:44:30.353922] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.542 [2024-07-14 10:44:30.353932] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.542 [2024-07-14 10:44:30.353938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.542 [2024-07-14 10:44:30.356769] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:45.542 [2024-07-14 10:44:30.366296] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.542 [2024-07-14 10:44:30.366590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.542 [2024-07-14 10:44:30.366607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.542 [2024-07-14 10:44:30.366614] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.542 [2024-07-14 10:44:30.366792] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.542 [2024-07-14 10:44:30.366971] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.542 [2024-07-14 10:44:30.366981] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.542 [2024-07-14 10:44:30.366987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.542 [2024-07-14 10:44:30.369822] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.542 [2024-07-14 10:44:30.379341] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.542 [2024-07-14 10:44:30.379705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.542 [2024-07-14 10:44:30.379722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.542 [2024-07-14 10:44:30.379730] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.542 [2024-07-14 10:44:30.379907] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.542 [2024-07-14 10:44:30.380085] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.542 [2024-07-14 10:44:30.380096] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.542 [2024-07-14 10:44:30.380104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.542 [2024-07-14 10:44:30.382938] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:45.542 [2024-07-14 10:44:30.392458] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.542 [2024-07-14 10:44:30.392804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.542 [2024-07-14 10:44:30.392821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.542 [2024-07-14 10:44:30.392828] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.542 [2024-07-14 10:44:30.393005] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.542 [2024-07-14 10:44:30.393184] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.542 [2024-07-14 10:44:30.393194] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.542 [2024-07-14 10:44:30.393201] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.542 [2024-07-14 10:44:30.396032] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.542 [2024-07-14 10:44:30.405549] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.542 [2024-07-14 10:44:30.405847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.542 [2024-07-14 10:44:30.405864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.542 [2024-07-14 10:44:30.405871] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.542 [2024-07-14 10:44:30.406048] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.542 [2024-07-14 10:44:30.406232] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.542 [2024-07-14 10:44:30.406242] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.542 [2024-07-14 10:44:30.406248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.542 [2024-07-14 10:44:30.409075] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:45.542 [2024-07-14 10:44:30.418600] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.542 [2024-07-14 10:44:30.419003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.542 [2024-07-14 10:44:30.419020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.542 [2024-07-14 10:44:30.419028] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.542 [2024-07-14 10:44:30.419209] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.542 [2024-07-14 10:44:30.419394] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.542 [2024-07-14 10:44:30.419404] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.542 [2024-07-14 10:44:30.419410] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.542 [2024-07-14 10:44:30.422241] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.542 [2024-07-14 10:44:30.431764] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.542 [2024-07-14 10:44:30.432181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.542 [2024-07-14 10:44:30.432199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.542 [2024-07-14 10:44:30.432207] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.542 [2024-07-14 10:44:30.432392] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.542 [2024-07-14 10:44:30.432570] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.542 [2024-07-14 10:44:30.432580] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.542 [2024-07-14 10:44:30.432586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.542 [2024-07-14 10:44:30.435417] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:45.542 [2024-07-14 10:44:30.444927] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.542 [2024-07-14 10:44:30.445345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.542 [2024-07-14 10:44:30.445363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.542 [2024-07-14 10:44:30.445371] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.542 [2024-07-14 10:44:30.445548] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.542 [2024-07-14 10:44:30.445725] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.542 [2024-07-14 10:44:30.445735] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.542 [2024-07-14 10:44:30.445741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.542 [2024-07-14 10:44:30.448573] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.542 [2024-07-14 10:44:30.458097] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.542 [2024-07-14 10:44:30.458462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.542 [2024-07-14 10:44:30.458480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.542 [2024-07-14 10:44:30.458487] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.542 [2024-07-14 10:44:30.458665] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.542 [2024-07-14 10:44:30.458844] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.542 [2024-07-14 10:44:30.458853] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.542 [2024-07-14 10:44:30.458864] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.542 [2024-07-14 10:44:30.461694] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:45.542 [2024-07-14 10:44:30.471211] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.542 [2024-07-14 10:44:30.471558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.542 [2024-07-14 10:44:30.471574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.542 [2024-07-14 10:44:30.471582] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.542 [2024-07-14 10:44:30.471759] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.542 [2024-07-14 10:44:30.471938] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.542 [2024-07-14 10:44:30.471948] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.542 [2024-07-14 10:44:30.471955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.542 [2024-07-14 10:44:30.474783] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.542 [2024-07-14 10:44:30.484297] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.542 [2024-07-14 10:44:30.484688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.542 [2024-07-14 10:44:30.484705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.542 [2024-07-14 10:44:30.484712] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.542 [2024-07-14 10:44:30.484888] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.542 [2024-07-14 10:44:30.485067] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.542 [2024-07-14 10:44:30.485076] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.543 [2024-07-14 10:44:30.485083] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.543 [2024-07-14 10:44:30.487916] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:45.543 [2024-07-14 10:44:30.497431] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.543 [2024-07-14 10:44:30.497726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.543 [2024-07-14 10:44:30.497743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.543 [2024-07-14 10:44:30.497750] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.543 [2024-07-14 10:44:30.497927] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.543 [2024-07-14 10:44:30.498106] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.543 [2024-07-14 10:44:30.498116] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.543 [2024-07-14 10:44:30.498122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.543 [2024-07-14 10:44:30.500949] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.543 [2024-07-14 10:44:30.510467] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.543 [2024-07-14 10:44:30.510931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.543 [2024-07-14 10:44:30.510953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.543 [2024-07-14 10:44:30.510960] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.543 [2024-07-14 10:44:30.511137] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.543 [2024-07-14 10:44:30.511321] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.543 [2024-07-14 10:44:30.511332] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.543 [2024-07-14 10:44:30.511338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.543 [2024-07-14 10:44:30.514166] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:45.804 [2024-07-14 10:44:30.523531] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.804 [2024-07-14 10:44:30.523981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.804 [2024-07-14 10:44:30.523998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.804 [2024-07-14 10:44:30.524005] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.804 [2024-07-14 10:44:30.524183] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.804 [2024-07-14 10:44:30.524369] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.804 [2024-07-14 10:44:30.524379] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.804 [2024-07-14 10:44:30.524385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.804 [2024-07-14 10:44:30.527207] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.804 [2024-07-14 10:44:30.536728] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.804 [2024-07-14 10:44:30.537076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.804 [2024-07-14 10:44:30.537094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.804 [2024-07-14 10:44:30.537101] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.804 [2024-07-14 10:44:30.537283] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.804 [2024-07-14 10:44:30.537461] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.804 [2024-07-14 10:44:30.537471] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.804 [2024-07-14 10:44:30.537477] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.804 [2024-07-14 10:44:30.540303] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:45.805 [2024-07-14 10:44:30.549814] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.805 [2024-07-14 10:44:30.550207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.805 [2024-07-14 10:44:30.550230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.805 [2024-07-14 10:44:30.550237] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.805 [2024-07-14 10:44:30.550414] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.805 [2024-07-14 10:44:30.550596] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.805 [2024-07-14 10:44:30.550606] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.805 [2024-07-14 10:44:30.550613] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.805 [2024-07-14 10:44:30.553442] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.805 [2024-07-14 10:44:30.562954] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.805 [2024-07-14 10:44:30.563372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.805 [2024-07-14 10:44:30.563389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.805 [2024-07-14 10:44:30.563396] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.805 [2024-07-14 10:44:30.563573] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.805 [2024-07-14 10:44:30.563750] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.805 [2024-07-14 10:44:30.563760] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.805 [2024-07-14 10:44:30.563766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.805 [2024-07-14 10:44:30.566594] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:45.805 [2024-07-14 10:44:30.576123] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.805 [2024-07-14 10:44:30.576398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.805 [2024-07-14 10:44:30.576415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.805 [2024-07-14 10:44:30.576422] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.805 [2024-07-14 10:44:30.576599] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.805 [2024-07-14 10:44:30.576778] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.805 [2024-07-14 10:44:30.576788] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.805 [2024-07-14 10:44:30.576794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.805 [2024-07-14 10:44:30.579623] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.805 [2024-07-14 10:44:30.589304] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.805 [2024-07-14 10:44:30.589610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.805 [2024-07-14 10:44:30.589627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.805 [2024-07-14 10:44:30.589634] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.805 [2024-07-14 10:44:30.589811] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.805 [2024-07-14 10:44:30.589988] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.805 [2024-07-14 10:44:30.589998] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.805 [2024-07-14 10:44:30.590005] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.805 [2024-07-14 10:44:30.592840] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:45.805 [2024-07-14 10:44:30.602354] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.805 [2024-07-14 10:44:30.602742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.805 [2024-07-14 10:44:30.602759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.805 [2024-07-14 10:44:30.602766] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.805 [2024-07-14 10:44:30.602943] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.805 [2024-07-14 10:44:30.603120] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.805 [2024-07-14 10:44:30.603130] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.805 [2024-07-14 10:44:30.603137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.805 [2024-07-14 10:44:30.605970] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.805 [2024-07-14 10:44:30.615497] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.805 [2024-07-14 10:44:30.615794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.805 [2024-07-14 10:44:30.615811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.805 [2024-07-14 10:44:30.615819] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.805 [2024-07-14 10:44:30.615996] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.805 [2024-07-14 10:44:30.616175] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.805 [2024-07-14 10:44:30.616184] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.805 [2024-07-14 10:44:30.616190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.805 [2024-07-14 10:44:30.619018] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:45.805 [2024-07-14 10:44:30.628544] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.805 [2024-07-14 10:44:30.628907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.805 [2024-07-14 10:44:30.628924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.805 [2024-07-14 10:44:30.628932] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.805 [2024-07-14 10:44:30.629109] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.805 [2024-07-14 10:44:30.629294] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.805 [2024-07-14 10:44:30.629305] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.805 [2024-07-14 10:44:30.629311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.805 [2024-07-14 10:44:30.632132] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.805 [2024-07-14 10:44:30.641646] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.805 [2024-07-14 10:44:30.641990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.805 [2024-07-14 10:44:30.642008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.805 [2024-07-14 10:44:30.642018] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.805 [2024-07-14 10:44:30.642195] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.805 [2024-07-14 10:44:30.642380] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.805 [2024-07-14 10:44:30.642390] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.805 [2024-07-14 10:44:30.642397] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.805 [2024-07-14 10:44:30.645221] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:45.805 [2024-07-14 10:44:30.654732] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.805 [2024-07-14 10:44:30.655092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.805 [2024-07-14 10:44:30.655109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.805 [2024-07-14 10:44:30.655118] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.805 [2024-07-14 10:44:30.655301] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.805 [2024-07-14 10:44:30.655480] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.805 [2024-07-14 10:44:30.655490] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.805 [2024-07-14 10:44:30.655498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.805 [2024-07-14 10:44:30.658325] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.805 [2024-07-14 10:44:30.667837] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.805 [2024-07-14 10:44:30.668184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.805 [2024-07-14 10:44:30.668201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.805 [2024-07-14 10:44:30.668208] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.805 [2024-07-14 10:44:30.668390] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.805 [2024-07-14 10:44:30.668569] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.805 [2024-07-14 10:44:30.668579] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.805 [2024-07-14 10:44:30.668585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.805 [2024-07-14 10:44:30.671419] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:45.805 [2024-07-14 10:44:30.680926] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.805 [2024-07-14 10:44:30.681368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.805 [2024-07-14 10:44:30.681385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.805 [2024-07-14 10:44:30.681393] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.805 [2024-07-14 10:44:30.681570] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.806 [2024-07-14 10:44:30.681747] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.806 [2024-07-14 10:44:30.681759] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.806 [2024-07-14 10:44:30.681766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.806 [2024-07-14 10:44:30.684593] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.806 [2024-07-14 10:44:30.694106] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.806 [2024-07-14 10:44:30.694544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.806 [2024-07-14 10:44:30.694562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.806 [2024-07-14 10:44:30.694570] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.806 [2024-07-14 10:44:30.694747] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.806 [2024-07-14 10:44:30.694926] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.806 [2024-07-14 10:44:30.694936] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.806 [2024-07-14 10:44:30.694942] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.806 [2024-07-14 10:44:30.697771] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:45.806 10:44:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:45.806 10:44:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:35:45.806 10:44:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:45.806 10:44:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:45.806 10:44:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:45.806 [2024-07-14 10:44:30.707284] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.806 [2024-07-14 10:44:30.707703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.806 [2024-07-14 10:44:30.707720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.806 [2024-07-14 10:44:30.707727] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.806 [2024-07-14 10:44:30.707904] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.806 [2024-07-14 10:44:30.708084] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.806 [2024-07-14 10:44:30.708094] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.806 [2024-07-14 10:44:30.708100] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.806 [2024-07-14 10:44:30.710931] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.806 [2024-07-14 10:44:30.720457] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.806 [2024-07-14 10:44:30.720801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.806 [2024-07-14 10:44:30.720818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.806 [2024-07-14 10:44:30.720825] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.806 [2024-07-14 10:44:30.721002] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.806 [2024-07-14 10:44:30.721180] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.806 [2024-07-14 10:44:30.721193] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.806 [2024-07-14 10:44:30.721200] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.806 [2024-07-14 10:44:30.724028] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:45.806 [2024-07-14 10:44:30.733541] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.806 [2024-07-14 10:44:30.733924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.806 [2024-07-14 10:44:30.733942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.806 [2024-07-14 10:44:30.733950] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.806 [2024-07-14 10:44:30.734126] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.806 [2024-07-14 10:44:30.734308] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.806 [2024-07-14 10:44:30.734319] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.806 [2024-07-14 10:44:30.734326] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.806 [2024-07-14 10:44:30.737149] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.806 10:44:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:45.806 10:44:30 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:45.806 10:44:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.806 10:44:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:45.806 [2024-07-14 10:44:30.743686] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:45.806 [2024-07-14 10:44:30.746670] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.806 [2024-07-14 10:44:30.747039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.806 [2024-07-14 10:44:30.747056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.806 [2024-07-14 10:44:30.747064] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.806 [2024-07-14 10:44:30.747244] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.806 [2024-07-14 10:44:30.747422] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.806 [2024-07-14 10:44:30.747432] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.806 [2024-07-14 10:44:30.747438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:35:45.806 10:44:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.806 10:44:30 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:45.806 10:44:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.806 10:44:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:45.806 [2024-07-14 10:44:30.750266] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.806 [2024-07-14 10:44:30.759780] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.806 [2024-07-14 10:44:30.760213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.806 [2024-07-14 10:44:30.760235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.806 [2024-07-14 10:44:30.760246] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.806 [2024-07-14 10:44:30.760424] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.806 [2024-07-14 10:44:30.760603] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.806 [2024-07-14 10:44:30.760614] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.806 [2024-07-14 10:44:30.760620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.806 [2024-07-14 10:44:30.763444] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.806 [2024-07-14 10:44:30.772959] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.806 [2024-07-14 10:44:30.773357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.806 [2024-07-14 10:44:30.773374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:45.806 [2024-07-14 10:44:30.773383] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:45.806 [2024-07-14 10:44:30.773559] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:45.806 [2024-07-14 10:44:30.773738] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.806 [2024-07-14 10:44:30.773747] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.806 [2024-07-14 10:44:30.773754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.806 [2024-07-14 10:44:30.776580] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.066 [2024-07-14 10:44:30.786099] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.066 [2024-07-14 10:44:30.786475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.066 [2024-07-14 10:44:30.786495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:46.066 [2024-07-14 10:44:30.786503] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:46.066 [2024-07-14 10:44:30.786680] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:46.066 [2024-07-14 10:44:30.786860] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.066 [2024-07-14 10:44:30.786870] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.066 [2024-07-14 10:44:30.786877] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.066 [2024-07-14 10:44:30.789710] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.066 Malloc0 00:35:46.066 10:44:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.066 10:44:30 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:46.066 10:44:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.066 10:44:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:46.066 [2024-07-14 10:44:30.799226] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.066 [2024-07-14 10:44:30.799523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.066 [2024-07-14 10:44:30.799540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:46.066 [2024-07-14 10:44:30.799548] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:46.066 [2024-07-14 10:44:30.799729] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:46.066 [2024-07-14 10:44:30.799909] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.066 [2024-07-14 10:44:30.799919] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.066 [2024-07-14 10:44:30.799925] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.066 [2024-07-14 10:44:30.802754] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.066 10:44:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.066 10:44:30 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:46.066 10:44:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.067 10:44:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:46.067 [2024-07-14 10:44:30.812257] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.067 [2024-07-14 10:44:30.812673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.067 [2024-07-14 10:44:30.812690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec72d0 with addr=10.0.0.2, port=4420 00:35:46.067 [2024-07-14 10:44:30.812697] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec72d0 is same with the state(5) to be set 00:35:46.067 [2024-07-14 10:44:30.812875] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec72d0 (9): Bad file descriptor 00:35:46.067 10:44:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.067 [2024-07-14 10:44:30.813053] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.067 [2024-07-14 10:44:30.813064] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.067 [2024-07-14 10:44:30.813070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.067 10:44:30 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:46.067 10:44:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.067 10:44:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:46.067 [2024-07-14 10:44:30.815898] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.067 [2024-07-14 10:44:30.816654] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:46.067 10:44:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.067 10:44:30 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2623257 00:35:46.067 [2024-07-14 10:44:30.825455] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.067 [2024-07-14 10:44:30.859406] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
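For reference, the nvmf target that bdevperf keeps reconnecting to above is built entirely from the rpc_cmd calls visible in the trace. A minimal standalone sketch of the same sequence, assuming a running nvmf_tgt and SPDK's scripts/rpc.py talking to its default /var/tmp/spdk.sock (the rpc.py path and socket are assumptions; the flags simply mirror the rpc_cmd invocations in the trace):

  # create the TCP transport, a 64 MiB / 512 B-block malloc bdev, and a subsystem
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  # expose Malloc0 as a namespace and listen on 10.0.0.2:4420
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420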
00:35:56.045 00:35:56.045 Latency(us) 00:35:56.045 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:56.045 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:56.045 Verification LBA range: start 0x0 length 0x4000 00:35:56.045 Nvme1n1 : 15.00 8088.54 31.60 12655.84 0.00 6150.48 445.22 14816.83 00:35:56.045 =================================================================================================================== 00:35:56.045 Total : 8088.54 31.60 12655.84 0.00 6150.48 445.22 14816.83 00:35:56.045 10:44:39 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:35:56.045 10:44:39 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:56.045 10:44:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:56.045 10:44:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:56.045 10:44:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:56.045 10:44:39 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:35:56.045 10:44:39 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:35:56.045 10:44:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:56.045 10:44:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:35:56.045 10:44:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:56.045 10:44:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:35:56.045 10:44:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:56.045 10:44:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:56.045 rmmod nvme_tcp 00:35:56.045 rmmod nvme_fabrics 00:35:56.045 rmmod nvme_keyring 00:35:56.045 10:44:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:56.045 10:44:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:35:56.045 10:44:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:35:56.045 10:44:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 2624181 ']' 00:35:56.045 10:44:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 2624181 00:35:56.045 10:44:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 2624181 ']' 00:35:56.045 10:44:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 2624181 00:35:56.045 10:44:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:35:56.046 10:44:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:56.046 10:44:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2624181 00:35:56.046 10:44:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:35:56.046 10:44:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:35:56.046 10:44:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2624181' 00:35:56.046 killing process with pid 2624181 00:35:56.046 10:44:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 2624181 00:35:56.046 10:44:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 2624181 00:35:56.046 10:44:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:56.046 10:44:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
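A quick sanity check on the Latency table above: with 4096-byte IOs, 8088.54 IOPS x 4096 B is roughly 31.6 MiB/s, matching the reported MiB/s column; the large Fail/s figure is expected for this run, since the test repeatedly resets the controller while I/O is in flight.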
00:35:56.046 10:44:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:56.046 10:44:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:56.046 10:44:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:56.046 10:44:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:56.046 10:44:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:56.046 10:44:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:56.983 10:44:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:56.983 00:35:56.983 real 0m26.367s 00:35:56.983 user 1m2.819s 00:35:56.983 sys 0m6.443s 00:35:56.983 10:44:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:56.983 10:44:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:56.983 ************************************ 00:35:56.983 END TEST nvmf_bdevperf 00:35:56.983 ************************************ 00:35:56.983 10:44:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:35:56.983 10:44:41 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:35:56.983 10:44:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:35:56.983 10:44:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:56.983 10:44:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:56.983 ************************************ 00:35:56.983 START TEST nvmf_target_disconnect 00:35:56.983 ************************************ 00:35:56.983 10:44:41 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:35:57.242 * Looking for test storage... 
00:35:57.242 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:57.242 10:44:42 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:57.242 10:44:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:35:57.242 10:44:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:57.242 10:44:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:57.242 10:44:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:57.242 10:44:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:57.242 10:44:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:57.242 10:44:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:57.242 10:44:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:57.242 10:44:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:57.242 10:44:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:57.242 10:44:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:57.242 10:44:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:35:57.242 10:44:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:35:57.242 10:44:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:57.242 10:44:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:57.242 10:44:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:57.243 10:44:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:57.243 10:44:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:57.243 10:44:42 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:57.243 10:44:42 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:57.243 10:44:42 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:57.243 10:44:42 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:57.243 10:44:42 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:57.243 10:44:42 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:57.243 10:44:42 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:35:57.243 10:44:42 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:57.243 10:44:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:35:57.243 10:44:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:57.243 10:44:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:57.243 10:44:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:57.243 10:44:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:57.243 10:44:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:57.243 10:44:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:57.243 10:44:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:57.243 10:44:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:57.243 10:44:42 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:35:57.243 10:44:42 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:35:57.243 10:44:42 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:35:57.243 10:44:42 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:35:57.243 10:44:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:57.243 10:44:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:57.243 10:44:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:35:57.243 10:44:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:57.243 10:44:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:57.243 10:44:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:57.243 10:44:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:57.243 10:44:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:57.243 10:44:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:57.243 10:44:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:57.243 10:44:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:35:57.243 10:44:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:02.515 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:02.515 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:36:02.515 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:02.515 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:02.515 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:02.515 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:02.515 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:02.515 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:36:02.515 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:02.515 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:36:02.515 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:36:02.515 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:36:02.515 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:36:02.515 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:36:02.515 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:36:02.515 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:02.515 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:02.515 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:02.515 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:02.515 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:02.515 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:02.515 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:02.515 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:02.515 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:36:02.515 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:02.515 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:02.515 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:02.515 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:02.515 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:02.515 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:02.515 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:02.515 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:02.515 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:02.515 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:36:02.515 Found 0000:86:00.0 (0x8086 - 0x159b) 00:36:02.515 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:02.515 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:02.515 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:02.515 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:02.515 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:02.515 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:02.515 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:36:02.515 Found 0000:86:00.1 (0x8086 - 0x159b) 00:36:02.515 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:02.515 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:02.515 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:02.515 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:02.516 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:02.516 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:02.516 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:02.516 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:02.516 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:02.516 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:02.516 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:02.516 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:02.516 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:02.516 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:02.516 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:02.516 10:44:47 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:36:02.516 Found net devices under 0000:86:00.0: cvl_0_0 00:36:02.516 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:02.516 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:02.516 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:02.516 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:02.516 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:02.516 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:02.516 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:02.516 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:02.516 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:36:02.516 Found net devices under 0000:86:00.1: cvl_0_1 00:36:02.516 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:02.516 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:02.516 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:36:02.516 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:02.516 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:02.516 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:02.516 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:02.516 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:02.516 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:02.516 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:02.516 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:02.516 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:02.516 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:02.516 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:02.516 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:02.516 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:02.516 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:02.516 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:02.516 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:02.781 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:02.781 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:36:02.781 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:02.781 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:02.781 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:02.781 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:02.781 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:02.781 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:02.781 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:36:02.781 00:36:02.781 --- 10.0.0.2 ping statistics --- 00:36:02.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:02.781 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:36:02.781 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:02.781 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:02.781 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:36:02.781 00:36:02.781 --- 10.0.0.1 ping statistics --- 00:36:02.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:02.781 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:36:02.781 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:02.781 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:36:02.781 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:36:02.781 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:02.781 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:02.781 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:02.781 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:02.781 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:02.781 10:44:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:02.781 10:44:47 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:36:02.781 10:44:47 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:02.781 10:44:47 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:02.781 10:44:47 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:03.069 ************************************ 00:36:03.069 START TEST nvmf_target_disconnect_tc1 00:36:03.069 ************************************ 00:36:03.069 10:44:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:36:03.069 10:44:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:03.069 10:44:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:36:03.069 
10:44:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:03.069 10:44:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:03.069 10:44:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:03.069 10:44:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:03.069 10:44:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:03.069 10:44:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:03.069 10:44:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:03.069 10:44:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:03.069 10:44:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:36:03.069 10:44:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:03.069 EAL: No free 2048 kB hugepages reported on node 1 00:36:03.069 [2024-07-14 10:44:47.887691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.069 [2024-07-14 10:44:47.887796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc07ab0 with addr=10.0.0.2, port=4420 00:36:03.069 [2024-07-14 10:44:47.887851] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:36:03.069 [2024-07-14 10:44:47.887885] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:03.069 [2024-07-14 10:44:47.887904] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:36:03.069 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:36:03.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:36:03.069 Initializing NVMe Controllers 00:36:03.069 10:44:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:36:03.069 10:44:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:03.069 10:44:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:03.069 10:44:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:03.069 00:36:03.069 real 0m0.116s 00:36:03.069 user 0m0.045s 00:36:03.069 sys 0m0.071s 
00:36:03.069 10:44:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:03.069 10:44:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:36:03.069 ************************************ 00:36:03.069 END TEST nvmf_target_disconnect_tc1 00:36:03.069 ************************************ 00:36:03.069 10:44:47 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:36:03.069 10:44:47 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:36:03.069 10:44:47 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:03.069 10:44:47 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:03.069 10:44:47 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:03.069 ************************************ 00:36:03.069 START TEST nvmf_target_disconnect_tc2 00:36:03.069 ************************************ 00:36:03.069 10:44:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:36:03.069 10:44:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:36:03.069 10:44:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:36:03.069 10:44:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:03.069 10:44:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:03.069 10:44:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:03.069 10:44:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2629246 00:36:03.069 10:44:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2629246 00:36:03.069 10:44:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:36:03.069 10:44:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2629246 ']' 00:36:03.069 10:44:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:03.069 10:44:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:03.069 10:44:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:03.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:36:03.069 10:44:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:03.069 10:44:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:03.069 [2024-07-14 10:44:48.028698] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:36:03.069 [2024-07-14 10:44:48.028744] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:03.326 EAL: No free 2048 kB hugepages reported on node 1 00:36:03.326 [2024-07-14 10:44:48.102006] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:03.326 [2024-07-14 10:44:48.145411] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:03.326 [2024-07-14 10:44:48.145448] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:03.326 [2024-07-14 10:44:48.145455] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:03.326 [2024-07-14 10:44:48.145462] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:03.326 [2024-07-14 10:44:48.145467] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:03.326 [2024-07-14 10:44:48.145575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:36:03.326 [2024-07-14 10:44:48.145690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:36:03.326 [2024-07-14 10:44:48.145797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:36:03.326 [2024-07-14 10:44:48.145799] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:36:03.891 10:44:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:03.891 10:44:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:36:03.891 10:44:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:03.891 10:44:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:03.891 10:44:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:03.891 10:44:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:03.891 10:44:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:03.891 10:44:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:03.891 10:44:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:04.150 Malloc0 00:36:04.150 10:44:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.150 10:44:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:36:04.150 10:44:48 
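(Note on the core mask: nvmf_tgt is started with -m 0xF0, i.e. binary 1111 0000, which selects CPU cores 4-7 — consistent with the four reactors reported above on cores 4, 5, 6 and 7.)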
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.150 10:44:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:04.150 [2024-07-14 10:44:48.896072] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:04.150 10:44:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.150 10:44:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:04.150 10:44:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.150 10:44:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:04.150 10:44:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.150 10:44:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:04.150 10:44:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.150 10:44:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:04.150 10:44:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.150 10:44:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:04.150 10:44:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.150 10:44:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:04.150 [2024-07-14 10:44:48.925123] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:04.150 10:44:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.150 10:44:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:04.150 10:44:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.150 10:44:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:04.150 10:44:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.150 10:44:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2629386 00:36:04.150 10:44:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:36:04.150 10:44:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:04.150 EAL: No free 2048 kB 
hugepages reported on node 1 00:36:06.061 10:44:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2629246 00:36:06.061 10:44:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:36:06.061 Read completed with error (sct=0, sc=8) 00:36:06.061 starting I/O failed 00:36:06.061 Read completed with error (sct=0, sc=8) 00:36:06.061 starting I/O failed 00:36:06.061 Read completed with error (sct=0, sc=8) 00:36:06.061 starting I/O failed 00:36:06.061 Read completed with error (sct=0, sc=8) 00:36:06.061 starting I/O failed 00:36:06.061 Read completed with error (sct=0, sc=8) 00:36:06.061 starting I/O failed 00:36:06.061 Read completed with error (sct=0, sc=8) 00:36:06.061 starting I/O failed 00:36:06.061 Read completed with error (sct=0, sc=8) 00:36:06.061 starting I/O failed 00:36:06.061 Read completed with error (sct=0, sc=8) 00:36:06.061 starting I/O failed 00:36:06.061 Read completed with error (sct=0, sc=8) 00:36:06.061 starting I/O failed 00:36:06.061 Read completed with error (sct=0, sc=8) 00:36:06.061 starting I/O failed 00:36:06.061 Write completed with error (sct=0, sc=8) 00:36:06.061 starting I/O failed 00:36:06.061 Write completed with error (sct=0, sc=8) 00:36:06.061 starting I/O failed 00:36:06.061 Read completed with error (sct=0, sc=8) 00:36:06.061 starting I/O failed 00:36:06.061 Write completed with error (sct=0, sc=8) 00:36:06.061 starting I/O failed 00:36:06.061 Read completed with error (sct=0, sc=8) 00:36:06.061 starting I/O failed 00:36:06.061 Read completed with error (sct=0, sc=8) 00:36:06.061 starting I/O failed 00:36:06.061 Read completed with error (sct=0, sc=8) 00:36:06.061 starting I/O failed 00:36:06.061 Read completed with error (sct=0, sc=8) 00:36:06.061 starting I/O failed 00:36:06.061 Write completed with error (sct=0, sc=8) 00:36:06.061 starting I/O failed 00:36:06.061 Read completed with error (sct=0, sc=8) 00:36:06.061 starting I/O failed 00:36:06.061 Write completed with error (sct=0, sc=8) 00:36:06.061 starting I/O failed 00:36:06.061 Read completed with error (sct=0, sc=8) 00:36:06.061 starting I/O failed 00:36:06.061 Read completed with error (sct=0, sc=8) 00:36:06.061 starting I/O failed 00:36:06.061 Write completed with error (sct=0, sc=8) 00:36:06.061 starting I/O failed 00:36:06.061 Write completed with error (sct=0, sc=8) 00:36:06.061 starting I/O failed 00:36:06.061 Read completed with error (sct=0, sc=8) 00:36:06.061 starting I/O failed 00:36:06.061 Read completed with error (sct=0, sc=8) 00:36:06.061 starting I/O failed 00:36:06.061 Write completed with error (sct=0, sc=8) 00:36:06.061 starting I/O failed 00:36:06.061 Read completed with error (sct=0, sc=8) 00:36:06.061 starting I/O failed 00:36:06.061 Read completed with error (sct=0, sc=8) 00:36:06.061 starting I/O failed 00:36:06.061 Read completed with error (sct=0, sc=8) 00:36:06.061 starting I/O failed 00:36:06.061 Read completed with error (sct=0, sc=8) 00:36:06.061 starting I/O failed 00:36:06.061 Read completed with error (sct=0, sc=8) 00:36:06.061 starting I/O failed 00:36:06.061 Read completed with error (sct=0, sc=8) 00:36:06.061 starting I/O failed 00:36:06.061 Read completed with error (sct=0, sc=8) 00:36:06.061 starting I/O failed 00:36:06.061 [2024-07-14 10:44:50.952422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.061 Read completed with error (sct=0, sc=8) 00:36:06.062 starting 
I/O failed 00:36:06.062 Read completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Read completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Read completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Read completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Write completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Read completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Read completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Write completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Read completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Read completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Write completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Read completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Write completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Read completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Read completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Write completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Read completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Read completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Write completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Write completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Write completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Write completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Write completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Read completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Read completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Write completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Write completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Write completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 [2024-07-14 10:44:50.952616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:06.062 Read completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Read completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Read completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Read completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Read completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Read completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Read completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Read completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Read completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Read completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Write completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 
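The xtrace at the start of this test case shows the target-side setup that precedes the failure injection: rpc_cmd (the autotest wrapper around SPDK's JSON-RPC interface) creates subsystem nqn.2016-06.io.spdk:cnode1, attaches the Malloc0 namespace, and adds NVMe/TCP listeners on 10.0.0.2:4420 for the subsystem and for discovery, after which build/examples/reconnect is started as the host-side I/O generator. Condensed into a stand-alone sketch, assuming a running nvmf_tgt whose TCP transport and Malloc0 bdev were already created earlier in the script, and using scripts/rpc.py directly in place of rpc_cmd:

    # Sketch only; the rpc.py path and the pre-existing transport/bdev are assumptions.
    RPC=./scripts/rpc.py
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # Host side, as launched above: queue depth 32, 4 KiB I/O, mixed random read/write,
    # 10 s run, core mask 0xF (reading the flags the way SPDK's perf-style examples do).
    ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
    reconnectpid=$!

Two seconds later the script kills the target process (the kill -9 2629246 traced above), which is what produces the completion errors that follow.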
00:36:06.062 Read completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Read completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Read completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Write completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Read completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Write completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Read completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Read completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Read completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Read completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Write completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Read completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Read completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Read completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Write completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Read completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Read completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Read completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Read completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Read completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Write completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 [2024-07-14 10:44:50.952811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.062 Read completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Read completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Read completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Read completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Read completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Read completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Read completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Read completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Read completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Read completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Write completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Write completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Write completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Write completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Write completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Write completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Write completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Read completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Read 
completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Write completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Write completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Write completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Write completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Read completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Write completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Read completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Write completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Read completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Read completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Write completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Read completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 Write completed with error (sct=0, sc=8) 00:36:06.062 starting I/O failed 00:36:06.062 [2024-07-14 10:44:50.952999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.062 [2024-07-14 10:44:50.953125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.062 [2024-07-14 10:44:50.953140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.062 qpair failed and we were unable to recover it. 00:36:06.062 [2024-07-14 10:44:50.953261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.062 [2024-07-14 10:44:50.953273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.062 qpair failed and we were unable to recover it. 00:36:06.062 [2024-07-14 10:44:50.953375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.062 [2024-07-14 10:44:50.953385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.062 qpair failed and we were unable to recover it. 00:36:06.062 [2024-07-14 10:44:50.953494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.062 [2024-07-14 10:44:50.953505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.062 qpair failed and we were unable to recover it. 00:36:06.062 [2024-07-14 10:44:50.953593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.062 [2024-07-14 10:44:50.953604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.062 qpair failed and we were unable to recover it. 00:36:06.062 [2024-07-14 10:44:50.953815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.062 [2024-07-14 10:44:50.953845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.062 qpair failed and we were unable to recover it. 
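What follows is the expected fallout of that kill -9: every command still outstanding on the admin and I/O qpairs is completed in error (the repeated sct=0, sc=8 entries above), each qpair is torn down with "CQ transport error -6" (ENXIO, "No such device or address"), and every subsequent reconnect attempt then fails in connect() with errno = 111, which on Linux is ECONNREFUSED, because nothing is listening on 10.0.0.2:4420 while the target is down. The refused connection is easy to reproduce by hand with bash's built-in /dev/tcp redirection (a sketch, not part of the test):

    # Probe the listener the same way the failing reconnects do; 'timeout' is GNU coreutils.
    if ! timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
        echo "10.0.0.2:4420 refused the connection (cf. the errno = 111 messages)"
    fi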
00:36:06.062 [2024-07-14 10:44:50.954047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.062 [2024-07-14 10:44:50.954079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.062 qpair failed and we were unable to recover it. 00:36:06.062 [2024-07-14 10:44:50.954191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.062 [2024-07-14 10:44:50.954222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.062 qpair failed and we were unable to recover it. 00:36:06.062 [2024-07-14 10:44:50.954427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.063 [2024-07-14 10:44:50.954439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.063 qpair failed and we were unable to recover it. 00:36:06.063 [2024-07-14 10:44:50.955209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.063 [2024-07-14 10:44:50.955242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.063 qpair failed and we were unable to recover it. 00:36:06.063 [2024-07-14 10:44:50.955334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.063 [2024-07-14 10:44:50.955346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.063 qpair failed and we were unable to recover it. 00:36:06.063 [2024-07-14 10:44:50.955436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.063 [2024-07-14 10:44:50.955447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.063 qpair failed and we were unable to recover it. 00:36:06.063 [2024-07-14 10:44:50.955542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.063 [2024-07-14 10:44:50.955552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.063 qpair failed and we were unable to recover it. 00:36:06.063 [2024-07-14 10:44:50.955632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.063 [2024-07-14 10:44:50.955643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.063 qpair failed and we were unable to recover it. 00:36:06.063 [2024-07-14 10:44:50.955780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.063 [2024-07-14 10:44:50.955809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.063 qpair failed and we were unable to recover it. 00:36:06.063 [2024-07-14 10:44:50.955951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.063 [2024-07-14 10:44:50.955982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.063 qpair failed and we were unable to recover it. 
00:36:06.063 [2024-07-14 10:44:50.956196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.063 [2024-07-14 10:44:50.956241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.063 qpair failed and we were unable to recover it. 00:36:06.063 [2024-07-14 10:44:50.956496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.063 [2024-07-14 10:44:50.956508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.063 qpair failed and we were unable to recover it. 00:36:06.063 [2024-07-14 10:44:50.956603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.063 [2024-07-14 10:44:50.956632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.063 qpair failed and we were unable to recover it. 00:36:06.063 [2024-07-14 10:44:50.956768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.063 [2024-07-14 10:44:50.956799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.063 qpair failed and we were unable to recover it. 00:36:06.063 [2024-07-14 10:44:50.956907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.063 [2024-07-14 10:44:50.956938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.063 qpair failed and we were unable to recover it. 00:36:06.063 [2024-07-14 10:44:50.957072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.063 [2024-07-14 10:44:50.957102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.063 qpair failed and we were unable to recover it. 00:36:06.063 [2024-07-14 10:44:50.957240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.063 [2024-07-14 10:44:50.957266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.063 qpair failed and we were unable to recover it. 00:36:06.063 [2024-07-14 10:44:50.957386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.063 [2024-07-14 10:44:50.957411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.063 qpair failed and we were unable to recover it. 00:36:06.063 [2024-07-14 10:44:50.957548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.063 [2024-07-14 10:44:50.957573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.063 qpair failed and we were unable to recover it. 00:36:06.063 [2024-07-14 10:44:50.957829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.063 [2024-07-14 10:44:50.957854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.063 qpair failed and we were unable to recover it. 
00:36:06.063 [2024-07-14 10:44:50.958032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.063 [2024-07-14 10:44:50.958057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.063 qpair failed and we were unable to recover it. 00:36:06.063 [2024-07-14 10:44:50.958169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.063 [2024-07-14 10:44:50.958194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.063 qpair failed and we were unable to recover it. 00:36:06.063 [2024-07-14 10:44:50.958322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.063 [2024-07-14 10:44:50.958349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.063 qpair failed and we were unable to recover it. 00:36:06.063 [2024-07-14 10:44:50.958482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.063 [2024-07-14 10:44:50.958511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.063 qpair failed and we were unable to recover it. 00:36:06.063 [2024-07-14 10:44:50.958684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.063 [2024-07-14 10:44:50.958709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.063 qpair failed and we were unable to recover it. 00:36:06.063 [2024-07-14 10:44:50.958909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.063 [2024-07-14 10:44:50.958933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.063 qpair failed and we were unable to recover it. 00:36:06.063 [2024-07-14 10:44:50.959108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.063 [2024-07-14 10:44:50.959133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.063 qpair failed and we were unable to recover it. 00:36:06.063 [2024-07-14 10:44:50.959306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.063 [2024-07-14 10:44:50.959333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.063 qpair failed and we were unable to recover it. 00:36:06.063 [2024-07-14 10:44:50.959444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.063 [2024-07-14 10:44:50.959469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.063 qpair failed and we were unable to recover it. 00:36:06.063 [2024-07-14 10:44:50.959597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.063 [2024-07-14 10:44:50.959622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.063 qpair failed and we were unable to recover it. 
00:36:06.063 [2024-07-14 10:44:50.959735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.063 [2024-07-14 10:44:50.959761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.063 qpair failed and we were unable to recover it. 00:36:06.063 [2024-07-14 10:44:50.959871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.063 [2024-07-14 10:44:50.959897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.063 qpair failed and we were unable to recover it. 00:36:06.063 [2024-07-14 10:44:50.960008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.063 [2024-07-14 10:44:50.960032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.063 qpair failed and we were unable to recover it. 00:36:06.063 [2024-07-14 10:44:50.960137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.063 [2024-07-14 10:44:50.960162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.063 qpair failed and we were unable to recover it. 00:36:06.063 [2024-07-14 10:44:50.960331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.063 [2024-07-14 10:44:50.960357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.063 qpair failed and we were unable to recover it. 00:36:06.063 [2024-07-14 10:44:50.960459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.063 [2024-07-14 10:44:50.960482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.063 qpair failed and we were unable to recover it. 00:36:06.063 [2024-07-14 10:44:50.960647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.063 [2024-07-14 10:44:50.960672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.063 qpair failed and we were unable to recover it. 00:36:06.063 [2024-07-14 10:44:50.960786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.063 [2024-07-14 10:44:50.960811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.063 qpair failed and we were unable to recover it. 00:36:06.063 [2024-07-14 10:44:50.960909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.063 [2024-07-14 10:44:50.960934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.063 qpair failed and we were unable to recover it. 00:36:06.063 [2024-07-14 10:44:50.961112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.063 [2024-07-14 10:44:50.961137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.063 qpair failed and we were unable to recover it. 
00:36:06.063 [2024-07-14 10:44:50.961257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.063 [2024-07-14 10:44:50.961284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.063 qpair failed and we were unable to recover it. 00:36:06.063 [2024-07-14 10:44:50.961395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.063 [2024-07-14 10:44:50.961420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.063 qpair failed and we were unable to recover it. 00:36:06.063 [2024-07-14 10:44:50.961582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.063 [2024-07-14 10:44:50.961608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.064 qpair failed and we were unable to recover it. 00:36:06.064 [2024-07-14 10:44:50.961717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.064 [2024-07-14 10:44:50.961742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.064 qpair failed and we were unable to recover it. 00:36:06.064 [2024-07-14 10:44:50.961851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.064 [2024-07-14 10:44:50.961876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.064 qpair failed and we were unable to recover it. 00:36:06.064 [2024-07-14 10:44:50.961973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.064 [2024-07-14 10:44:50.961998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.064 qpair failed and we were unable to recover it. 00:36:06.064 [2024-07-14 10:44:50.962106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.064 [2024-07-14 10:44:50.962131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.064 qpair failed and we were unable to recover it. 00:36:06.064 [2024-07-14 10:44:50.962368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.064 [2024-07-14 10:44:50.962393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.064 qpair failed and we were unable to recover it. 00:36:06.064 [2024-07-14 10:44:50.962491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.064 [2024-07-14 10:44:50.962516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.064 qpair failed and we were unable to recover it. 00:36:06.064 [2024-07-14 10:44:50.962676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.064 [2024-07-14 10:44:50.962701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.064 qpair failed and we were unable to recover it. 
00:36:06.064 [2024-07-14 10:44:50.962876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.064 [2024-07-14 10:44:50.962899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.064 qpair failed and we were unable to recover it. 00:36:06.064 [2024-07-14 10:44:50.963014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.064 [2024-07-14 10:44:50.963039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.064 qpair failed and we were unable to recover it. 00:36:06.064 [2024-07-14 10:44:50.963141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.064 [2024-07-14 10:44:50.963165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.064 qpair failed and we were unable to recover it. 00:36:06.064 [2024-07-14 10:44:50.963336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.064 [2024-07-14 10:44:50.963362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.064 qpair failed and we were unable to recover it. 00:36:06.064 [2024-07-14 10:44:50.963530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.064 [2024-07-14 10:44:50.963555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.064 qpair failed and we were unable to recover it. 00:36:06.064 [2024-07-14 10:44:50.963672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.064 [2024-07-14 10:44:50.963696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.064 qpair failed and we were unable to recover it. 00:36:06.064 [2024-07-14 10:44:50.963791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.064 [2024-07-14 10:44:50.963815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.064 qpair failed and we were unable to recover it. 00:36:06.064 [2024-07-14 10:44:50.963922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.064 [2024-07-14 10:44:50.963946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.064 qpair failed and we were unable to recover it. 00:36:06.064 [2024-07-14 10:44:50.964064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.064 [2024-07-14 10:44:50.964089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.064 qpair failed and we were unable to recover it. 00:36:06.064 [2024-07-14 10:44:50.964258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.064 [2024-07-14 10:44:50.964284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.064 qpair failed and we were unable to recover it. 
00:36:06.064 [2024-07-14 10:44:50.964512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.064 [2024-07-14 10:44:50.964536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.064 qpair failed and we were unable to recover it. 00:36:06.064 [2024-07-14 10:44:50.964630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.064 [2024-07-14 10:44:50.964655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.064 qpair failed and we were unable to recover it. 00:36:06.064 [2024-07-14 10:44:50.964833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.064 [2024-07-14 10:44:50.964858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.064 qpair failed and we were unable to recover it. 00:36:06.064 [2024-07-14 10:44:50.964951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.064 [2024-07-14 10:44:50.964979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.064 qpair failed and we were unable to recover it. 00:36:06.064 [2024-07-14 10:44:50.965165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.064 [2024-07-14 10:44:50.965191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.064 qpair failed and we were unable to recover it. 00:36:06.064 [2024-07-14 10:44:50.965300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.064 [2024-07-14 10:44:50.965324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.064 qpair failed and we were unable to recover it. 00:36:06.064 [2024-07-14 10:44:50.965505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.064 [2024-07-14 10:44:50.965530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.064 qpair failed and we were unable to recover it. 00:36:06.064 [2024-07-14 10:44:50.965628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.064 [2024-07-14 10:44:50.965651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.064 qpair failed and we were unable to recover it. 00:36:06.064 [2024-07-14 10:44:50.965761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.064 [2024-07-14 10:44:50.965785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.064 qpair failed and we were unable to recover it. 00:36:06.064 [2024-07-14 10:44:50.965946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.064 [2024-07-14 10:44:50.965971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.064 qpair failed and we were unable to recover it. 
00:36:06.064 [2024-07-14 10:44:50.966097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.064 [2024-07-14 10:44:50.966122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.064 qpair failed and we were unable to recover it. 00:36:06.064 [2024-07-14 10:44:50.966220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.064 [2024-07-14 10:44:50.966256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.064 qpair failed and we were unable to recover it. 00:36:06.064 [2024-07-14 10:44:50.966379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.064 [2024-07-14 10:44:50.966404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.064 qpair failed and we were unable to recover it. 00:36:06.064 [2024-07-14 10:44:50.966496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.064 [2024-07-14 10:44:50.966520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.064 qpair failed and we were unable to recover it. 00:36:06.064 [2024-07-14 10:44:50.966679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.064 [2024-07-14 10:44:50.966717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.064 qpair failed and we were unable to recover it. 00:36:06.064 [2024-07-14 10:44:50.966822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.064 [2024-07-14 10:44:50.966847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.064 qpair failed and we were unable to recover it. 00:36:06.064 [2024-07-14 10:44:50.966950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.064 [2024-07-14 10:44:50.966977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.064 qpair failed and we were unable to recover it. 00:36:06.064 [2024-07-14 10:44:50.967223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.064 [2024-07-14 10:44:50.967259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.064 qpair failed and we were unable to recover it. 00:36:06.064 [2024-07-14 10:44:50.967508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.064 [2024-07-14 10:44:50.967535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.064 qpair failed and we were unable to recover it. 00:36:06.064 [2024-07-14 10:44:50.967704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.064 [2024-07-14 10:44:50.967731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.064 qpair failed and we were unable to recover it. 
00:36:06.064 [2024-07-14 10:44:50.967898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.064 [2024-07-14 10:44:50.967924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.064 qpair failed and we were unable to recover it. 00:36:06.064 [2024-07-14 10:44:50.968022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.064 [2024-07-14 10:44:50.968048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.064 qpair failed and we were unable to recover it. 00:36:06.064 [2024-07-14 10:44:50.968236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.064 [2024-07-14 10:44:50.968264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.064 qpair failed and we were unable to recover it. 00:36:06.064 [2024-07-14 10:44:50.968381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.065 [2024-07-14 10:44:50.968408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.065 qpair failed and we were unable to recover it. 00:36:06.065 [2024-07-14 10:44:50.968525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.065 [2024-07-14 10:44:50.968551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.065 qpair failed and we were unable to recover it. 00:36:06.065 [2024-07-14 10:44:50.968766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.065 [2024-07-14 10:44:50.968793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.065 qpair failed and we were unable to recover it. 00:36:06.065 [2024-07-14 10:44:50.969054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.065 [2024-07-14 10:44:50.969081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.065 qpair failed and we were unable to recover it. 00:36:06.065 [2024-07-14 10:44:50.969290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.065 [2024-07-14 10:44:50.969318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.065 qpair failed and we were unable to recover it. 00:36:06.065 [2024-07-14 10:44:50.969482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.065 [2024-07-14 10:44:50.969508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.065 qpair failed and we were unable to recover it. 00:36:06.065 [2024-07-14 10:44:50.969688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.065 [2024-07-14 10:44:50.969714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.065 qpair failed and we were unable to recover it. 
00:36:06.065 [2024-07-14 10:44:50.969972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.065 [2024-07-14 10:44:50.969999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.065 qpair failed and we were unable to recover it. 00:36:06.065 [2024-07-14 10:44:50.970181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.065 [2024-07-14 10:44:50.970208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.065 qpair failed and we were unable to recover it. 00:36:06.065 [2024-07-14 10:44:50.970344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.065 [2024-07-14 10:44:50.970372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.065 qpair failed and we were unable to recover it. 00:36:06.065 [2024-07-14 10:44:50.970475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.065 [2024-07-14 10:44:50.970501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.065 qpair failed and we were unable to recover it. 00:36:06.065 [2024-07-14 10:44:50.970622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.065 [2024-07-14 10:44:50.970649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.065 qpair failed and we were unable to recover it. 00:36:06.065 [2024-07-14 10:44:50.970816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.065 [2024-07-14 10:44:50.970843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.065 qpair failed and we were unable to recover it. 00:36:06.065 [2024-07-14 10:44:50.970961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.065 [2024-07-14 10:44:50.970987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.065 qpair failed and we were unable to recover it. 00:36:06.065 [2024-07-14 10:44:50.971093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.065 [2024-07-14 10:44:50.971120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.065 qpair failed and we were unable to recover it. 00:36:06.065 [2024-07-14 10:44:50.971317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.065 [2024-07-14 10:44:50.971344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.065 qpair failed and we were unable to recover it. 00:36:06.065 [2024-07-14 10:44:50.971448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.065 [2024-07-14 10:44:50.971474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.065 qpair failed and we were unable to recover it. 
00:36:06.065 [2024-07-14 10:44:50.971663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.065 [2024-07-14 10:44:50.971690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.065 qpair failed and we were unable to recover it. 00:36:06.065 [2024-07-14 10:44:50.971799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.065 [2024-07-14 10:44:50.971825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.065 qpair failed and we were unable to recover it. 00:36:06.065 [2024-07-14 10:44:50.971926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.065 [2024-07-14 10:44:50.971953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.065 qpair failed and we were unable to recover it. 00:36:06.065 [2024-07-14 10:44:50.972061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.065 [2024-07-14 10:44:50.972091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.065 qpair failed and we were unable to recover it. 00:36:06.065 [2024-07-14 10:44:50.972192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.065 [2024-07-14 10:44:50.972219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.065 qpair failed and we were unable to recover it. 00:36:06.065 [2024-07-14 10:44:50.972337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.065 [2024-07-14 10:44:50.972363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.065 qpair failed and we were unable to recover it. 00:36:06.065 [2024-07-14 10:44:50.972624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.065 [2024-07-14 10:44:50.972651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.065 qpair failed and we were unable to recover it. 00:36:06.065 [2024-07-14 10:44:50.972886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.065 [2024-07-14 10:44:50.972912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.065 qpair failed and we were unable to recover it. 00:36:06.065 [2024-07-14 10:44:50.973010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.065 [2024-07-14 10:44:50.973035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.065 qpair failed and we were unable to recover it. 00:36:06.065 [2024-07-14 10:44:50.973151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.065 [2024-07-14 10:44:50.973177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.065 qpair failed and we were unable to recover it. 
00:36:06.065 [2024-07-14 10:44:50.973317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.065 [2024-07-14 10:44:50.973344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.065 qpair failed and we were unable to recover it. 00:36:06.065 [2024-07-14 10:44:50.973455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.065 [2024-07-14 10:44:50.973482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.065 qpair failed and we were unable to recover it. 00:36:06.065 [2024-07-14 10:44:50.973649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.065 [2024-07-14 10:44:50.973674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.065 qpair failed and we were unable to recover it. 00:36:06.065 [2024-07-14 10:44:50.973908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.065 [2024-07-14 10:44:50.973934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.065 qpair failed and we were unable to recover it. 00:36:06.065 [2024-07-14 10:44:50.974129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.065 [2024-07-14 10:44:50.974155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.065 qpair failed and we were unable to recover it. 00:36:06.065 [2024-07-14 10:44:50.974269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.065 [2024-07-14 10:44:50.974297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.065 qpair failed and we were unable to recover it. 00:36:06.065 [2024-07-14 10:44:50.974552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.065 [2024-07-14 10:44:50.974580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.065 qpair failed and we were unable to recover it. 00:36:06.065 [2024-07-14 10:44:50.974753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.065 [2024-07-14 10:44:50.974780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.065 qpair failed and we were unable to recover it. 00:36:06.065 [2024-07-14 10:44:50.974903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.065 [2024-07-14 10:44:50.974929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.065 qpair failed and we were unable to recover it. 00:36:06.065 [2024-07-14 10:44:50.975036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.065 [2024-07-14 10:44:50.975064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.065 qpair failed and we were unable to recover it. 
00:36:06.065 [2024-07-14 10:44:50.975237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.065 [2024-07-14 10:44:50.975265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.065 qpair failed and we were unable to recover it. 00:36:06.065 [2024-07-14 10:44:50.975433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.065 [2024-07-14 10:44:50.975458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.065 qpair failed and we were unable to recover it. 00:36:06.065 [2024-07-14 10:44:50.975624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.065 [2024-07-14 10:44:50.975651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.066 qpair failed and we were unable to recover it. 00:36:06.066 [2024-07-14 10:44:50.975818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.066 [2024-07-14 10:44:50.975845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.066 qpair failed and we were unable to recover it. 00:36:06.066 [2024-07-14 10:44:50.975966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.066 [2024-07-14 10:44:50.975992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.066 qpair failed and we were unable to recover it. 00:36:06.066 [2024-07-14 10:44:50.976123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.066 [2024-07-14 10:44:50.976149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.066 qpair failed and we were unable to recover it. 00:36:06.066 [2024-07-14 10:44:50.976393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.066 [2024-07-14 10:44:50.976420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.066 qpair failed and we were unable to recover it. 00:36:06.066 [2024-07-14 10:44:50.976606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.066 [2024-07-14 10:44:50.976632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.066 qpair failed and we were unable to recover it. 00:36:06.066 [2024-07-14 10:44:50.976764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.066 [2024-07-14 10:44:50.976790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.066 qpair failed and we were unable to recover it. 00:36:06.066 [2024-07-14 10:44:50.976922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.066 [2024-07-14 10:44:50.976952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.066 qpair failed and we were unable to recover it. 
00:36:06.066 [2024-07-14 10:44:50.977065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.066 [2024-07-14 10:44:50.977097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.066 qpair failed and we were unable to recover it. 00:36:06.066 [2024-07-14 10:44:50.977289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.066 [2024-07-14 10:44:50.977320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.066 qpair failed and we were unable to recover it. 00:36:06.066 [2024-07-14 10:44:50.977507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.066 [2024-07-14 10:44:50.977538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.066 qpair failed and we were unable to recover it. 00:36:06.066 [2024-07-14 10:44:50.977753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.066 [2024-07-14 10:44:50.977783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.066 qpair failed and we were unable to recover it. 00:36:06.066 [2024-07-14 10:44:50.977992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.066 [2024-07-14 10:44:50.978023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.066 qpair failed and we were unable to recover it. 00:36:06.066 [2024-07-14 10:44:50.978164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.066 [2024-07-14 10:44:50.978196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.066 qpair failed and we were unable to recover it. 00:36:06.066 [2024-07-14 10:44:50.978387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.066 [2024-07-14 10:44:50.978421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.066 qpair failed and we were unable to recover it. 00:36:06.066 [2024-07-14 10:44:50.978598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.066 [2024-07-14 10:44:50.978629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.066 qpair failed and we were unable to recover it. 00:36:06.066 [2024-07-14 10:44:50.978828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.066 [2024-07-14 10:44:50.978859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.066 qpair failed and we were unable to recover it. 00:36:06.066 [2024-07-14 10:44:50.979058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.066 [2024-07-14 10:44:50.979089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.066 qpair failed and we were unable to recover it. 
00:36:06.066 [2024-07-14 10:44:50.979282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.066 [2024-07-14 10:44:50.979315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.066 qpair failed and we were unable to recover it. 00:36:06.066 [2024-07-14 10:44:50.979492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.066 [2024-07-14 10:44:50.979522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.066 qpair failed and we were unable to recover it. 00:36:06.066 [2024-07-14 10:44:50.979698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.066 [2024-07-14 10:44:50.979728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.066 qpair failed and we were unable to recover it. 00:36:06.066 [2024-07-14 10:44:50.979938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.066 [2024-07-14 10:44:50.979974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.066 qpair failed and we were unable to recover it. 00:36:06.066 [2024-07-14 10:44:50.980186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.066 [2024-07-14 10:44:50.980217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.066 qpair failed and we were unable to recover it. 00:36:06.066 [2024-07-14 10:44:50.980335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.066 [2024-07-14 10:44:50.980366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.066 qpair failed and we were unable to recover it. 00:36:06.066 [2024-07-14 10:44:50.980637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.066 [2024-07-14 10:44:50.980668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.066 qpair failed and we were unable to recover it. 00:36:06.066 [2024-07-14 10:44:50.980843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.066 [2024-07-14 10:44:50.980874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.066 qpair failed and we were unable to recover it. 00:36:06.066 [2024-07-14 10:44:50.981074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.066 [2024-07-14 10:44:50.981105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.066 qpair failed and we were unable to recover it. 00:36:06.066 [2024-07-14 10:44:50.981295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.066 [2024-07-14 10:44:50.981327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.066 qpair failed and we were unable to recover it. 
00:36:06.066 [2024-07-14 10:44:50.981591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.066 [2024-07-14 10:44:50.981623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.066 qpair failed and we were unable to recover it. 00:36:06.066 [2024-07-14 10:44:50.981742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.066 [2024-07-14 10:44:50.981773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.066 qpair failed and we were unable to recover it. 00:36:06.066 [2024-07-14 10:44:50.981945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.066 [2024-07-14 10:44:50.981976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.066 qpair failed and we were unable to recover it. 00:36:06.066 [2024-07-14 10:44:50.982175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.066 [2024-07-14 10:44:50.982206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.066 qpair failed and we were unable to recover it. 00:36:06.066 [2024-07-14 10:44:50.982436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.066 [2024-07-14 10:44:50.982468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.066 qpair failed and we were unable to recover it. 00:36:06.066 [2024-07-14 10:44:50.982724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.066 [2024-07-14 10:44:50.982755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.066 qpair failed and we were unable to recover it. 00:36:06.066 [2024-07-14 10:44:50.982969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.066 [2024-07-14 10:44:50.983000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.066 qpair failed and we were unable to recover it. 00:36:06.066 [2024-07-14 10:44:50.983216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.066 [2024-07-14 10:44:50.983259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.067 qpair failed and we were unable to recover it. 00:36:06.067 [2024-07-14 10:44:50.983448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.067 [2024-07-14 10:44:50.983479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.067 qpair failed and we were unable to recover it. 00:36:06.067 [2024-07-14 10:44:50.983657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.067 [2024-07-14 10:44:50.983688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.067 qpair failed and we were unable to recover it. 
00:36:06.067 [2024-07-14 10:44:50.983818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.067 [2024-07-14 10:44:50.983850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.067 qpair failed and we were unable to recover it. 00:36:06.067 [2024-07-14 10:44:50.983968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.067 [2024-07-14 10:44:50.983999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.067 qpair failed and we were unable to recover it. 00:36:06.067 [2024-07-14 10:44:50.984175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.067 [2024-07-14 10:44:50.984207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.067 qpair failed and we were unable to recover it. 00:36:06.067 [2024-07-14 10:44:50.984457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.067 [2024-07-14 10:44:50.984489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.067 qpair failed and we were unable to recover it. 00:36:06.067 [2024-07-14 10:44:50.984676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.067 [2024-07-14 10:44:50.984707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.067 qpair failed and we were unable to recover it. 00:36:06.067 [2024-07-14 10:44:50.984883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.067 [2024-07-14 10:44:50.984914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.067 qpair failed and we were unable to recover it. 00:36:06.067 [2024-07-14 10:44:50.985118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.067 [2024-07-14 10:44:50.985149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.067 qpair failed and we were unable to recover it. 00:36:06.067 [2024-07-14 10:44:50.985339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.067 [2024-07-14 10:44:50.985372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.067 qpair failed and we were unable to recover it. 00:36:06.067 [2024-07-14 10:44:50.985492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.067 [2024-07-14 10:44:50.985524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.067 qpair failed and we were unable to recover it. 00:36:06.067 [2024-07-14 10:44:50.985722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.067 [2024-07-14 10:44:50.985753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.067 qpair failed and we were unable to recover it. 
00:36:06.067 [2024-07-14 10:44:50.985934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.067 [2024-07-14 10:44:50.985966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.067 qpair failed and we were unable to recover it. 00:36:06.067 [2024-07-14 10:44:50.986146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.067 [2024-07-14 10:44:50.986177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.067 qpair failed and we were unable to recover it. 00:36:06.067 [2024-07-14 10:44:50.986383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.067 [2024-07-14 10:44:50.986415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.067 qpair failed and we were unable to recover it. 00:36:06.067 [2024-07-14 10:44:50.986547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.067 [2024-07-14 10:44:50.986578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.067 qpair failed and we were unable to recover it. 00:36:06.067 [2024-07-14 10:44:50.986774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.067 [2024-07-14 10:44:50.986805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.067 qpair failed and we were unable to recover it. 00:36:06.067 [2024-07-14 10:44:50.986929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.067 [2024-07-14 10:44:50.986960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.067 qpair failed and we were unable to recover it. 00:36:06.067 [2024-07-14 10:44:50.987206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.067 [2024-07-14 10:44:50.987246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.067 qpair failed and we were unable to recover it. 00:36:06.067 [2024-07-14 10:44:50.987421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.067 [2024-07-14 10:44:50.987452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.067 qpair failed and we were unable to recover it. 00:36:06.067 [2024-07-14 10:44:50.987647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.067 [2024-07-14 10:44:50.987678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.067 qpair failed and we were unable to recover it. 00:36:06.067 [2024-07-14 10:44:50.987923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.067 [2024-07-14 10:44:50.987954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.067 qpair failed and we were unable to recover it. 
00:36:06.067 [2024-07-14 10:44:50.988210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.067 [2024-07-14 10:44:50.988249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.067 qpair failed and we were unable to recover it. 00:36:06.067 [2024-07-14 10:44:50.988370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.067 [2024-07-14 10:44:50.988400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.067 qpair failed and we were unable to recover it. 00:36:06.067 [2024-07-14 10:44:50.988521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.067 [2024-07-14 10:44:50.988552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.067 qpair failed and we were unable to recover it. 00:36:06.067 [2024-07-14 10:44:50.988841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.067 [2024-07-14 10:44:50.988876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.067 qpair failed and we were unable to recover it. 00:36:06.067 [2024-07-14 10:44:50.989066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.067 [2024-07-14 10:44:50.989097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.067 qpair failed and we were unable to recover it. 00:36:06.067 [2024-07-14 10:44:50.989248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.067 [2024-07-14 10:44:50.989281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.067 qpair failed and we were unable to recover it. 00:36:06.067 [2024-07-14 10:44:50.989463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.067 [2024-07-14 10:44:50.989494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.067 qpair failed and we were unable to recover it. 00:36:06.067 [2024-07-14 10:44:50.989638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.067 [2024-07-14 10:44:50.989669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.067 qpair failed and we were unable to recover it. 00:36:06.067 [2024-07-14 10:44:50.989794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.067 [2024-07-14 10:44:50.989826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.067 qpair failed and we were unable to recover it. 00:36:06.067 [2024-07-14 10:44:50.990005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.067 [2024-07-14 10:44:50.990037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.067 qpair failed and we were unable to recover it. 
00:36:06.067 [2024-07-14 10:44:50.990215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.067 [2024-07-14 10:44:50.990270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.067 qpair failed and we were unable to recover it. 00:36:06.067 [2024-07-14 10:44:50.990536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.067 [2024-07-14 10:44:50.990568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.067 qpair failed and we were unable to recover it. 00:36:06.067 [2024-07-14 10:44:50.990840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.067 [2024-07-14 10:44:50.990871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.067 qpair failed and we were unable to recover it. 00:36:06.067 [2024-07-14 10:44:50.991013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.067 [2024-07-14 10:44:50.991043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.067 qpair failed and we were unable to recover it. 00:36:06.067 [2024-07-14 10:44:50.991330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.067 [2024-07-14 10:44:50.991362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.067 qpair failed and we were unable to recover it. 00:36:06.067 [2024-07-14 10:44:50.991574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.067 [2024-07-14 10:44:50.991606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.067 qpair failed and we were unable to recover it. 00:36:06.067 [2024-07-14 10:44:50.991797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.067 [2024-07-14 10:44:50.991828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.067 qpair failed and we were unable to recover it. 00:36:06.067 [2024-07-14 10:44:50.992106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.067 [2024-07-14 10:44:50.992137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.067 qpair failed and we were unable to recover it. 00:36:06.067 [2024-07-14 10:44:50.992404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.068 [2024-07-14 10:44:50.992436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.068 qpair failed and we were unable to recover it. 00:36:06.068 [2024-07-14 10:44:50.992567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.068 [2024-07-14 10:44:50.992599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.068 qpair failed and we were unable to recover it. 
00:36:06.068 [2024-07-14 10:44:50.992843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.068 [2024-07-14 10:44:50.992874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.068 qpair failed and we were unable to recover it. 00:36:06.068 [2024-07-14 10:44:50.993010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.068 [2024-07-14 10:44:50.993041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.068 qpair failed and we were unable to recover it. 00:36:06.068 [2024-07-14 10:44:50.993218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.068 [2024-07-14 10:44:50.993258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.068 qpair failed and we were unable to recover it. 00:36:06.068 [2024-07-14 10:44:50.993446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.068 [2024-07-14 10:44:50.993478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.068 qpair failed and we were unable to recover it. 00:36:06.068 [2024-07-14 10:44:50.993601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.068 [2024-07-14 10:44:50.993632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.068 qpair failed and we were unable to recover it. 00:36:06.068 [2024-07-14 10:44:50.993742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.068 [2024-07-14 10:44:50.993773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.068 qpair failed and we were unable to recover it. 00:36:06.068 [2024-07-14 10:44:50.993905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.068 [2024-07-14 10:44:50.993936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.068 qpair failed and we were unable to recover it. 00:36:06.068 [2024-07-14 10:44:50.994157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.068 [2024-07-14 10:44:50.994187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.068 qpair failed and we were unable to recover it. 00:36:06.068 [2024-07-14 10:44:50.994369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.068 [2024-07-14 10:44:50.994401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.068 qpair failed and we were unable to recover it. 00:36:06.068 [2024-07-14 10:44:50.994516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.068 [2024-07-14 10:44:50.994547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.068 qpair failed and we were unable to recover it. 
00:36:06.068 [2024-07-14 10:44:50.994732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.068 [2024-07-14 10:44:50.994763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.068 qpair failed and we were unable to recover it. 00:36:06.068 [2024-07-14 10:44:50.994937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.068 [2024-07-14 10:44:50.994968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.068 qpair failed and we were unable to recover it. 00:36:06.068 [2024-07-14 10:44:50.995154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.068 [2024-07-14 10:44:50.995184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.068 qpair failed and we were unable to recover it. 00:36:06.068 [2024-07-14 10:44:50.995391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.068 [2024-07-14 10:44:50.995423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.068 qpair failed and we were unable to recover it. 00:36:06.068 [2024-07-14 10:44:50.995615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.068 [2024-07-14 10:44:50.995646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.068 qpair failed and we were unable to recover it. 00:36:06.068 [2024-07-14 10:44:50.995889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.068 [2024-07-14 10:44:50.995920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.068 qpair failed and we were unable to recover it. 00:36:06.068 [2024-07-14 10:44:50.996134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.068 [2024-07-14 10:44:50.996165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.068 qpair failed and we were unable to recover it. 00:36:06.068 [2024-07-14 10:44:50.996362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.068 [2024-07-14 10:44:50.996395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.068 qpair failed and we were unable to recover it. 00:36:06.068 [2024-07-14 10:44:50.996511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.068 [2024-07-14 10:44:50.996541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.068 qpair failed and we were unable to recover it. 00:36:06.068 [2024-07-14 10:44:50.996765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.068 [2024-07-14 10:44:50.996794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.068 qpair failed and we were unable to recover it. 
00:36:06.068 [2024-07-14 10:44:50.996927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.068 [2024-07-14 10:44:50.996957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.068 qpair failed and we were unable to recover it. 00:36:06.068 [2024-07-14 10:44:50.997167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.068 [2024-07-14 10:44:50.997199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.068 qpair failed and we were unable to recover it. 00:36:06.068 [2024-07-14 10:44:50.997449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.068 [2024-07-14 10:44:50.997480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.068 qpair failed and we were unable to recover it. 00:36:06.068 [2024-07-14 10:44:50.997730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.068 [2024-07-14 10:44:50.997766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.068 qpair failed and we were unable to recover it. 00:36:06.068 [2024-07-14 10:44:50.997903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.068 [2024-07-14 10:44:50.997934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.068 qpair failed and we were unable to recover it. 00:36:06.068 [2024-07-14 10:44:50.998123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.068 [2024-07-14 10:44:50.998153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.068 qpair failed and we were unable to recover it. 00:36:06.068 [2024-07-14 10:44:50.998336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.068 [2024-07-14 10:44:50.998370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.068 qpair failed and we were unable to recover it. 00:36:06.068 [2024-07-14 10:44:50.998543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.068 [2024-07-14 10:44:50.998573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.068 qpair failed and we were unable to recover it. 00:36:06.068 [2024-07-14 10:44:50.998752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.068 [2024-07-14 10:44:50.998783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.068 qpair failed and we were unable to recover it. 00:36:06.068 [2024-07-14 10:44:50.999052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.068 [2024-07-14 10:44:50.999083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.068 qpair failed and we were unable to recover it. 
00:36:06.068 [2024-07-14 10:44:50.999274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.068 [2024-07-14 10:44:50.999306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.068 qpair failed and we were unable to recover it. 00:36:06.068 [2024-07-14 10:44:50.999431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.068 [2024-07-14 10:44:50.999462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.068 qpair failed and we were unable to recover it. 00:36:06.068 [2024-07-14 10:44:50.999667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.068 [2024-07-14 10:44:50.999698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.068 qpair failed and we were unable to recover it. 00:36:06.068 [2024-07-14 10:44:50.999944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.068 [2024-07-14 10:44:50.999975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.068 qpair failed and we were unable to recover it. 00:36:06.068 [2024-07-14 10:44:51.000153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.068 [2024-07-14 10:44:51.000183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.068 qpair failed and we were unable to recover it. 00:36:06.068 [2024-07-14 10:44:51.000319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.068 [2024-07-14 10:44:51.000350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.068 qpair failed and we were unable to recover it. 00:36:06.068 [2024-07-14 10:44:51.000483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.068 [2024-07-14 10:44:51.000515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.068 qpair failed and we were unable to recover it. 00:36:06.068 [2024-07-14 10:44:51.000760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.068 [2024-07-14 10:44:51.000791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.068 qpair failed and we were unable to recover it. 00:36:06.068 [2024-07-14 10:44:51.000915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.068 [2024-07-14 10:44:51.000946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.068 qpair failed and we were unable to recover it. 00:36:06.069 [2024-07-14 10:44:51.001190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.069 [2024-07-14 10:44:51.001221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.069 qpair failed and we were unable to recover it. 
00:36:06.069 [2024-07-14 10:44:51.001405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.069 [2024-07-14 10:44:51.001436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.069 qpair failed and we were unable to recover it. 00:36:06.069 [2024-07-14 10:44:51.001552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.069 [2024-07-14 10:44:51.001583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.069 qpair failed and we were unable to recover it. 00:36:06.069 [2024-07-14 10:44:51.001853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.069 [2024-07-14 10:44:51.001884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.069 qpair failed and we were unable to recover it. 00:36:06.069 [2024-07-14 10:44:51.002067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.069 [2024-07-14 10:44:51.002098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.069 qpair failed and we were unable to recover it. 00:36:06.069 [2024-07-14 10:44:51.002245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.069 [2024-07-14 10:44:51.002278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.069 qpair failed and we were unable to recover it. 00:36:06.069 [2024-07-14 10:44:51.002467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.069 [2024-07-14 10:44:51.002497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.069 qpair failed and we were unable to recover it. 00:36:06.069 [2024-07-14 10:44:51.002669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.069 [2024-07-14 10:44:51.002700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.069 qpair failed and we were unable to recover it. 00:36:06.069 [2024-07-14 10:44:51.002830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.069 [2024-07-14 10:44:51.002862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.069 qpair failed and we were unable to recover it. 00:36:06.069 [2024-07-14 10:44:51.002972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.069 [2024-07-14 10:44:51.003003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.069 qpair failed and we were unable to recover it. 00:36:06.069 [2024-07-14 10:44:51.003199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.069 [2024-07-14 10:44:51.003247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.069 qpair failed and we were unable to recover it. 
00:36:06.069 [2024-07-14 10:44:51.003597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.069 [2024-07-14 10:44:51.003668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.069 qpair failed and we were unable to recover it. 00:36:06.069 [2024-07-14 10:44:51.003881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.069 [2024-07-14 10:44:51.003915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.069 qpair failed and we were unable to recover it. 00:36:06.069 [2024-07-14 10:44:51.004105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.069 [2024-07-14 10:44:51.004137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.069 qpair failed and we were unable to recover it. 00:36:06.069 [2024-07-14 10:44:51.004335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.069 [2024-07-14 10:44:51.004370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.069 qpair failed and we were unable to recover it. 00:36:06.069 [2024-07-14 10:44:51.004585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.069 [2024-07-14 10:44:51.004616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.069 qpair failed and we were unable to recover it. 00:36:06.069 [2024-07-14 10:44:51.004794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.069 [2024-07-14 10:44:51.004825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.069 qpair failed and we were unable to recover it. 00:36:06.069 [2024-07-14 10:44:51.004961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.069 [2024-07-14 10:44:51.004992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.069 qpair failed and we were unable to recover it. 00:36:06.069 [2024-07-14 10:44:51.005430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.069 [2024-07-14 10:44:51.005465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.069 qpair failed and we were unable to recover it. 00:36:06.069 [2024-07-14 10:44:51.005737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.069 [2024-07-14 10:44:51.005769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.069 qpair failed and we were unable to recover it. 00:36:06.069 [2024-07-14 10:44:51.005969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.069 [2024-07-14 10:44:51.005999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.069 qpair failed and we were unable to recover it. 
00:36:06.069 [2024-07-14 10:44:51.006250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.069 [2024-07-14 10:44:51.006292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.069 qpair failed and we were unable to recover it. 00:36:06.069 [2024-07-14 10:44:51.006540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.069 [2024-07-14 10:44:51.006570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.069 qpair failed and we were unable to recover it. 00:36:06.069 [2024-07-14 10:44:51.006679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.069 [2024-07-14 10:44:51.006711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.069 qpair failed and we were unable to recover it. 00:36:06.069 [2024-07-14 10:44:51.006980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.069 [2024-07-14 10:44:51.007012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.069 qpair failed and we were unable to recover it. 00:36:06.069 [2024-07-14 10:44:51.007223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.069 [2024-07-14 10:44:51.007270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.069 qpair failed and we were unable to recover it. 00:36:06.069 [2024-07-14 10:44:51.007473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.069 [2024-07-14 10:44:51.007504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.069 qpair failed and we were unable to recover it. 00:36:06.069 [2024-07-14 10:44:51.007630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.069 [2024-07-14 10:44:51.007662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.069 qpair failed and we were unable to recover it. 00:36:06.069 [2024-07-14 10:44:51.007946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.069 [2024-07-14 10:44:51.007977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.069 qpair failed and we were unable to recover it. 00:36:06.069 [2024-07-14 10:44:51.008239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.069 [2024-07-14 10:44:51.008271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.069 qpair failed and we were unable to recover it. 00:36:06.069 [2024-07-14 10:44:51.008514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.069 [2024-07-14 10:44:51.008545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.069 qpair failed and we were unable to recover it. 
00:36:06.069 [2024-07-14 10:44:51.008675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.069 [2024-07-14 10:44:51.008707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.069 qpair failed and we were unable to recover it. 00:36:06.069 [2024-07-14 10:44:51.008898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.069 [2024-07-14 10:44:51.008929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.069 qpair failed and we were unable to recover it. 00:36:06.069 [2024-07-14 10:44:51.009120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.069 [2024-07-14 10:44:51.009151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.069 qpair failed and we were unable to recover it. 00:36:06.069 [2024-07-14 10:44:51.009273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.069 [2024-07-14 10:44:51.009306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.069 qpair failed and we were unable to recover it. 00:36:06.069 [2024-07-14 10:44:51.009479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.069 [2024-07-14 10:44:51.009511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.069 qpair failed and we were unable to recover it. 00:36:06.069 [2024-07-14 10:44:51.009705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.069 [2024-07-14 10:44:51.009736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.069 qpair failed and we were unable to recover it. 00:36:06.069 [2024-07-14 10:44:51.009938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.069 [2024-07-14 10:44:51.009969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.069 qpair failed and we were unable to recover it. 00:36:06.069 [2024-07-14 10:44:51.010158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.069 [2024-07-14 10:44:51.010196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.069 qpair failed and we were unable to recover it. 00:36:06.069 [2024-07-14 10:44:51.010349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.069 [2024-07-14 10:44:51.010385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.069 qpair failed and we were unable to recover it. 00:36:06.069 [2024-07-14 10:44:51.010499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.069 [2024-07-14 10:44:51.010531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.069 qpair failed and we were unable to recover it. 
00:36:06.070 [2024-07-14 10:44:51.010745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.070 [2024-07-14 10:44:51.010776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.070 qpair failed and we were unable to recover it. 00:36:06.070 [2024-07-14 10:44:51.010968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.070 [2024-07-14 10:44:51.010999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.070 qpair failed and we were unable to recover it. 00:36:06.070 [2024-07-14 10:44:51.011179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.070 [2024-07-14 10:44:51.011211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.070 qpair failed and we were unable to recover it. 00:36:06.070 [2024-07-14 10:44:51.011431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.070 [2024-07-14 10:44:51.011463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.070 qpair failed and we were unable to recover it. 00:36:06.070 [2024-07-14 10:44:51.011636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.070 [2024-07-14 10:44:51.011667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.070 qpair failed and we were unable to recover it. 00:36:06.070 [2024-07-14 10:44:51.011911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.070 [2024-07-14 10:44:51.011942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.070 qpair failed and we were unable to recover it. 00:36:06.070 [2024-07-14 10:44:51.012058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.070 [2024-07-14 10:44:51.012090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.070 qpair failed and we were unable to recover it. 00:36:06.070 [2024-07-14 10:44:51.012202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.070 [2024-07-14 10:44:51.012241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.070 qpair failed and we were unable to recover it. 00:36:06.070 [2024-07-14 10:44:51.012508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.070 [2024-07-14 10:44:51.012539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.070 qpair failed and we were unable to recover it. 00:36:06.070 [2024-07-14 10:44:51.012728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.070 [2024-07-14 10:44:51.012759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.070 qpair failed and we were unable to recover it. 
00:36:06.070 [2024-07-14 10:44:51.012930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.070 [2024-07-14 10:44:51.012961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.070 qpair failed and we were unable to recover it. 00:36:06.070 [2024-07-14 10:44:51.013185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.070 [2024-07-14 10:44:51.013217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.070 qpair failed and we were unable to recover it. 00:36:06.070 [2024-07-14 10:44:51.013501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.070 [2024-07-14 10:44:51.013533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.070 qpair failed and we were unable to recover it. 00:36:06.070 [2024-07-14 10:44:51.013722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.070 [2024-07-14 10:44:51.013753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.070 qpair failed and we were unable to recover it. 00:36:06.070 [2024-07-14 10:44:51.013996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.070 [2024-07-14 10:44:51.014027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.070 qpair failed and we were unable to recover it. 00:36:06.070 [2024-07-14 10:44:51.014212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.070 [2024-07-14 10:44:51.014264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.070 qpair failed and we were unable to recover it. 00:36:06.070 [2024-07-14 10:44:51.014462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.070 [2024-07-14 10:44:51.014493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.070 qpair failed and we were unable to recover it. 00:36:06.070 [2024-07-14 10:44:51.014624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.070 [2024-07-14 10:44:51.014655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.070 qpair failed and we were unable to recover it. 00:36:06.070 [2024-07-14 10:44:51.014848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.070 [2024-07-14 10:44:51.014879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.070 qpair failed and we were unable to recover it. 00:36:06.070 [2024-07-14 10:44:51.015122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.070 [2024-07-14 10:44:51.015153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.070 qpair failed and we were unable to recover it. 
00:36:06.070 [2024-07-14 10:44:51.015282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.070 [2024-07-14 10:44:51.015315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.070 qpair failed and we were unable to recover it. 00:36:06.070 [2024-07-14 10:44:51.015588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.070 [2024-07-14 10:44:51.015619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.070 qpair failed and we were unable to recover it. 00:36:06.070 [2024-07-14 10:44:51.015865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.070 [2024-07-14 10:44:51.015896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.070 qpair failed and we were unable to recover it. 00:36:06.070 [2024-07-14 10:44:51.016000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.070 [2024-07-14 10:44:51.016031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.070 qpair failed and we were unable to recover it. 00:36:06.070 [2024-07-14 10:44:51.016173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.070 [2024-07-14 10:44:51.016210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.070 qpair failed and we were unable to recover it. 00:36:06.070 [2024-07-14 10:44:51.016509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.070 [2024-07-14 10:44:51.016540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.070 qpair failed and we were unable to recover it. 00:36:06.070 [2024-07-14 10:44:51.016833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.070 [2024-07-14 10:44:51.016864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.070 qpair failed and we were unable to recover it. 00:36:06.070 [2024-07-14 10:44:51.017112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.070 [2024-07-14 10:44:51.017143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.070 qpair failed and we were unable to recover it. 00:36:06.070 [2024-07-14 10:44:51.017322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.070 [2024-07-14 10:44:51.017354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.070 qpair failed and we were unable to recover it. 00:36:06.070 [2024-07-14 10:44:51.017534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.070 [2024-07-14 10:44:51.017565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.070 qpair failed and we were unable to recover it. 
00:36:06.070 [2024-07-14 10:44:51.017752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.070 [2024-07-14 10:44:51.017783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.070 qpair failed and we were unable to recover it. 00:36:06.070 [2024-07-14 10:44:51.017915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.070 [2024-07-14 10:44:51.017946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.070 qpair failed and we were unable to recover it. 00:36:06.070 [2024-07-14 10:44:51.018134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.070 [2024-07-14 10:44:51.018165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.070 qpair failed and we were unable to recover it. 00:36:06.070 [2024-07-14 10:44:51.018299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.070 [2024-07-14 10:44:51.018334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.070 qpair failed and we were unable to recover it. 00:36:06.070 [2024-07-14 10:44:51.018529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.070 [2024-07-14 10:44:51.018560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.070 qpair failed and we were unable to recover it. 00:36:06.070 [2024-07-14 10:44:51.018757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.070 [2024-07-14 10:44:51.018788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.070 qpair failed and we were unable to recover it. 00:36:06.071 [2024-07-14 10:44:51.018998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.071 [2024-07-14 10:44:51.019029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.071 qpair failed and we were unable to recover it. 00:36:06.071 [2024-07-14 10:44:51.019217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.071 [2024-07-14 10:44:51.019268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.071 qpair failed and we were unable to recover it. 00:36:06.071 [2024-07-14 10:44:51.019403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.071 [2024-07-14 10:44:51.019435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.071 qpair failed and we were unable to recover it. 00:36:06.071 [2024-07-14 10:44:51.019572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.071 [2024-07-14 10:44:51.019604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.071 qpair failed and we were unable to recover it. 
00:36:06.071 [2024-07-14 10:44:51.019744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.071 [2024-07-14 10:44:51.019776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.071 qpair failed and we were unable to recover it. 00:36:06.071 [2024-07-14 10:44:51.019973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.071 [2024-07-14 10:44:51.020005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.071 qpair failed and we were unable to recover it. 00:36:06.071 [2024-07-14 10:44:51.020135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.071 [2024-07-14 10:44:51.020167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.071 qpair failed and we were unable to recover it. 00:36:06.071 [2024-07-14 10:44:51.020363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.071 [2024-07-14 10:44:51.020395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.071 qpair failed and we were unable to recover it. 00:36:06.071 [2024-07-14 10:44:51.020525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.071 [2024-07-14 10:44:51.020555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.071 qpair failed and we were unable to recover it. 00:36:06.071 [2024-07-14 10:44:51.020675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.071 [2024-07-14 10:44:51.020705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.071 qpair failed and we were unable to recover it. 00:36:06.071 [2024-07-14 10:44:51.020897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.071 [2024-07-14 10:44:51.020929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.071 qpair failed and we were unable to recover it. 00:36:06.071 [2024-07-14 10:44:51.021106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.071 [2024-07-14 10:44:51.021138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.071 qpair failed and we were unable to recover it. 00:36:06.071 [2024-07-14 10:44:51.021260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.071 [2024-07-14 10:44:51.021293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.071 qpair failed and we were unable to recover it. 00:36:06.071 [2024-07-14 10:44:51.021414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.071 [2024-07-14 10:44:51.021446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.071 qpair failed and we were unable to recover it. 
00:36:06.071 [2024-07-14 10:44:51.021634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.071 [2024-07-14 10:44:51.021665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.071 qpair failed and we were unable to recover it. 00:36:06.071 [2024-07-14 10:44:51.021909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.071 [2024-07-14 10:44:51.021945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.071 qpair failed and we were unable to recover it. 00:36:06.071 [2024-07-14 10:44:51.022141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.071 [2024-07-14 10:44:51.022173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.071 qpair failed and we were unable to recover it. 00:36:06.071 [2024-07-14 10:44:51.022357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.071 [2024-07-14 10:44:51.022389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.071 qpair failed and we were unable to recover it. 00:36:06.071 [2024-07-14 10:44:51.022584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.071 [2024-07-14 10:44:51.022615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.071 qpair failed and we were unable to recover it. 00:36:06.071 [2024-07-14 10:44:51.022803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.071 [2024-07-14 10:44:51.022833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.071 qpair failed and we were unable to recover it. 00:36:06.071 [2024-07-14 10:44:51.023013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.071 [2024-07-14 10:44:51.023044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.071 qpair failed and we were unable to recover it. 00:36:06.071 [2024-07-14 10:44:51.023288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.071 [2024-07-14 10:44:51.023322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.071 qpair failed and we were unable to recover it. 00:36:06.071 [2024-07-14 10:44:51.023536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.071 [2024-07-14 10:44:51.023566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.071 qpair failed and we were unable to recover it. 00:36:06.071 [2024-07-14 10:44:51.023769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.071 [2024-07-14 10:44:51.023801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.071 qpair failed and we were unable to recover it. 
00:36:06.071 [2024-07-14 10:44:51.024051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.071 [2024-07-14 10:44:51.024083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.071 qpair failed and we were unable to recover it. 00:36:06.071 [2024-07-14 10:44:51.024263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.071 [2024-07-14 10:44:51.024295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.071 qpair failed and we were unable to recover it. 00:36:06.071 [2024-07-14 10:44:51.024429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.071 [2024-07-14 10:44:51.024461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.071 qpair failed and we were unable to recover it. 00:36:06.071 [2024-07-14 10:44:51.024597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.071 [2024-07-14 10:44:51.024628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.071 qpair failed and we were unable to recover it. 00:36:06.071 [2024-07-14 10:44:51.024817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.071 [2024-07-14 10:44:51.024848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.071 qpair failed and we were unable to recover it. 00:36:06.071 [2024-07-14 10:44:51.025022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.071 [2024-07-14 10:44:51.025091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.071 qpair failed and we were unable to recover it. 00:36:06.071 [2024-07-14 10:44:51.025253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.071 [2024-07-14 10:44:51.025290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.071 qpair failed and we were unable to recover it. 00:36:06.071 [2024-07-14 10:44:51.025485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.071 [2024-07-14 10:44:51.025516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.071 qpair failed and we were unable to recover it. 00:36:06.071 [2024-07-14 10:44:51.025729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.071 [2024-07-14 10:44:51.025761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.071 qpair failed and we were unable to recover it. 00:36:06.071 [2024-07-14 10:44:51.025956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.071 [2024-07-14 10:44:51.025988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.071 qpair failed and we were unable to recover it. 
00:36:06.071 [2024-07-14 10:44:51.026180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.071 [2024-07-14 10:44:51.026211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.071 qpair failed and we were unable to recover it. 00:36:06.071 [2024-07-14 10:44:51.026498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.071 [2024-07-14 10:44:51.026530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.071 qpair failed and we were unable to recover it. 00:36:06.071 [2024-07-14 10:44:51.026739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.071 [2024-07-14 10:44:51.026771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.071 qpair failed and we were unable to recover it. 00:36:06.071 [2024-07-14 10:44:51.027013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.071 [2024-07-14 10:44:51.027043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.071 qpair failed and we were unable to recover it. 00:36:06.071 [2024-07-14 10:44:51.027155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.071 [2024-07-14 10:44:51.027185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.071 qpair failed and we were unable to recover it. 00:36:06.071 [2024-07-14 10:44:51.027448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.071 [2024-07-14 10:44:51.027480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.071 qpair failed and we were unable to recover it. 00:36:06.071 [2024-07-14 10:44:51.027666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.071 [2024-07-14 10:44:51.027697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.071 qpair failed and we were unable to recover it. 00:36:06.072 [2024-07-14 10:44:51.027918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.072 [2024-07-14 10:44:51.027949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.072 qpair failed and we were unable to recover it. 00:36:06.072 [2024-07-14 10:44:51.028065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.072 [2024-07-14 10:44:51.028102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.072 qpair failed and we were unable to recover it. 00:36:06.072 [2024-07-14 10:44:51.028291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.072 [2024-07-14 10:44:51.028321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.072 qpair failed and we were unable to recover it. 
00:36:06.072 [2024-07-14 10:44:51.028531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.072 [2024-07-14 10:44:51.028563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.072 qpair failed and we were unable to recover it. 00:36:06.072 [2024-07-14 10:44:51.028754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.072 [2024-07-14 10:44:51.028785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.072 qpair failed and we were unable to recover it. 00:36:06.072 [2024-07-14 10:44:51.029015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.072 [2024-07-14 10:44:51.029046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.072 qpair failed and we were unable to recover it. 00:36:06.072 [2024-07-14 10:44:51.029305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.072 [2024-07-14 10:44:51.029337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.072 qpair failed and we were unable to recover it. 00:36:06.072 [2024-07-14 10:44:51.029514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.072 [2024-07-14 10:44:51.029544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.072 qpair failed and we were unable to recover it. 00:36:06.072 [2024-07-14 10:44:51.029755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.072 [2024-07-14 10:44:51.029786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.072 qpair failed and we were unable to recover it. 00:36:06.072 [2024-07-14 10:44:51.029904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.072 [2024-07-14 10:44:51.029934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.072 qpair failed and we were unable to recover it. 00:36:06.072 [2024-07-14 10:44:51.030144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.072 [2024-07-14 10:44:51.030174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.072 qpair failed and we were unable to recover it. 00:36:06.072 [2024-07-14 10:44:51.030391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.072 [2024-07-14 10:44:51.030423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.072 qpair failed and we were unable to recover it. 00:36:06.072 [2024-07-14 10:44:51.030557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.072 [2024-07-14 10:44:51.030588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.072 qpair failed and we were unable to recover it. 
00:36:06.072 [2024-07-14 10:44:51.030816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.072 [2024-07-14 10:44:51.030847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.072 qpair failed and we were unable to recover it. 00:36:06.072 [2024-07-14 10:44:51.031098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.072 [2024-07-14 10:44:51.031130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.072 qpair failed and we were unable to recover it. 00:36:06.072 [2024-07-14 10:44:51.031322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.072 [2024-07-14 10:44:51.031354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.072 qpair failed and we were unable to recover it. 00:36:06.072 [2024-07-14 10:44:51.031600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.072 [2024-07-14 10:44:51.031631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.072 qpair failed and we were unable to recover it. 00:36:06.072 [2024-07-14 10:44:51.031854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.072 [2024-07-14 10:44:51.031885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.072 qpair failed and we were unable to recover it. 00:36:06.072 [2024-07-14 10:44:51.032086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.072 [2024-07-14 10:44:51.032117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.072 qpair failed and we were unable to recover it. 00:36:06.359 [2024-07-14 10:44:51.032244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.359 [2024-07-14 10:44:51.032275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.359 qpair failed and we were unable to recover it. 00:36:06.359 [2024-07-14 10:44:51.032464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.359 [2024-07-14 10:44:51.032494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.359 qpair failed and we were unable to recover it. 00:36:06.359 [2024-07-14 10:44:51.032616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.359 [2024-07-14 10:44:51.032646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.359 qpair failed and we were unable to recover it. 00:36:06.359 [2024-07-14 10:44:51.032833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.359 [2024-07-14 10:44:51.032864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.359 qpair failed and we were unable to recover it. 
00:36:06.359 [2024-07-14 10:44:51.032991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.359 [2024-07-14 10:44:51.033022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.359 qpair failed and we were unable to recover it. 00:36:06.359 [2024-07-14 10:44:51.033201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.359 [2024-07-14 10:44:51.033243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.359 qpair failed and we were unable to recover it. 00:36:06.359 [2024-07-14 10:44:51.033373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.359 [2024-07-14 10:44:51.033403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.359 qpair failed and we were unable to recover it. 00:36:06.359 [2024-07-14 10:44:51.033645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.359 [2024-07-14 10:44:51.033675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.359 qpair failed and we were unable to recover it. 00:36:06.359 [2024-07-14 10:44:51.033861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.359 [2024-07-14 10:44:51.033892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.359 qpair failed and we were unable to recover it. 00:36:06.359 [2024-07-14 10:44:51.034109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.359 [2024-07-14 10:44:51.034141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.359 qpair failed and we were unable to recover it. 00:36:06.359 [2024-07-14 10:44:51.034293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.359 [2024-07-14 10:44:51.034325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.359 qpair failed and we were unable to recover it. 00:36:06.359 [2024-07-14 10:44:51.034518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.359 [2024-07-14 10:44:51.034550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.359 qpair failed and we were unable to recover it. 00:36:06.359 [2024-07-14 10:44:51.034744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.359 [2024-07-14 10:44:51.034775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.359 qpair failed and we were unable to recover it. 00:36:06.359 [2024-07-14 10:44:51.034951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.359 [2024-07-14 10:44:51.034982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.359 qpair failed and we were unable to recover it. 
00:36:06.359 [2024-07-14 10:44:51.035131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.359 [2024-07-14 10:44:51.035162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.359 qpair failed and we were unable to recover it. 00:36:06.359 [2024-07-14 10:44:51.035283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.359 [2024-07-14 10:44:51.035314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.359 qpair failed and we were unable to recover it. 00:36:06.359 [2024-07-14 10:44:51.035491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.359 [2024-07-14 10:44:51.035522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.359 qpair failed and we were unable to recover it. 00:36:06.359 [2024-07-14 10:44:51.035643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.359 [2024-07-14 10:44:51.035673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.359 qpair failed and we were unable to recover it. 00:36:06.359 [2024-07-14 10:44:51.035851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.359 [2024-07-14 10:44:51.035882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.360 qpair failed and we were unable to recover it. 00:36:06.360 [2024-07-14 10:44:51.036018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.360 [2024-07-14 10:44:51.036049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.360 qpair failed and we were unable to recover it. 00:36:06.360 [2024-07-14 10:44:51.036292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.360 [2024-07-14 10:44:51.036323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.360 qpair failed and we were unable to recover it. 00:36:06.360 [2024-07-14 10:44:51.036439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.360 [2024-07-14 10:44:51.036470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.360 qpair failed and we were unable to recover it. 00:36:06.360 [2024-07-14 10:44:51.036676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.360 [2024-07-14 10:44:51.036712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.360 qpair failed and we were unable to recover it. 00:36:06.360 [2024-07-14 10:44:51.036910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.360 [2024-07-14 10:44:51.036941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.360 qpair failed and we were unable to recover it. 
00:36:06.360 [2024-07-14 10:44:51.037182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.360 [2024-07-14 10:44:51.037213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.360 qpair failed and we were unable to recover it. 00:36:06.360 [2024-07-14 10:44:51.037416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.360 [2024-07-14 10:44:51.037448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.360 qpair failed and we were unable to recover it. 00:36:06.360 [2024-07-14 10:44:51.037648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.360 [2024-07-14 10:44:51.037679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.360 qpair failed and we were unable to recover it. 00:36:06.360 [2024-07-14 10:44:51.037871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.360 [2024-07-14 10:44:51.037902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.360 qpair failed and we were unable to recover it. 00:36:06.360 [2024-07-14 10:44:51.038098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.360 [2024-07-14 10:44:51.038130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.360 qpair failed and we were unable to recover it. 00:36:06.360 [2024-07-14 10:44:51.038337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.360 [2024-07-14 10:44:51.038368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.360 qpair failed and we were unable to recover it. 00:36:06.360 [2024-07-14 10:44:51.038511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.360 [2024-07-14 10:44:51.038542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.360 qpair failed and we were unable to recover it. 00:36:06.360 [2024-07-14 10:44:51.038683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.360 [2024-07-14 10:44:51.038714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.360 qpair failed and we were unable to recover it. 00:36:06.360 [2024-07-14 10:44:51.038891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.360 [2024-07-14 10:44:51.038921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.360 qpair failed and we were unable to recover it. 00:36:06.360 [2024-07-14 10:44:51.039052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.360 [2024-07-14 10:44:51.039083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.360 qpair failed and we were unable to recover it. 
00:36:06.360 [2024-07-14 10:44:51.039192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.360 [2024-07-14 10:44:51.039223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.360 qpair failed and we were unable to recover it. 00:36:06.360 [2024-07-14 10:44:51.039369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.360 [2024-07-14 10:44:51.039400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.360 qpair failed and we were unable to recover it. 00:36:06.360 [2024-07-14 10:44:51.039530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.360 [2024-07-14 10:44:51.039562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.360 qpair failed and we were unable to recover it. 00:36:06.360 [2024-07-14 10:44:51.039744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.360 [2024-07-14 10:44:51.039774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.360 qpair failed and we were unable to recover it. 00:36:06.360 [2024-07-14 10:44:51.039958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.360 [2024-07-14 10:44:51.039989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.360 qpair failed and we were unable to recover it. 00:36:06.360 [2024-07-14 10:44:51.040165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.360 [2024-07-14 10:44:51.040196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.360 qpair failed and we were unable to recover it. 00:36:06.360 [2024-07-14 10:44:51.040408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.360 [2024-07-14 10:44:51.040440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.360 qpair failed and we were unable to recover it. 00:36:06.360 [2024-07-14 10:44:51.040555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.360 [2024-07-14 10:44:51.040585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.360 qpair failed and we were unable to recover it. 00:36:06.360 [2024-07-14 10:44:51.040694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.360 [2024-07-14 10:44:51.040725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.360 qpair failed and we were unable to recover it. 00:36:06.360 [2024-07-14 10:44:51.040848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.360 [2024-07-14 10:44:51.040879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.360 qpair failed and we were unable to recover it. 
00:36:06.360 [2024-07-14 10:44:51.041146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.360 [2024-07-14 10:44:51.041177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.360 qpair failed and we were unable to recover it. 00:36:06.360 [2024-07-14 10:44:51.041333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.360 [2024-07-14 10:44:51.041365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.360 qpair failed and we were unable to recover it. 00:36:06.360 [2024-07-14 10:44:51.041565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.360 [2024-07-14 10:44:51.041596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.360 qpair failed and we were unable to recover it. 00:36:06.360 [2024-07-14 10:44:51.041798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.360 [2024-07-14 10:44:51.041829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.360 qpair failed and we were unable to recover it. 00:36:06.360 [2024-07-14 10:44:51.042078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.360 [2024-07-14 10:44:51.042109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.360 qpair failed and we were unable to recover it. 00:36:06.360 [2024-07-14 10:44:51.042309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.360 [2024-07-14 10:44:51.042342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.360 qpair failed and we were unable to recover it. 00:36:06.360 [2024-07-14 10:44:51.042563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.360 [2024-07-14 10:44:51.042594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.360 qpair failed and we were unable to recover it. 00:36:06.360 [2024-07-14 10:44:51.042788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.360 [2024-07-14 10:44:51.042819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.360 qpair failed and we were unable to recover it. 00:36:06.360 [2024-07-14 10:44:51.043028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.360 [2024-07-14 10:44:51.043059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.360 qpair failed and we were unable to recover it. 00:36:06.360 [2024-07-14 10:44:51.043327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.360 [2024-07-14 10:44:51.043359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.360 qpair failed and we were unable to recover it. 
00:36:06.360 [2024-07-14 10:44:51.043477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.360 [2024-07-14 10:44:51.043508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.360 qpair failed and we were unable to recover it. 00:36:06.360 [2024-07-14 10:44:51.043799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.360 [2024-07-14 10:44:51.043829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.360 qpair failed and we were unable to recover it. 00:36:06.360 [2024-07-14 10:44:51.044034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.360 [2024-07-14 10:44:51.044065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.360 qpair failed and we were unable to recover it. 00:36:06.360 [2024-07-14 10:44:51.044258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.360 [2024-07-14 10:44:51.044290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.360 qpair failed and we were unable to recover it. 00:36:06.360 [2024-07-14 10:44:51.044424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.360 [2024-07-14 10:44:51.044455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.360 qpair failed and we were unable to recover it. 00:36:06.361 [2024-07-14 10:44:51.044647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.361 [2024-07-14 10:44:51.044678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.361 qpair failed and we were unable to recover it. 00:36:06.361 [2024-07-14 10:44:51.044951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.361 [2024-07-14 10:44:51.044982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.361 qpair failed and we were unable to recover it. 00:36:06.361 [2024-07-14 10:44:51.045195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.361 [2024-07-14 10:44:51.045233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.361 qpair failed and we were unable to recover it. 00:36:06.361 [2024-07-14 10:44:51.045357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.361 [2024-07-14 10:44:51.045393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.361 qpair failed and we were unable to recover it. 00:36:06.361 [2024-07-14 10:44:51.045592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.361 [2024-07-14 10:44:51.045623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.361 qpair failed and we were unable to recover it. 
00:36:06.361 [2024-07-14 10:44:51.045800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.361 [2024-07-14 10:44:51.045831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.361 qpair failed and we were unable to recover it. 00:36:06.361 [2024-07-14 10:44:51.046081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.361 [2024-07-14 10:44:51.046112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.361 qpair failed and we were unable to recover it. 00:36:06.361 [2024-07-14 10:44:51.046239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.361 [2024-07-14 10:44:51.046271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.361 qpair failed and we were unable to recover it. 00:36:06.361 [2024-07-14 10:44:51.046469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.361 [2024-07-14 10:44:51.046500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.361 qpair failed and we were unable to recover it. 00:36:06.361 [2024-07-14 10:44:51.046695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.361 [2024-07-14 10:44:51.046727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.361 qpair failed and we were unable to recover it. 00:36:06.361 [2024-07-14 10:44:51.046860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.361 [2024-07-14 10:44:51.046891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.361 qpair failed and we were unable to recover it. 00:36:06.361 [2024-07-14 10:44:51.047069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.361 [2024-07-14 10:44:51.047100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.361 qpair failed and we were unable to recover it. 00:36:06.361 [2024-07-14 10:44:51.047235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.361 [2024-07-14 10:44:51.047267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.361 qpair failed and we were unable to recover it. 00:36:06.361 [2024-07-14 10:44:51.047513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.361 [2024-07-14 10:44:51.047544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.361 qpair failed and we were unable to recover it. 00:36:06.361 [2024-07-14 10:44:51.047789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.361 [2024-07-14 10:44:51.047821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.361 qpair failed and we were unable to recover it. 
00:36:06.361 [2024-07-14 10:44:51.047966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.361 [2024-07-14 10:44:51.047997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.361 qpair failed and we were unable to recover it. 00:36:06.361 [2024-07-14 10:44:51.048173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.361 [2024-07-14 10:44:51.048205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.361 qpair failed and we were unable to recover it. 00:36:06.361 [2024-07-14 10:44:51.048407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.361 [2024-07-14 10:44:51.048439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.361 qpair failed and we were unable to recover it. 00:36:06.361 [2024-07-14 10:44:51.048654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.361 [2024-07-14 10:44:51.048685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.361 qpair failed and we were unable to recover it. 00:36:06.361 [2024-07-14 10:44:51.048807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.361 [2024-07-14 10:44:51.048838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.361 qpair failed and we were unable to recover it. 00:36:06.361 [2024-07-14 10:44:51.049081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.361 [2024-07-14 10:44:51.049111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.361 qpair failed and we were unable to recover it. 00:36:06.361 [2024-07-14 10:44:51.049294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.361 [2024-07-14 10:44:51.049327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.361 qpair failed and we were unable to recover it. 00:36:06.361 [2024-07-14 10:44:51.049514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.361 [2024-07-14 10:44:51.049545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.361 qpair failed and we were unable to recover it. 00:36:06.361 [2024-07-14 10:44:51.049725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.361 [2024-07-14 10:44:51.049756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.361 qpair failed and we were unable to recover it. 00:36:06.361 [2024-07-14 10:44:51.049945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.361 [2024-07-14 10:44:51.049976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.361 qpair failed and we were unable to recover it. 
00:36:06.361 [2024-07-14 10:44:51.050172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.361 [2024-07-14 10:44:51.050202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.361 qpair failed and we were unable to recover it. 00:36:06.361 [2024-07-14 10:44:51.050407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.361 [2024-07-14 10:44:51.050439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.361 qpair failed and we were unable to recover it. 00:36:06.361 [2024-07-14 10:44:51.050617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.361 [2024-07-14 10:44:51.050648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.361 qpair failed and we were unable to recover it. 00:36:06.361 [2024-07-14 10:44:51.050831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.361 [2024-07-14 10:44:51.050862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.361 qpair failed and we were unable to recover it. 00:36:06.361 [2024-07-14 10:44:51.051054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.361 [2024-07-14 10:44:51.051085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.361 qpair failed and we were unable to recover it. 00:36:06.361 [2024-07-14 10:44:51.051270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.361 [2024-07-14 10:44:51.051303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.361 qpair failed and we were unable to recover it. 00:36:06.361 [2024-07-14 10:44:51.051499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.361 [2024-07-14 10:44:51.051530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.361 qpair failed and we were unable to recover it. 00:36:06.361 [2024-07-14 10:44:51.051795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.361 [2024-07-14 10:44:51.051826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.361 qpair failed and we were unable to recover it. 00:36:06.361 [2024-07-14 10:44:51.051951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.361 [2024-07-14 10:44:51.051983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.361 qpair failed and we were unable to recover it. 00:36:06.361 [2024-07-14 10:44:51.052171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.361 [2024-07-14 10:44:51.052202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.361 qpair failed and we were unable to recover it. 
00:36:06.361 [2024-07-14 10:44:51.052404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.361 [2024-07-14 10:44:51.052435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.361 qpair failed and we were unable to recover it. 00:36:06.361 [2024-07-14 10:44:51.052633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.361 [2024-07-14 10:44:51.052664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.361 qpair failed and we were unable to recover it. 00:36:06.361 [2024-07-14 10:44:51.052910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.361 [2024-07-14 10:44:51.052941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.361 qpair failed and we were unable to recover it. 00:36:06.361 [2024-07-14 10:44:51.053190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.361 [2024-07-14 10:44:51.053220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.361 qpair failed and we were unable to recover it. 00:36:06.361 [2024-07-14 10:44:51.053440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.361 [2024-07-14 10:44:51.053472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.361 qpair failed and we were unable to recover it. 00:36:06.361 [2024-07-14 10:44:51.053594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.361 [2024-07-14 10:44:51.053625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.362 qpair failed and we were unable to recover it. 00:36:06.362 [2024-07-14 10:44:51.053868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.362 [2024-07-14 10:44:51.053899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.362 qpair failed and we were unable to recover it. 00:36:06.362 [2024-07-14 10:44:51.054024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.362 [2024-07-14 10:44:51.054055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.362 qpair failed and we were unable to recover it. 00:36:06.362 [2024-07-14 10:44:51.054267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.362 [2024-07-14 10:44:51.054303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.362 qpair failed and we were unable to recover it. 00:36:06.362 [2024-07-14 10:44:51.054494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.362 [2024-07-14 10:44:51.054525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.362 qpair failed and we were unable to recover it. 
00:36:06.362 [2024-07-14 10:44:51.054761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.362 [2024-07-14 10:44:51.054792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.362 qpair failed and we were unable to recover it. 00:36:06.362 [2024-07-14 10:44:51.054939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.362 [2024-07-14 10:44:51.054969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.362 qpair failed and we were unable to recover it. 00:36:06.362 [2024-07-14 10:44:51.055109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.362 [2024-07-14 10:44:51.055140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.362 qpair failed and we were unable to recover it. 00:36:06.362 [2024-07-14 10:44:51.055271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.362 [2024-07-14 10:44:51.055303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.362 qpair failed and we were unable to recover it. 00:36:06.362 [2024-07-14 10:44:51.055507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.362 [2024-07-14 10:44:51.055538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.362 qpair failed and we were unable to recover it. 00:36:06.362 [2024-07-14 10:44:51.055731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.362 [2024-07-14 10:44:51.055762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.362 qpair failed and we were unable to recover it. 00:36:06.362 [2024-07-14 10:44:51.055873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.362 [2024-07-14 10:44:51.055902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.362 qpair failed and we were unable to recover it. 00:36:06.362 [2024-07-14 10:44:51.056047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.362 [2024-07-14 10:44:51.056078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.362 qpair failed and we were unable to recover it. 00:36:06.362 [2024-07-14 10:44:51.056190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.362 [2024-07-14 10:44:51.056221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.362 qpair failed and we were unable to recover it. 00:36:06.362 [2024-07-14 10:44:51.056433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.362 [2024-07-14 10:44:51.056464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.362 qpair failed and we were unable to recover it. 
00:36:06.362 [2024-07-14 10:44:51.056688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.362 [2024-07-14 10:44:51.056719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.362 qpair failed and we were unable to recover it. 00:36:06.362 [2024-07-14 10:44:51.057004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.362 [2024-07-14 10:44:51.057034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.362 qpair failed and we were unable to recover it. 00:36:06.362 [2024-07-14 10:44:51.057166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.362 [2024-07-14 10:44:51.057197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.362 qpair failed and we were unable to recover it. 00:36:06.362 [2024-07-14 10:44:51.057336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.362 [2024-07-14 10:44:51.057367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.362 qpair failed and we were unable to recover it. 00:36:06.362 [2024-07-14 10:44:51.057560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.362 [2024-07-14 10:44:51.057591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.362 qpair failed and we were unable to recover it. 00:36:06.362 [2024-07-14 10:44:51.057727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.362 [2024-07-14 10:44:51.057758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.362 qpair failed and we were unable to recover it. 00:36:06.362 [2024-07-14 10:44:51.058012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.362 [2024-07-14 10:44:51.058043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.362 qpair failed and we were unable to recover it. 00:36:06.362 [2024-07-14 10:44:51.058243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.362 [2024-07-14 10:44:51.058275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.362 qpair failed and we were unable to recover it. 00:36:06.362 [2024-07-14 10:44:51.058489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.362 [2024-07-14 10:44:51.058520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.362 qpair failed and we were unable to recover it. 00:36:06.362 [2024-07-14 10:44:51.058695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.362 [2024-07-14 10:44:51.058726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.362 qpair failed and we were unable to recover it. 
00:36:06.362 [2024-07-14 10:44:51.058932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.362 [2024-07-14 10:44:51.058963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.362 qpair failed and we were unable to recover it. 00:36:06.362 [2024-07-14 10:44:51.059205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.362 [2024-07-14 10:44:51.059244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.362 qpair failed and we were unable to recover it. 00:36:06.362 [2024-07-14 10:44:51.059522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.362 [2024-07-14 10:44:51.059553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.362 qpair failed and we were unable to recover it. 00:36:06.362 [2024-07-14 10:44:51.059767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.362 [2024-07-14 10:44:51.059798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.362 qpair failed and we were unable to recover it. 00:36:06.362 [2024-07-14 10:44:51.059909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.362 [2024-07-14 10:44:51.059941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.362 qpair failed and we were unable to recover it. 00:36:06.362 [2024-07-14 10:44:51.060139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.362 [2024-07-14 10:44:51.060171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.362 qpair failed and we were unable to recover it. 00:36:06.362 [2024-07-14 10:44:51.060475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.362 [2024-07-14 10:44:51.060507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.362 qpair failed and we were unable to recover it. 00:36:06.362 [2024-07-14 10:44:51.060771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.362 [2024-07-14 10:44:51.060802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.362 qpair failed and we were unable to recover it. 00:36:06.362 [2024-07-14 10:44:51.061082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.362 [2024-07-14 10:44:51.061112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.362 qpair failed and we were unable to recover it. 00:36:06.362 [2024-07-14 10:44:51.061319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.362 [2024-07-14 10:44:51.061351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.362 qpair failed and we were unable to recover it. 
00:36:06.362 [2024-07-14 10:44:51.061477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.362 [2024-07-14 10:44:51.061508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.362 qpair failed and we were unable to recover it. 00:36:06.362 [2024-07-14 10:44:51.061719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.362 [2024-07-14 10:44:51.061751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.362 qpair failed and we were unable to recover it. 00:36:06.362 [2024-07-14 10:44:51.061996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.362 [2024-07-14 10:44:51.062027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.362 qpair failed and we were unable to recover it. 00:36:06.362 [2024-07-14 10:44:51.062322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.362 [2024-07-14 10:44:51.062355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.362 qpair failed and we were unable to recover it. 00:36:06.362 [2024-07-14 10:44:51.062485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.362 [2024-07-14 10:44:51.062515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.362 qpair failed and we were unable to recover it. 00:36:06.362 [2024-07-14 10:44:51.062758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.362 [2024-07-14 10:44:51.062790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.362 qpair failed and we were unable to recover it. 00:36:06.362 [2024-07-14 10:44:51.062905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.363 [2024-07-14 10:44:51.062937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.363 qpair failed and we were unable to recover it. 00:36:06.363 [2024-07-14 10:44:51.063130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.363 [2024-07-14 10:44:51.063162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.363 qpair failed and we were unable to recover it. 00:36:06.363 [2024-07-14 10:44:51.063347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.363 [2024-07-14 10:44:51.063379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.363 qpair failed and we were unable to recover it. 00:36:06.363 [2024-07-14 10:44:51.063574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.363 [2024-07-14 10:44:51.063604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.363 qpair failed and we were unable to recover it. 
00:36:06.363 [2024-07-14 10:44:51.063777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.363 [2024-07-14 10:44:51.063809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.363 qpair failed and we were unable to recover it. 00:36:06.363 [2024-07-14 10:44:51.064052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.363 [2024-07-14 10:44:51.064083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.363 qpair failed and we were unable to recover it. 00:36:06.363 [2024-07-14 10:44:51.064264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.363 [2024-07-14 10:44:51.064296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.363 qpair failed and we were unable to recover it. 00:36:06.363 [2024-07-14 10:44:51.064536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.363 [2024-07-14 10:44:51.064568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.363 qpair failed and we were unable to recover it. 00:36:06.363 [2024-07-14 10:44:51.064809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.363 [2024-07-14 10:44:51.064840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.363 qpair failed and we were unable to recover it. 00:36:06.363 [2024-07-14 10:44:51.065024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.363 [2024-07-14 10:44:51.065055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.363 qpair failed and we were unable to recover it. 00:36:06.363 [2024-07-14 10:44:51.065245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.363 [2024-07-14 10:44:51.065277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.363 qpair failed and we were unable to recover it. 00:36:06.363 [2024-07-14 10:44:51.065394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.363 [2024-07-14 10:44:51.065426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.363 qpair failed and we were unable to recover it. 00:36:06.363 [2024-07-14 10:44:51.065613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.363 [2024-07-14 10:44:51.065644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.363 qpair failed and we were unable to recover it. 00:36:06.363 [2024-07-14 10:44:51.065769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.363 [2024-07-14 10:44:51.065800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.363 qpair failed and we were unable to recover it. 
00:36:06.363 [2024-07-14 10:44:51.065909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.363 [2024-07-14 10:44:51.065940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.363 qpair failed and we were unable to recover it. 00:36:06.363 [2024-07-14 10:44:51.066061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.363 [2024-07-14 10:44:51.066092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.363 qpair failed and we were unable to recover it. 00:36:06.363 [2024-07-14 10:44:51.066223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.363 [2024-07-14 10:44:51.066262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.363 qpair failed and we were unable to recover it. 00:36:06.363 [2024-07-14 10:44:51.066386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.363 [2024-07-14 10:44:51.066417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.363 qpair failed and we were unable to recover it. 00:36:06.363 [2024-07-14 10:44:51.066607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.363 [2024-07-14 10:44:51.066638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.363 qpair failed and we were unable to recover it. 00:36:06.363 [2024-07-14 10:44:51.066814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.363 [2024-07-14 10:44:51.066845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.363 qpair failed and we were unable to recover it. 00:36:06.363 [2024-07-14 10:44:51.067107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.363 [2024-07-14 10:44:51.067138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.363 qpair failed and we were unable to recover it. 00:36:06.363 [2024-07-14 10:44:51.067325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.363 [2024-07-14 10:44:51.067357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.363 qpair failed and we were unable to recover it. 00:36:06.363 [2024-07-14 10:44:51.067489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.363 [2024-07-14 10:44:51.067521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.363 qpair failed and we were unable to recover it. 00:36:06.363 [2024-07-14 10:44:51.067641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.363 [2024-07-14 10:44:51.067673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.363 qpair failed and we were unable to recover it. 
00:36:06.363 [2024-07-14 10:44:51.067868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.363 [2024-07-14 10:44:51.067899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.363 qpair failed and we were unable to recover it. 00:36:06.363 [2024-07-14 10:44:51.068165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.363 [2024-07-14 10:44:51.068196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.363 qpair failed and we were unable to recover it. 00:36:06.363 [2024-07-14 10:44:51.068378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.363 [2024-07-14 10:44:51.068410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.363 qpair failed and we were unable to recover it. 00:36:06.363 [2024-07-14 10:44:51.068622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.363 [2024-07-14 10:44:51.068653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.363 qpair failed and we were unable to recover it. 00:36:06.363 [2024-07-14 10:44:51.068781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.363 [2024-07-14 10:44:51.068812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.363 qpair failed and we were unable to recover it. 00:36:06.363 [2024-07-14 10:44:51.069000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.363 [2024-07-14 10:44:51.069036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.363 qpair failed and we were unable to recover it. 00:36:06.363 [2024-07-14 10:44:51.069319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.363 [2024-07-14 10:44:51.069352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.363 qpair failed and we were unable to recover it. 00:36:06.363 [2024-07-14 10:44:51.069558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.363 [2024-07-14 10:44:51.069590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.363 qpair failed and we were unable to recover it. 00:36:06.363 [2024-07-14 10:44:51.069838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.363 [2024-07-14 10:44:51.069869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.363 qpair failed and we were unable to recover it. 00:36:06.363 [2024-07-14 10:44:51.070010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.363 [2024-07-14 10:44:51.070041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.363 qpair failed and we were unable to recover it. 
00:36:06.363 [2024-07-14 10:44:51.070251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.363 [2024-07-14 10:44:51.070283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.364 qpair failed and we were unable to recover it. 00:36:06.364 [2024-07-14 10:44:51.070415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.364 [2024-07-14 10:44:51.070446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.364 qpair failed and we were unable to recover it. 00:36:06.364 [2024-07-14 10:44:51.070585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.364 [2024-07-14 10:44:51.070616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.364 qpair failed and we were unable to recover it. 00:36:06.364 [2024-07-14 10:44:51.070805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.364 [2024-07-14 10:44:51.070836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.364 qpair failed and we were unable to recover it. 00:36:06.364 [2024-07-14 10:44:51.071100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.364 [2024-07-14 10:44:51.071130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.364 qpair failed and we were unable to recover it. 00:36:06.364 [2024-07-14 10:44:51.071306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.364 [2024-07-14 10:44:51.071338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.364 qpair failed and we were unable to recover it. 00:36:06.364 [2024-07-14 10:44:51.071453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.364 [2024-07-14 10:44:51.071483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.364 qpair failed and we were unable to recover it. 00:36:06.364 [2024-07-14 10:44:51.071657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.364 [2024-07-14 10:44:51.071688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.364 qpair failed and we were unable to recover it. 00:36:06.364 [2024-07-14 10:44:51.071935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.364 [2024-07-14 10:44:51.071966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.364 qpair failed and we were unable to recover it. 00:36:06.364 [2024-07-14 10:44:51.072152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.364 [2024-07-14 10:44:51.072183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.364 qpair failed and we were unable to recover it. 
00:36:06.364 [2024-07-14 10:44:51.072319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.364 [2024-07-14 10:44:51.072352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.364 qpair failed and we were unable to recover it. 00:36:06.364 [2024-07-14 10:44:51.072477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.364 [2024-07-14 10:44:51.072508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.364 qpair failed and we were unable to recover it. 00:36:06.364 [2024-07-14 10:44:51.072755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.364 [2024-07-14 10:44:51.072786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.364 qpair failed and we were unable to recover it. 00:36:06.364 [2024-07-14 10:44:51.072893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.364 [2024-07-14 10:44:51.072924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.364 qpair failed and we were unable to recover it. 00:36:06.364 [2024-07-14 10:44:51.073190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.364 [2024-07-14 10:44:51.073221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.364 qpair failed and we were unable to recover it. 00:36:06.364 [2024-07-14 10:44:51.073413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.364 [2024-07-14 10:44:51.073444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.364 qpair failed and we were unable to recover it. 00:36:06.364 [2024-07-14 10:44:51.073710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.364 [2024-07-14 10:44:51.073740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.364 qpair failed and we were unable to recover it. 00:36:06.364 [2024-07-14 10:44:51.074001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.364 [2024-07-14 10:44:51.074033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.364 qpair failed and we were unable to recover it. 00:36:06.364 [2024-07-14 10:44:51.074276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.364 [2024-07-14 10:44:51.074309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.364 qpair failed and we were unable to recover it. 00:36:06.364 [2024-07-14 10:44:51.074566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.364 [2024-07-14 10:44:51.074597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.364 qpair failed and we were unable to recover it. 
00:36:06.364 [2024-07-14 10:44:51.074784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.364 [2024-07-14 10:44:51.074815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.364 qpair failed and we were unable to recover it. 00:36:06.364 [2024-07-14 10:44:51.075030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.364 [2024-07-14 10:44:51.075060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.364 qpair failed and we were unable to recover it. 00:36:06.364 [2024-07-14 10:44:51.075280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.364 [2024-07-14 10:44:51.075313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.364 qpair failed and we were unable to recover it. 00:36:06.364 [2024-07-14 10:44:51.075491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.364 [2024-07-14 10:44:51.075522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.364 qpair failed and we were unable to recover it. 00:36:06.364 [2024-07-14 10:44:51.075729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.364 [2024-07-14 10:44:51.075760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.364 qpair failed and we were unable to recover it. 00:36:06.364 [2024-07-14 10:44:51.075981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.364 [2024-07-14 10:44:51.076012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.364 qpair failed and we were unable to recover it. 00:36:06.364 [2024-07-14 10:44:51.076258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.364 [2024-07-14 10:44:51.076289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.364 qpair failed and we were unable to recover it. 00:36:06.364 [2024-07-14 10:44:51.076486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.364 [2024-07-14 10:44:51.076517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.364 qpair failed and we were unable to recover it. 00:36:06.364 [2024-07-14 10:44:51.076739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.364 [2024-07-14 10:44:51.076770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.364 qpair failed and we were unable to recover it. 00:36:06.364 [2024-07-14 10:44:51.076953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.364 [2024-07-14 10:44:51.076985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.364 qpair failed and we were unable to recover it. 
00:36:06.364 [2024-07-14 10:44:51.077185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.364 [2024-07-14 10:44:51.077215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.364 qpair failed and we were unable to recover it. 00:36:06.364 [2024-07-14 10:44:51.077411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.364 [2024-07-14 10:44:51.077441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.364 qpair failed and we were unable to recover it. 00:36:06.364 [2024-07-14 10:44:51.077572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.364 [2024-07-14 10:44:51.077603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.364 qpair failed and we were unable to recover it. 00:36:06.364 [2024-07-14 10:44:51.077845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.364 [2024-07-14 10:44:51.077876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.364 qpair failed and we were unable to recover it. 00:36:06.364 [2024-07-14 10:44:51.077991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.364 [2024-07-14 10:44:51.078022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.364 qpair failed and we were unable to recover it. 00:36:06.364 [2024-07-14 10:44:51.078144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.364 [2024-07-14 10:44:51.078180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.364 qpair failed and we were unable to recover it. 00:36:06.364 [2024-07-14 10:44:51.078386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.364 [2024-07-14 10:44:51.078418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.364 qpair failed and we were unable to recover it. 00:36:06.364 [2024-07-14 10:44:51.078656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.364 [2024-07-14 10:44:51.078687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.364 qpair failed and we were unable to recover it. 00:36:06.364 [2024-07-14 10:44:51.078820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.364 [2024-07-14 10:44:51.078851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.364 qpair failed and we were unable to recover it. 00:36:06.364 [2024-07-14 10:44:51.079093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.364 [2024-07-14 10:44:51.079125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.364 qpair failed and we were unable to recover it. 
00:36:06.364 [2024-07-14 10:44:51.079312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.364 [2024-07-14 10:44:51.079345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.364 qpair failed and we were unable to recover it. 00:36:06.364 [2024-07-14 10:44:51.079518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.365 [2024-07-14 10:44:51.079548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.365 qpair failed and we were unable to recover it. 00:36:06.365 [2024-07-14 10:44:51.079791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.365 [2024-07-14 10:44:51.079822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.365 qpair failed and we were unable to recover it. 00:36:06.365 [2024-07-14 10:44:51.080016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.365 [2024-07-14 10:44:51.080047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.365 qpair failed and we were unable to recover it. 00:36:06.365 [2024-07-14 10:44:51.080264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.365 [2024-07-14 10:44:51.080297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.365 qpair failed and we were unable to recover it. 00:36:06.365 [2024-07-14 10:44:51.080433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.365 [2024-07-14 10:44:51.080464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.365 qpair failed and we were unable to recover it. 00:36:06.365 [2024-07-14 10:44:51.080718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.365 [2024-07-14 10:44:51.080748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.365 qpair failed and we were unable to recover it. 00:36:06.365 [2024-07-14 10:44:51.080991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.365 [2024-07-14 10:44:51.081021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.365 qpair failed and we were unable to recover it. 00:36:06.365 [2024-07-14 10:44:51.081205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.365 [2024-07-14 10:44:51.081243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.365 qpair failed and we were unable to recover it. 00:36:06.365 [2024-07-14 10:44:51.081492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.365 [2024-07-14 10:44:51.081523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.365 qpair failed and we were unable to recover it. 
00:36:06.365 [2024-07-14 10:44:51.081646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.365 [2024-07-14 10:44:51.081675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.365 qpair failed and we were unable to recover it. 00:36:06.365 [2024-07-14 10:44:51.081915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.365 [2024-07-14 10:44:51.081946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.365 qpair failed and we were unable to recover it. 00:36:06.365 [2024-07-14 10:44:51.082147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.365 [2024-07-14 10:44:51.082177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.365 qpair failed and we were unable to recover it. 00:36:06.365 [2024-07-14 10:44:51.082374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.365 [2024-07-14 10:44:51.082406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.365 qpair failed and we were unable to recover it. 00:36:06.365 [2024-07-14 10:44:51.082611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.365 [2024-07-14 10:44:51.082642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.365 qpair failed and we were unable to recover it. 00:36:06.365 [2024-07-14 10:44:51.082817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.365 [2024-07-14 10:44:51.082848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.365 qpair failed and we were unable to recover it. 00:36:06.365 [2024-07-14 10:44:51.083114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.365 [2024-07-14 10:44:51.083145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.365 qpair failed and we were unable to recover it. 00:36:06.365 [2024-07-14 10:44:51.083274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.365 [2024-07-14 10:44:51.083305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.365 qpair failed and we were unable to recover it. 00:36:06.365 [2024-07-14 10:44:51.083485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.365 [2024-07-14 10:44:51.083516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.365 qpair failed and we were unable to recover it. 00:36:06.365 [2024-07-14 10:44:51.083761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.365 [2024-07-14 10:44:51.083791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.365 qpair failed and we were unable to recover it. 
00:36:06.365 [2024-07-14 10:44:51.083934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.365 [2024-07-14 10:44:51.083965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.365 qpair failed and we were unable to recover it. 00:36:06.365 [2024-07-14 10:44:51.084151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.365 [2024-07-14 10:44:51.084183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.365 qpair failed and we were unable to recover it. 00:36:06.365 [2024-07-14 10:44:51.084398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.365 [2024-07-14 10:44:51.084430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.365 qpair failed and we were unable to recover it. 00:36:06.365 [2024-07-14 10:44:51.084603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.365 [2024-07-14 10:44:51.084634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.365 qpair failed and we were unable to recover it. 00:36:06.365 [2024-07-14 10:44:51.084761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.365 [2024-07-14 10:44:51.084792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.365 qpair failed and we were unable to recover it. 00:36:06.365 [2024-07-14 10:44:51.084970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.365 [2024-07-14 10:44:51.085000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.365 qpair failed and we were unable to recover it. 00:36:06.365 [2024-07-14 10:44:51.085179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.365 [2024-07-14 10:44:51.085210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.365 qpair failed and we were unable to recover it. 00:36:06.365 [2024-07-14 10:44:51.085341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.365 [2024-07-14 10:44:51.085373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.365 qpair failed and we were unable to recover it. 00:36:06.365 [2024-07-14 10:44:51.085513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.365 [2024-07-14 10:44:51.085543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.365 qpair failed and we were unable to recover it. 00:36:06.365 [2024-07-14 10:44:51.085751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.365 [2024-07-14 10:44:51.085782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.365 qpair failed and we were unable to recover it. 
00:36:06.365 [2024-07-14 10:44:51.085892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.365 [2024-07-14 10:44:51.085923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.365 qpair failed and we were unable to recover it. 00:36:06.365 [2024-07-14 10:44:51.086062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.365 [2024-07-14 10:44:51.086093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.365 qpair failed and we were unable to recover it. 00:36:06.365 [2024-07-14 10:44:51.086337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.365 [2024-07-14 10:44:51.086368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.365 qpair failed and we were unable to recover it. 00:36:06.365 [2024-07-14 10:44:51.086559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.365 [2024-07-14 10:44:51.086589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.365 qpair failed and we were unable to recover it. 00:36:06.365 [2024-07-14 10:44:51.086789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.365 [2024-07-14 10:44:51.086820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.365 qpair failed and we were unable to recover it. 00:36:06.365 [2024-07-14 10:44:51.087011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.365 [2024-07-14 10:44:51.087047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.365 qpair failed and we were unable to recover it. 00:36:06.365 [2024-07-14 10:44:51.087194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.365 [2024-07-14 10:44:51.087234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.365 qpair failed and we were unable to recover it. 00:36:06.365 [2024-07-14 10:44:51.087420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.365 [2024-07-14 10:44:51.087452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.365 qpair failed and we were unable to recover it. 00:36:06.365 [2024-07-14 10:44:51.087663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.365 [2024-07-14 10:44:51.087694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.365 qpair failed and we were unable to recover it. 00:36:06.365 [2024-07-14 10:44:51.087883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.365 [2024-07-14 10:44:51.087914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.365 qpair failed and we were unable to recover it. 
00:36:06.365 [2024-07-14 10:44:51.088203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.365 [2024-07-14 10:44:51.088258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420
00:36:06.365 qpair failed and we were unable to recover it.
00:36:06.369 [2024-07-14 10:44:51.116448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.369 [2024-07-14 10:44:51.116519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420
00:36:06.369 qpair failed and we were unable to recover it.
00:36:06.369 [2024-07-14 10:44:51.116727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.369 [2024-07-14 10:44:51.116761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.369 qpair failed and we were unable to recover it. 00:36:06.369 [2024-07-14 10:44:51.117006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.369 [2024-07-14 10:44:51.117038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.369 qpair failed and we were unable to recover it. 00:36:06.369 [2024-07-14 10:44:51.117320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.369 [2024-07-14 10:44:51.117353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.369 qpair failed and we were unable to recover it. 00:36:06.369 [2024-07-14 10:44:51.117466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.369 [2024-07-14 10:44:51.117497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.369 qpair failed and we were unable to recover it. 00:36:06.369 [2024-07-14 10:44:51.117625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.369 [2024-07-14 10:44:51.117656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.369 qpair failed and we were unable to recover it. 00:36:06.369 [2024-07-14 10:44:51.117852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.369 [2024-07-14 10:44:51.117882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.369 qpair failed and we were unable to recover it. 00:36:06.369 [2024-07-14 10:44:51.118095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.369 [2024-07-14 10:44:51.118126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.369 qpair failed and we were unable to recover it. 00:36:06.369 [2024-07-14 10:44:51.118321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.369 [2024-07-14 10:44:51.118353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.369 qpair failed and we were unable to recover it. 00:36:06.369 [2024-07-14 10:44:51.118548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.369 [2024-07-14 10:44:51.118579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.369 qpair failed and we were unable to recover it. 00:36:06.369 [2024-07-14 10:44:51.118768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.369 [2024-07-14 10:44:51.118799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.369 qpair failed and we were unable to recover it. 
00:36:06.369 [2024-07-14 10:44:51.118918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.369 [2024-07-14 10:44:51.118949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.369 qpair failed and we were unable to recover it. 00:36:06.369 [2024-07-14 10:44:51.119086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.369 [2024-07-14 10:44:51.119118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.369 qpair failed and we were unable to recover it. 00:36:06.369 [2024-07-14 10:44:51.119313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.369 [2024-07-14 10:44:51.119357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.369 qpair failed and we were unable to recover it. 00:36:06.369 [2024-07-14 10:44:51.119538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.369 [2024-07-14 10:44:51.119568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.369 qpair failed and we were unable to recover it. 00:36:06.369 [2024-07-14 10:44:51.119781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.369 [2024-07-14 10:44:51.119812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.369 qpair failed and we were unable to recover it. 00:36:06.369 [2024-07-14 10:44:51.120004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.369 [2024-07-14 10:44:51.120035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.369 qpair failed and we were unable to recover it. 00:36:06.369 [2024-07-14 10:44:51.120271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.369 [2024-07-14 10:44:51.120303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.369 qpair failed and we were unable to recover it. 00:36:06.369 [2024-07-14 10:44:51.120487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.369 [2024-07-14 10:44:51.120517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.369 qpair failed and we were unable to recover it. 00:36:06.369 [2024-07-14 10:44:51.120648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.369 [2024-07-14 10:44:51.120680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.369 qpair failed and we were unable to recover it. 00:36:06.369 [2024-07-14 10:44:51.120875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.369 [2024-07-14 10:44:51.120906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.369 qpair failed and we were unable to recover it. 
00:36:06.369 [2024-07-14 10:44:51.121019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.369 [2024-07-14 10:44:51.121049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.369 qpair failed and we were unable to recover it. 00:36:06.369 [2024-07-14 10:44:51.121293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.369 [2024-07-14 10:44:51.121326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.369 qpair failed and we were unable to recover it. 00:36:06.369 [2024-07-14 10:44:51.121438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.369 [2024-07-14 10:44:51.121469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.369 qpair failed and we were unable to recover it. 00:36:06.369 [2024-07-14 10:44:51.121659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.369 [2024-07-14 10:44:51.121690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.369 qpair failed and we were unable to recover it. 00:36:06.369 [2024-07-14 10:44:51.121900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.369 [2024-07-14 10:44:51.121931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.369 qpair failed and we were unable to recover it. 00:36:06.369 [2024-07-14 10:44:51.122128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.369 [2024-07-14 10:44:51.122159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.369 qpair failed and we were unable to recover it. 00:36:06.369 [2024-07-14 10:44:51.122301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.369 [2024-07-14 10:44:51.122333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.369 qpair failed and we were unable to recover it. 00:36:06.369 [2024-07-14 10:44:51.122467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.369 [2024-07-14 10:44:51.122498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.369 qpair failed and we were unable to recover it. 00:36:06.369 [2024-07-14 10:44:51.122703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.369 [2024-07-14 10:44:51.122734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.369 qpair failed and we were unable to recover it. 00:36:06.369 [2024-07-14 10:44:51.122930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.369 [2024-07-14 10:44:51.122961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.369 qpair failed and we were unable to recover it. 
00:36:06.369 [2024-07-14 10:44:51.123152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.369 [2024-07-14 10:44:51.123183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.369 qpair failed and we were unable to recover it. 00:36:06.370 [2024-07-14 10:44:51.123325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.370 [2024-07-14 10:44:51.123360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.370 qpair failed and we were unable to recover it. 00:36:06.370 [2024-07-14 10:44:51.123650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.370 [2024-07-14 10:44:51.123680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.370 qpair failed and we were unable to recover it. 00:36:06.370 [2024-07-14 10:44:51.123820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.370 [2024-07-14 10:44:51.123851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.370 qpair failed and we were unable to recover it. 00:36:06.370 [2024-07-14 10:44:51.123971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.370 [2024-07-14 10:44:51.124002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.370 qpair failed and we were unable to recover it. 00:36:06.370 [2024-07-14 10:44:51.124212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.370 [2024-07-14 10:44:51.124257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.370 qpair failed and we were unable to recover it. 00:36:06.370 [2024-07-14 10:44:51.124435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.370 [2024-07-14 10:44:51.124467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.370 qpair failed and we were unable to recover it. 00:36:06.370 [2024-07-14 10:44:51.124673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.370 [2024-07-14 10:44:51.124704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.370 qpair failed and we were unable to recover it. 00:36:06.370 [2024-07-14 10:44:51.124897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.370 [2024-07-14 10:44:51.124929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.370 qpair failed and we were unable to recover it. 00:36:06.370 [2024-07-14 10:44:51.125114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.370 [2024-07-14 10:44:51.125150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.370 qpair failed and we were unable to recover it. 
00:36:06.370 [2024-07-14 10:44:51.125395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.370 [2024-07-14 10:44:51.125428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.370 qpair failed and we were unable to recover it. 00:36:06.370 [2024-07-14 10:44:51.125616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.370 [2024-07-14 10:44:51.125647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.370 qpair failed and we were unable to recover it. 00:36:06.370 [2024-07-14 10:44:51.125832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.370 [2024-07-14 10:44:51.125863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.370 qpair failed and we were unable to recover it. 00:36:06.370 [2024-07-14 10:44:51.126052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.370 [2024-07-14 10:44:51.126083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.370 qpair failed and we were unable to recover it. 00:36:06.370 [2024-07-14 10:44:51.126208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.370 [2024-07-14 10:44:51.126248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.370 qpair failed and we were unable to recover it. 00:36:06.370 [2024-07-14 10:44:51.126397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.370 [2024-07-14 10:44:51.126429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.370 qpair failed and we were unable to recover it. 00:36:06.370 [2024-07-14 10:44:51.126694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.370 [2024-07-14 10:44:51.126726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.370 qpair failed and we were unable to recover it. 00:36:06.370 [2024-07-14 10:44:51.126920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.370 [2024-07-14 10:44:51.126951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.370 qpair failed and we were unable to recover it. 00:36:06.370 [2024-07-14 10:44:51.127209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.370 [2024-07-14 10:44:51.127257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.370 qpair failed and we were unable to recover it. 00:36:06.370 [2024-07-14 10:44:51.127459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.370 [2024-07-14 10:44:51.127491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.370 qpair failed and we were unable to recover it. 
00:36:06.370 [2024-07-14 10:44:51.127734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.370 [2024-07-14 10:44:51.127766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.370 qpair failed and we were unable to recover it. 00:36:06.370 [2024-07-14 10:44:51.127891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.370 [2024-07-14 10:44:51.127922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.370 qpair failed and we were unable to recover it. 00:36:06.370 [2024-07-14 10:44:51.128106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.370 [2024-07-14 10:44:51.128137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.370 qpair failed and we were unable to recover it. 00:36:06.370 [2024-07-14 10:44:51.128292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.370 [2024-07-14 10:44:51.128324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.370 qpair failed and we were unable to recover it. 00:36:06.370 [2024-07-14 10:44:51.128572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.370 [2024-07-14 10:44:51.128604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.370 qpair failed and we were unable to recover it. 00:36:06.370 [2024-07-14 10:44:51.128786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.370 [2024-07-14 10:44:51.128816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.370 qpair failed and we were unable to recover it. 00:36:06.370 [2024-07-14 10:44:51.129061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.370 [2024-07-14 10:44:51.129091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.370 qpair failed and we were unable to recover it. 00:36:06.370 [2024-07-14 10:44:51.129197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.370 [2024-07-14 10:44:51.129239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.370 qpair failed and we were unable to recover it. 00:36:06.370 [2024-07-14 10:44:51.129485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.370 [2024-07-14 10:44:51.129516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.370 qpair failed and we were unable to recover it. 00:36:06.370 [2024-07-14 10:44:51.129784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.370 [2024-07-14 10:44:51.129815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.370 qpair failed and we were unable to recover it. 
00:36:06.370 [2024-07-14 10:44:51.130010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.370 [2024-07-14 10:44:51.130041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.370 qpair failed and we were unable to recover it. 00:36:06.370 [2024-07-14 10:44:51.130251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.370 [2024-07-14 10:44:51.130283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.370 qpair failed and we were unable to recover it. 00:36:06.370 [2024-07-14 10:44:51.130462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.370 [2024-07-14 10:44:51.130493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.370 qpair failed and we were unable to recover it. 00:36:06.370 [2024-07-14 10:44:51.130605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.370 [2024-07-14 10:44:51.130637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.370 qpair failed and we were unable to recover it. 00:36:06.370 [2024-07-14 10:44:51.130835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.370 [2024-07-14 10:44:51.130866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.370 qpair failed and we were unable to recover it. 00:36:06.370 [2024-07-14 10:44:51.131051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.370 [2024-07-14 10:44:51.131082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.370 qpair failed and we were unable to recover it. 00:36:06.370 [2024-07-14 10:44:51.131263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.370 [2024-07-14 10:44:51.131304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.370 qpair failed and we were unable to recover it. 00:36:06.370 [2024-07-14 10:44:51.131511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.370 [2024-07-14 10:44:51.131543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.370 qpair failed and we were unable to recover it. 00:36:06.370 [2024-07-14 10:44:51.131669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.370 [2024-07-14 10:44:51.131700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.370 qpair failed and we were unable to recover it. 00:36:06.370 [2024-07-14 10:44:51.131894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.370 [2024-07-14 10:44:51.131926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.370 qpair failed and we were unable to recover it. 
00:36:06.370 [2024-07-14 10:44:51.132058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.370 [2024-07-14 10:44:51.132088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.370 qpair failed and we were unable to recover it. 00:36:06.370 [2024-07-14 10:44:51.132277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.370 [2024-07-14 10:44:51.132308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.370 qpair failed and we were unable to recover it. 00:36:06.370 [2024-07-14 10:44:51.132423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.371 [2024-07-14 10:44:51.132454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.371 qpair failed and we were unable to recover it. 00:36:06.371 [2024-07-14 10:44:51.132657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.371 [2024-07-14 10:44:51.132688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.371 qpair failed and we were unable to recover it. 00:36:06.371 [2024-07-14 10:44:51.132915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.371 [2024-07-14 10:44:51.132946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.371 qpair failed and we were unable to recover it. 00:36:06.371 [2024-07-14 10:44:51.133073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.371 [2024-07-14 10:44:51.133104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.371 qpair failed and we were unable to recover it. 00:36:06.371 [2024-07-14 10:44:51.133305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.371 [2024-07-14 10:44:51.133337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.371 qpair failed and we were unable to recover it. 00:36:06.371 [2024-07-14 10:44:51.133459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.371 [2024-07-14 10:44:51.133490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.371 qpair failed and we were unable to recover it. 00:36:06.371 [2024-07-14 10:44:51.133694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.371 [2024-07-14 10:44:51.133725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.371 qpair failed and we were unable to recover it. 00:36:06.371 [2024-07-14 10:44:51.133867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.371 [2024-07-14 10:44:51.133898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.371 qpair failed and we were unable to recover it. 
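Note on the entries above: errno = 111 is ECONNREFUSED on Linux, meaning every TCP connect() to 10.0.0.2:4420 (the registered NVMe/TCP target port) was actively refused because nothing was accepting connections there during this window. From the next block onward the same refusal is reported against tqpair=0x7fbe7c000b90 instead of 0x1b1fb60, with the target address and port unchanged.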
00:36:06.371 [2024-07-14 10:44:51.134217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.371 [2024-07-14 10:44:51.134302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.371 qpair failed and we were unable to recover it. 00:36:06.371 [2024-07-14 10:44:51.134501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.371 [2024-07-14 10:44:51.134535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.371 qpair failed and we were unable to recover it. 00:36:06.371 [2024-07-14 10:44:51.134732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.371 [2024-07-14 10:44:51.134765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.371 qpair failed and we were unable to recover it. 00:36:06.371 [2024-07-14 10:44:51.134954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.371 [2024-07-14 10:44:51.134986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.371 qpair failed and we were unable to recover it. 00:36:06.371 [2024-07-14 10:44:51.135191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.371 [2024-07-14 10:44:51.135223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.371 qpair failed and we were unable to recover it. 00:36:06.371 [2024-07-14 10:44:51.135436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.371 [2024-07-14 10:44:51.135468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.371 qpair failed and we were unable to recover it. 00:36:06.371 [2024-07-14 10:44:51.135684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.371 [2024-07-14 10:44:51.135715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.371 qpair failed and we were unable to recover it. 00:36:06.371 [2024-07-14 10:44:51.135914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.371 [2024-07-14 10:44:51.135945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.371 qpair failed and we were unable to recover it. 00:36:06.371 [2024-07-14 10:44:51.136064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.371 [2024-07-14 10:44:51.136095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.371 qpair failed and we were unable to recover it. 00:36:06.371 [2024-07-14 10:44:51.136348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.371 [2024-07-14 10:44:51.136381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.371 qpair failed and we were unable to recover it. 
00:36:06.371 [2024-07-14 10:44:51.136567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.371 [2024-07-14 10:44:51.136598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.371 qpair failed and we were unable to recover it. 00:36:06.371 [2024-07-14 10:44:51.136790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.371 [2024-07-14 10:44:51.136821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.371 qpair failed and we were unable to recover it. 00:36:06.371 [2024-07-14 10:44:51.137012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.371 [2024-07-14 10:44:51.137043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.371 qpair failed and we were unable to recover it. 00:36:06.371 [2024-07-14 10:44:51.137222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.371 [2024-07-14 10:44:51.137271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.371 qpair failed and we were unable to recover it. 00:36:06.371 [2024-07-14 10:44:51.137462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.371 [2024-07-14 10:44:51.137493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.371 qpair failed and we were unable to recover it. 00:36:06.371 [2024-07-14 10:44:51.137678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.371 [2024-07-14 10:44:51.137709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.371 qpair failed and we were unable to recover it. 00:36:06.371 [2024-07-14 10:44:51.137895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.371 [2024-07-14 10:44:51.137926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.371 qpair failed and we were unable to recover it. 00:36:06.371 [2024-07-14 10:44:51.138120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.371 [2024-07-14 10:44:51.138152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.371 qpair failed and we were unable to recover it. 00:36:06.371 [2024-07-14 10:44:51.138343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.371 [2024-07-14 10:44:51.138375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.371 qpair failed and we were unable to recover it. 00:36:06.371 [2024-07-14 10:44:51.138553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.371 [2024-07-14 10:44:51.138584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.371 qpair failed and we were unable to recover it. 
00:36:06.371 [2024-07-14 10:44:51.138716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.371 [2024-07-14 10:44:51.138748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.371 qpair failed and we were unable to recover it. 00:36:06.371 [2024-07-14 10:44:51.138883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.371 [2024-07-14 10:44:51.138913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.371 qpair failed and we were unable to recover it. 00:36:06.371 [2024-07-14 10:44:51.139130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.371 [2024-07-14 10:44:51.139161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.371 qpair failed and we were unable to recover it. 00:36:06.371 [2024-07-14 10:44:51.139296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.371 [2024-07-14 10:44:51.139327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.371 qpair failed and we were unable to recover it. 00:36:06.371 [2024-07-14 10:44:51.139457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.371 [2024-07-14 10:44:51.139489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.371 qpair failed and we were unable to recover it. 00:36:06.371 [2024-07-14 10:44:51.139616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.371 [2024-07-14 10:44:51.139647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.371 qpair failed and we were unable to recover it. 00:36:06.371 [2024-07-14 10:44:51.139823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.371 [2024-07-14 10:44:51.139854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.371 qpair failed and we were unable to recover it. 00:36:06.371 [2024-07-14 10:44:51.140127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.371 [2024-07-14 10:44:51.140158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.371 qpair failed and we were unable to recover it. 00:36:06.371 [2024-07-14 10:44:51.140295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.371 [2024-07-14 10:44:51.140329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.371 qpair failed and we were unable to recover it. 00:36:06.371 [2024-07-14 10:44:51.140462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.371 [2024-07-14 10:44:51.140493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.371 qpair failed and we were unable to recover it. 
00:36:06.371 [2024-07-14 10:44:51.140627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.371 [2024-07-14 10:44:51.140658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.371 qpair failed and we were unable to recover it. 00:36:06.371 [2024-07-14 10:44:51.140863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.371 [2024-07-14 10:44:51.140895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.371 qpair failed and we were unable to recover it. 00:36:06.371 [2024-07-14 10:44:51.141177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.371 [2024-07-14 10:44:51.141208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.371 qpair failed and we were unable to recover it. 00:36:06.371 [2024-07-14 10:44:51.141355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.372 [2024-07-14 10:44:51.141387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.372 qpair failed and we were unable to recover it. 00:36:06.372 [2024-07-14 10:44:51.141578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.372 [2024-07-14 10:44:51.141609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.372 qpair failed and we were unable to recover it. 00:36:06.372 [2024-07-14 10:44:51.141887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.372 [2024-07-14 10:44:51.141918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.372 qpair failed and we were unable to recover it. 00:36:06.372 [2024-07-14 10:44:51.142114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.372 [2024-07-14 10:44:51.142145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.372 qpair failed and we were unable to recover it. 00:36:06.372 [2024-07-14 10:44:51.142387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.372 [2024-07-14 10:44:51.142419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.372 qpair failed and we were unable to recover it. 00:36:06.372 [2024-07-14 10:44:51.142654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.372 [2024-07-14 10:44:51.142686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.372 qpair failed and we were unable to recover it. 00:36:06.372 [2024-07-14 10:44:51.142871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.372 [2024-07-14 10:44:51.142903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.372 qpair failed and we were unable to recover it. 
00:36:06.372 [2024-07-14 10:44:51.143102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.372 [2024-07-14 10:44:51.143134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.372 qpair failed and we were unable to recover it. 00:36:06.372 [2024-07-14 10:44:51.143344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.372 [2024-07-14 10:44:51.143375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.372 qpair failed and we were unable to recover it. 00:36:06.372 [2024-07-14 10:44:51.143573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.372 [2024-07-14 10:44:51.143604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.372 qpair failed and we were unable to recover it. 00:36:06.372 [2024-07-14 10:44:51.143710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.372 [2024-07-14 10:44:51.143742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.372 qpair failed and we were unable to recover it. 00:36:06.372 [2024-07-14 10:44:51.143929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.372 [2024-07-14 10:44:51.143960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.372 qpair failed and we were unable to recover it. 00:36:06.372 [2024-07-14 10:44:51.144107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.372 [2024-07-14 10:44:51.144138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.372 qpair failed and we were unable to recover it. 00:36:06.372 [2024-07-14 10:44:51.144265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.372 [2024-07-14 10:44:51.144298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.372 qpair failed and we were unable to recover it. 00:36:06.372 [2024-07-14 10:44:51.144543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.372 [2024-07-14 10:44:51.144573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.372 qpair failed and we were unable to recover it. 00:36:06.372 [2024-07-14 10:44:51.144716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.372 [2024-07-14 10:44:51.144747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.372 qpair failed and we were unable to recover it. 00:36:06.372 [2024-07-14 10:44:51.144992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.372 [2024-07-14 10:44:51.145023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.372 qpair failed and we were unable to recover it. 
00:36:06.372 [2024-07-14 10:44:51.145223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.372 [2024-07-14 10:44:51.145265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.372 qpair failed and we were unable to recover it. 00:36:06.372 [2024-07-14 10:44:51.145479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.372 [2024-07-14 10:44:51.145510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.372 qpair failed and we were unable to recover it. 00:36:06.372 [2024-07-14 10:44:51.145634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.372 [2024-07-14 10:44:51.145665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.372 qpair failed and we were unable to recover it. 00:36:06.372 [2024-07-14 10:44:51.145920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.372 [2024-07-14 10:44:51.145961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.372 qpair failed and we were unable to recover it. 00:36:06.372 [2024-07-14 10:44:51.146081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.372 [2024-07-14 10:44:51.146112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.372 qpair failed and we were unable to recover it. 00:36:06.372 [2024-07-14 10:44:51.146241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.372 [2024-07-14 10:44:51.146273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.372 qpair failed and we were unable to recover it. 00:36:06.372 [2024-07-14 10:44:51.146469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.372 [2024-07-14 10:44:51.146501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.372 qpair failed and we were unable to recover it. 00:36:06.372 [2024-07-14 10:44:51.146672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.372 [2024-07-14 10:44:51.146704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.372 qpair failed and we were unable to recover it. 00:36:06.372 [2024-07-14 10:44:51.146880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.372 [2024-07-14 10:44:51.146912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.372 qpair failed and we were unable to recover it. 00:36:06.372 [2024-07-14 10:44:51.147101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.372 [2024-07-14 10:44:51.147133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.372 qpair failed and we were unable to recover it. 
00:36:06.372 [2024-07-14 10:44:51.147328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.372 [2024-07-14 10:44:51.147361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.372 qpair failed and we were unable to recover it. 00:36:06.372 [2024-07-14 10:44:51.147493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.372 [2024-07-14 10:44:51.147525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.372 qpair failed and we were unable to recover it. 00:36:06.372 [2024-07-14 10:44:51.147701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.372 [2024-07-14 10:44:51.147732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.372 qpair failed and we were unable to recover it. 00:36:06.372 [2024-07-14 10:44:51.147977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.372 [2024-07-14 10:44:51.148008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.372 qpair failed and we were unable to recover it. 00:36:06.372 [2024-07-14 10:44:51.148136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.372 [2024-07-14 10:44:51.148168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.372 qpair failed and we were unable to recover it. 00:36:06.372 [2024-07-14 10:44:51.148346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.372 [2024-07-14 10:44:51.148378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.372 qpair failed and we were unable to recover it. 00:36:06.372 [2024-07-14 10:44:51.148562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.372 [2024-07-14 10:44:51.148593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.372 qpair failed and we were unable to recover it. 00:36:06.372 [2024-07-14 10:44:51.148721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.372 [2024-07-14 10:44:51.148753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.372 qpair failed and we were unable to recover it. 00:36:06.372 [2024-07-14 10:44:51.149034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.373 [2024-07-14 10:44:51.149065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.373 qpair failed and we were unable to recover it. 00:36:06.373 [2024-07-14 10:44:51.149254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.373 [2024-07-14 10:44:51.149286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.373 qpair failed and we were unable to recover it. 
00:36:06.373 [2024-07-14 10:44:51.149421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.373 [2024-07-14 10:44:51.149452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.373 qpair failed and we were unable to recover it. 00:36:06.373 [2024-07-14 10:44:51.149657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.373 [2024-07-14 10:44:51.149688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.373 qpair failed and we were unable to recover it. 00:36:06.373 [2024-07-14 10:44:51.149861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.373 [2024-07-14 10:44:51.149893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.373 qpair failed and we were unable to recover it. 00:36:06.373 [2024-07-14 10:44:51.150086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.373 [2024-07-14 10:44:51.150117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.373 qpair failed and we were unable to recover it. 00:36:06.373 [2024-07-14 10:44:51.150307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.373 [2024-07-14 10:44:51.150339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.373 qpair failed and we were unable to recover it. 00:36:06.373 [2024-07-14 10:44:51.150524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.373 [2024-07-14 10:44:51.150555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.373 qpair failed and we were unable to recover it. 00:36:06.373 [2024-07-14 10:44:51.150762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.373 [2024-07-14 10:44:51.150793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.373 qpair failed and we were unable to recover it. 00:36:06.373 [2024-07-14 10:44:51.150975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.373 [2024-07-14 10:44:51.151006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.373 qpair failed and we were unable to recover it. 00:36:06.373 [2024-07-14 10:44:51.151206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.373 [2024-07-14 10:44:51.151247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.373 qpair failed and we were unable to recover it. 00:36:06.373 [2024-07-14 10:44:51.151451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.373 [2024-07-14 10:44:51.151482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.373 qpair failed and we were unable to recover it. 
00:36:06.373 [2024-07-14 10:44:51.151659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.373 [2024-07-14 10:44:51.151691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.373 qpair failed and we were unable to recover it. 00:36:06.373 [2024-07-14 10:44:51.151893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.373 [2024-07-14 10:44:51.151925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.373 qpair failed and we were unable to recover it. 00:36:06.373 [2024-07-14 10:44:51.152170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.373 [2024-07-14 10:44:51.152200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.373 qpair failed and we were unable to recover it. 00:36:06.373 [2024-07-14 10:44:51.152482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.373 [2024-07-14 10:44:51.152514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.373 qpair failed and we were unable to recover it. 00:36:06.373 [2024-07-14 10:44:51.152806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.373 [2024-07-14 10:44:51.152836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.373 qpair failed and we were unable to recover it. 00:36:06.373 [2024-07-14 10:44:51.153079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.373 [2024-07-14 10:44:51.153110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.373 qpair failed and we were unable to recover it. 00:36:06.373 [2024-07-14 10:44:51.153293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.373 [2024-07-14 10:44:51.153325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.373 qpair failed and we were unable to recover it. 00:36:06.373 [2024-07-14 10:44:51.153567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.373 [2024-07-14 10:44:51.153599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.373 qpair failed and we were unable to recover it. 00:36:06.373 [2024-07-14 10:44:51.153866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.373 [2024-07-14 10:44:51.153897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.373 qpair failed and we were unable to recover it. 00:36:06.373 [2024-07-14 10:44:51.154074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.373 [2024-07-14 10:44:51.154105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.373 qpair failed and we were unable to recover it. 
00:36:06.373 [2024-07-14 10:44:51.154248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.373 [2024-07-14 10:44:51.154281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.373 qpair failed and we were unable to recover it. 00:36:06.373 [2024-07-14 10:44:51.154458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.373 [2024-07-14 10:44:51.154489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.373 qpair failed and we were unable to recover it. 00:36:06.373 [2024-07-14 10:44:51.154707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.373 [2024-07-14 10:44:51.154738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.373 qpair failed and we were unable to recover it. 00:36:06.373 [2024-07-14 10:44:51.154995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.373 [2024-07-14 10:44:51.155032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.373 qpair failed and we were unable to recover it. 00:36:06.373 [2024-07-14 10:44:51.155175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.373 [2024-07-14 10:44:51.155206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.373 qpair failed and we were unable to recover it. 00:36:06.373 [2024-07-14 10:44:51.155352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.373 [2024-07-14 10:44:51.155383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.373 qpair failed and we were unable to recover it. 00:36:06.373 [2024-07-14 10:44:51.155588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.373 [2024-07-14 10:44:51.155619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.373 qpair failed and we were unable to recover it. 00:36:06.373 [2024-07-14 10:44:51.155731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.373 [2024-07-14 10:44:51.155762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.373 qpair failed and we were unable to recover it. 00:36:06.373 [2024-07-14 10:44:51.155956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.373 [2024-07-14 10:44:51.155987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.373 qpair failed and we were unable to recover it. 00:36:06.373 [2024-07-14 10:44:51.156125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.373 [2024-07-14 10:44:51.156157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.373 qpair failed and we were unable to recover it. 
00:36:06.373 [2024-07-14 10:44:51.156337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.373 [2024-07-14 10:44:51.156369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.373 qpair failed and we were unable to recover it. 00:36:06.373 [2024-07-14 10:44:51.156632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.373 [2024-07-14 10:44:51.156663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.373 qpair failed and we were unable to recover it. 00:36:06.373 [2024-07-14 10:44:51.156861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.373 [2024-07-14 10:44:51.156892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.373 qpair failed and we were unable to recover it. 00:36:06.373 [2024-07-14 10:44:51.157134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.373 [2024-07-14 10:44:51.157166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.373 qpair failed and we were unable to recover it. 00:36:06.373 [2024-07-14 10:44:51.157365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.373 [2024-07-14 10:44:51.157397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.373 qpair failed and we were unable to recover it. 00:36:06.373 [2024-07-14 10:44:51.157616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.373 [2024-07-14 10:44:51.157647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.373 qpair failed and we were unable to recover it. 00:36:06.373 [2024-07-14 10:44:51.157914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.373 [2024-07-14 10:44:51.157944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.373 qpair failed and we were unable to recover it. 00:36:06.373 [2024-07-14 10:44:51.158073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.373 [2024-07-14 10:44:51.158104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.373 qpair failed and we were unable to recover it. 00:36:06.373 [2024-07-14 10:44:51.158299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.373 [2024-07-14 10:44:51.158331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.373 qpair failed and we were unable to recover it. 00:36:06.373 [2024-07-14 10:44:51.158538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.374 [2024-07-14 10:44:51.158569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.374 qpair failed and we were unable to recover it. 
00:36:06.374 [2024-07-14 10:44:51.158770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.374 [2024-07-14 10:44:51.158801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.374 qpair failed and we were unable to recover it. 00:36:06.374 [2024-07-14 10:44:51.158991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.374 [2024-07-14 10:44:51.159022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.374 qpair failed and we were unable to recover it. 00:36:06.374 [2024-07-14 10:44:51.159199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.374 [2024-07-14 10:44:51.159253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.374 qpair failed and we were unable to recover it. 00:36:06.374 [2024-07-14 10:44:51.159380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.374 [2024-07-14 10:44:51.159412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.374 qpair failed and we were unable to recover it. 00:36:06.374 [2024-07-14 10:44:51.159595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.374 [2024-07-14 10:44:51.159625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.374 qpair failed and we were unable to recover it. 00:36:06.374 [2024-07-14 10:44:51.159837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.374 [2024-07-14 10:44:51.159869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.374 qpair failed and we were unable to recover it. 00:36:06.374 [2024-07-14 10:44:51.160012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.374 [2024-07-14 10:44:51.160044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.374 qpair failed and we were unable to recover it. 00:36:06.374 [2024-07-14 10:44:51.160243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.374 [2024-07-14 10:44:51.160275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.374 qpair failed and we were unable to recover it. 00:36:06.374 [2024-07-14 10:44:51.160457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.374 [2024-07-14 10:44:51.160489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.374 qpair failed and we were unable to recover it. 00:36:06.374 [2024-07-14 10:44:51.160797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.374 [2024-07-14 10:44:51.160829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.374 qpair failed and we were unable to recover it. 
00:36:06.374 [2024-07-14 10:44:51.161094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.374 [2024-07-14 10:44:51.161126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.374 qpair failed and we were unable to recover it. 00:36:06.374 [2024-07-14 10:44:51.161268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.374 [2024-07-14 10:44:51.161299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.374 qpair failed and we were unable to recover it. 00:36:06.374 [2024-07-14 10:44:51.161482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.374 [2024-07-14 10:44:51.161513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.374 qpair failed and we were unable to recover it. 00:36:06.374 [2024-07-14 10:44:51.161638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.374 [2024-07-14 10:44:51.161669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.374 qpair failed and we were unable to recover it. 00:36:06.374 [2024-07-14 10:44:51.161863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.374 [2024-07-14 10:44:51.161895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.374 qpair failed and we were unable to recover it. 00:36:06.374 [2024-07-14 10:44:51.162136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.374 [2024-07-14 10:44:51.162168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.374 qpair failed and we were unable to recover it. 00:36:06.374 [2024-07-14 10:44:51.162364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.374 [2024-07-14 10:44:51.162397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.374 qpair failed and we were unable to recover it. 00:36:06.374 [2024-07-14 10:44:51.162602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.374 [2024-07-14 10:44:51.162633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.374 qpair failed and we were unable to recover it. 00:36:06.374 [2024-07-14 10:44:51.162841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.374 [2024-07-14 10:44:51.162872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.374 qpair failed and we were unable to recover it. 00:36:06.374 [2024-07-14 10:44:51.162994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.374 [2024-07-14 10:44:51.163025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.374 qpair failed and we were unable to recover it. 
00:36:06.374 [2024-07-14 10:44:51.163244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.374 [2024-07-14 10:44:51.163276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420
00:36:06.374 qpair failed and we were unable to recover it.
00:36:06.374 [... the same three-message sequence (connect() failed, errno = 111; sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every remaining connection attempt logged from 10:44:51.163383 through 10:44:51.209476 ...]
00:36:06.379 [2024-07-14 10:44:51.209597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.379 [2024-07-14 10:44:51.209628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420
00:36:06.379 qpair failed and we were unable to recover it.
00:36:06.379 [2024-07-14 10:44:51.209814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.379 [2024-07-14 10:44:51.209844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.379 qpair failed and we were unable to recover it. 00:36:06.379 [2024-07-14 10:44:51.209977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.379 [2024-07-14 10:44:51.210008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.379 qpair failed and we were unable to recover it. 00:36:06.379 [2024-07-14 10:44:51.210280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.379 [2024-07-14 10:44:51.210313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.379 qpair failed and we were unable to recover it. 00:36:06.379 [2024-07-14 10:44:51.210518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.379 [2024-07-14 10:44:51.210550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.379 qpair failed and we were unable to recover it. 00:36:06.379 [2024-07-14 10:44:51.210684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.379 [2024-07-14 10:44:51.210715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.379 qpair failed and we were unable to recover it. 00:36:06.379 [2024-07-14 10:44:51.210854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.379 [2024-07-14 10:44:51.210885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.379 qpair failed and we were unable to recover it. 00:36:06.379 [2024-07-14 10:44:51.211065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.379 [2024-07-14 10:44:51.211094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.379 qpair failed and we were unable to recover it. 00:36:06.379 [2024-07-14 10:44:51.211316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.379 [2024-07-14 10:44:51.211349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.379 qpair failed and we were unable to recover it. 00:36:06.379 [2024-07-14 10:44:51.211558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.379 [2024-07-14 10:44:51.211588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.379 qpair failed and we were unable to recover it. 00:36:06.379 [2024-07-14 10:44:51.211712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.379 [2024-07-14 10:44:51.211743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.379 qpair failed and we were unable to recover it. 
00:36:06.380 [2024-07-14 10:44:51.211927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-07-14 10:44:51.211957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 00:36:06.380 [2024-07-14 10:44:51.212237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-07-14 10:44:51.212268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 00:36:06.380 [2024-07-14 10:44:51.212472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-07-14 10:44:51.212503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 00:36:06.380 [2024-07-14 10:44:51.212631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-07-14 10:44:51.212662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 00:36:06.380 [2024-07-14 10:44:51.212789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-07-14 10:44:51.212819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 00:36:06.380 [2024-07-14 10:44:51.213034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-07-14 10:44:51.213065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 00:36:06.380 [2024-07-14 10:44:51.213263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-07-14 10:44:51.213294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 00:36:06.380 [2024-07-14 10:44:51.213503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-07-14 10:44:51.213534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 00:36:06.380 [2024-07-14 10:44:51.213645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-07-14 10:44:51.213676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 00:36:06.380 [2024-07-14 10:44:51.213950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-07-14 10:44:51.213980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 
00:36:06.380 [2024-07-14 10:44:51.214156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-07-14 10:44:51.214192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 00:36:06.380 [2024-07-14 10:44:51.214400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-07-14 10:44:51.214431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 00:36:06.380 [2024-07-14 10:44:51.214630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-07-14 10:44:51.214660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 00:36:06.380 [2024-07-14 10:44:51.214783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-07-14 10:44:51.214814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 00:36:06.380 [2024-07-14 10:44:51.215007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-07-14 10:44:51.215037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 00:36:06.380 [2024-07-14 10:44:51.215237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-07-14 10:44:51.215269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 00:36:06.380 [2024-07-14 10:44:51.215383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-07-14 10:44:51.215414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 00:36:06.380 [2024-07-14 10:44:51.215556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-07-14 10:44:51.215586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 00:36:06.380 [2024-07-14 10:44:51.215792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-07-14 10:44:51.215823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 00:36:06.380 [2024-07-14 10:44:51.216016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-07-14 10:44:51.216046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 
00:36:06.380 [2024-07-14 10:44:51.216157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-07-14 10:44:51.216187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 00:36:06.380 [2024-07-14 10:44:51.216392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-07-14 10:44:51.216424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 00:36:06.380 [2024-07-14 10:44:51.216637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-07-14 10:44:51.216669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 00:36:06.380 [2024-07-14 10:44:51.216847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-07-14 10:44:51.216878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 00:36:06.380 [2024-07-14 10:44:51.217002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-07-14 10:44:51.217033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 00:36:06.380 [2024-07-14 10:44:51.217213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-07-14 10:44:51.217253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 00:36:06.380 [2024-07-14 10:44:51.217373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-07-14 10:44:51.217404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 00:36:06.380 [2024-07-14 10:44:51.217525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-07-14 10:44:51.217555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 00:36:06.380 [2024-07-14 10:44:51.217680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-07-14 10:44:51.217710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 00:36:06.380 [2024-07-14 10:44:51.217837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-07-14 10:44:51.217868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 
00:36:06.380 [2024-07-14 10:44:51.217991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-07-14 10:44:51.218022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 00:36:06.380 [2024-07-14 10:44:51.218201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-07-14 10:44:51.218249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 00:36:06.380 [2024-07-14 10:44:51.218362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-07-14 10:44:51.218393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 00:36:06.380 [2024-07-14 10:44:51.218597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-07-14 10:44:51.218628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 00:36:06.380 [2024-07-14 10:44:51.218754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-07-14 10:44:51.218784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 00:36:06.380 [2024-07-14 10:44:51.218903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-07-14 10:44:51.218933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 00:36:06.381 [2024-07-14 10:44:51.219107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-07-14 10:44:51.219138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 00:36:06.381 [2024-07-14 10:44:51.219325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-07-14 10:44:51.219359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 00:36:06.381 [2024-07-14 10:44:51.219549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-07-14 10:44:51.219580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 00:36:06.381 [2024-07-14 10:44:51.219772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-07-14 10:44:51.219803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 
00:36:06.381 [2024-07-14 10:44:51.220006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-07-14 10:44:51.220037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 00:36:06.381 [2024-07-14 10:44:51.220246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-07-14 10:44:51.220277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 00:36:06.381 [2024-07-14 10:44:51.220413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-07-14 10:44:51.220444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 00:36:06.381 [2024-07-14 10:44:51.220638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-07-14 10:44:51.220669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 00:36:06.381 [2024-07-14 10:44:51.220854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-07-14 10:44:51.220885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 00:36:06.381 [2024-07-14 10:44:51.221066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-07-14 10:44:51.221097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 00:36:06.381 [2024-07-14 10:44:51.221235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-07-14 10:44:51.221267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 00:36:06.381 [2024-07-14 10:44:51.221405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-07-14 10:44:51.221436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 00:36:06.381 [2024-07-14 10:44:51.221565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-07-14 10:44:51.221596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 00:36:06.381 [2024-07-14 10:44:51.221795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-07-14 10:44:51.221827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 
00:36:06.381 [2024-07-14 10:44:51.221955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-07-14 10:44:51.221991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 00:36:06.381 [2024-07-14 10:44:51.222101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-07-14 10:44:51.222132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 00:36:06.381 [2024-07-14 10:44:51.222329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-07-14 10:44:51.222361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 00:36:06.381 [2024-07-14 10:44:51.222552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-07-14 10:44:51.222583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 00:36:06.381 [2024-07-14 10:44:51.222710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-07-14 10:44:51.222741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 00:36:06.381 [2024-07-14 10:44:51.222988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-07-14 10:44:51.223018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 00:36:06.381 [2024-07-14 10:44:51.223135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-07-14 10:44:51.223167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 00:36:06.381 [2024-07-14 10:44:51.223302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-07-14 10:44:51.223335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 00:36:06.381 [2024-07-14 10:44:51.223511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-07-14 10:44:51.223542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 00:36:06.381 [2024-07-14 10:44:51.223670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-07-14 10:44:51.223700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 
00:36:06.381 [2024-07-14 10:44:51.223828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-07-14 10:44:51.223859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 00:36:06.381 [2024-07-14 10:44:51.224036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-07-14 10:44:51.224066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 00:36:06.381 [2024-07-14 10:44:51.224196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-07-14 10:44:51.224236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 00:36:06.381 [2024-07-14 10:44:51.224535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-07-14 10:44:51.224567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 00:36:06.381 [2024-07-14 10:44:51.224698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-07-14 10:44:51.224729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 00:36:06.381 [2024-07-14 10:44:51.224842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-07-14 10:44:51.224873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 00:36:06.381 [2024-07-14 10:44:51.225077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-07-14 10:44:51.225107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 00:36:06.381 [2024-07-14 10:44:51.225312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-07-14 10:44:51.225345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 00:36:06.381 [2024-07-14 10:44:51.225470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-07-14 10:44:51.225502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 00:36:06.381 [2024-07-14 10:44:51.225695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-07-14 10:44:51.225725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 
00:36:06.381 [2024-07-14 10:44:51.225845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-07-14 10:44:51.225876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 00:36:06.381 [2024-07-14 10:44:51.226079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-07-14 10:44:51.226110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 00:36:06.381 [2024-07-14 10:44:51.226247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-07-14 10:44:51.226279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 00:36:06.381 [2024-07-14 10:44:51.226460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-07-14 10:44:51.226492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 00:36:06.381 [2024-07-14 10:44:51.226604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-07-14 10:44:51.226633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 00:36:06.381 [2024-07-14 10:44:51.226823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-07-14 10:44:51.226854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 00:36:06.381 [2024-07-14 10:44:51.227035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-07-14 10:44:51.227066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 00:36:06.382 [2024-07-14 10:44:51.227364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-07-14 10:44:51.227396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 00:36:06.382 [2024-07-14 10:44:51.227574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-07-14 10:44:51.227605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 00:36:06.382 [2024-07-14 10:44:51.227785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-07-14 10:44:51.227815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 
00:36:06.382 [2024-07-14 10:44:51.228051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-07-14 10:44:51.228082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 00:36:06.382 [2024-07-14 10:44:51.228359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-07-14 10:44:51.228390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 00:36:06.382 [2024-07-14 10:44:51.228578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-07-14 10:44:51.228609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 00:36:06.382 [2024-07-14 10:44:51.228794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-07-14 10:44:51.228825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 00:36:06.382 [2024-07-14 10:44:51.228954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-07-14 10:44:51.228985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 00:36:06.382 [2024-07-14 10:44:51.229166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-07-14 10:44:51.229197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 00:36:06.382 [2024-07-14 10:44:51.229477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-07-14 10:44:51.229508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 00:36:06.382 [2024-07-14 10:44:51.229618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-07-14 10:44:51.229648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 00:36:06.382 [2024-07-14 10:44:51.229915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-07-14 10:44:51.229946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 00:36:06.382 [2024-07-14 10:44:51.230145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-07-14 10:44:51.230176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 
00:36:06.382 [2024-07-14 10:44:51.230457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-07-14 10:44:51.230495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 00:36:06.382 [2024-07-14 10:44:51.230620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-07-14 10:44:51.230651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 00:36:06.382 [2024-07-14 10:44:51.230789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-07-14 10:44:51.230822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 00:36:06.382 [2024-07-14 10:44:51.231085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-07-14 10:44:51.231115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 00:36:06.382 [2024-07-14 10:44:51.231313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-07-14 10:44:51.231346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 00:36:06.382 [2024-07-14 10:44:51.231587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-07-14 10:44:51.231618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 00:36:06.382 [2024-07-14 10:44:51.231737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-07-14 10:44:51.231768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 00:36:06.382 [2024-07-14 10:44:51.232008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-07-14 10:44:51.232040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 00:36:06.382 [2024-07-14 10:44:51.232204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-07-14 10:44:51.232244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 00:36:06.382 [2024-07-14 10:44:51.232360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-07-14 10:44:51.232390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 
00:36:06.382 [2024-07-14 10:44:51.232533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-07-14 10:44:51.232564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 00:36:06.382 [2024-07-14 10:44:51.232702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-07-14 10:44:51.232733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 00:36:06.382 [2024-07-14 10:44:51.232862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-07-14 10:44:51.232893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 00:36:06.382 [2024-07-14 10:44:51.233006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-07-14 10:44:51.233037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 00:36:06.382 [2024-07-14 10:44:51.233223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-07-14 10:44:51.233263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 00:36:06.382 [2024-07-14 10:44:51.233450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-07-14 10:44:51.233480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 00:36:06.382 [2024-07-14 10:44:51.233599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-07-14 10:44:51.233629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 00:36:06.382 [2024-07-14 10:44:51.233848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-07-14 10:44:51.233879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 00:36:06.382 [2024-07-14 10:44:51.234009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-07-14 10:44:51.234040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 00:36:06.382 [2024-07-14 10:44:51.234165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-07-14 10:44:51.234196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 
00:36:06.382 [2024-07-14 10:44:51.234330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-07-14 10:44:51.234363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 00:36:06.382 [2024-07-14 10:44:51.234541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-07-14 10:44:51.234571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 00:36:06.382 [2024-07-14 10:44:51.234696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-07-14 10:44:51.234727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 00:36:06.382 [2024-07-14 10:44:51.234903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-07-14 10:44:51.234934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 00:36:06.382 [2024-07-14 10:44:51.235146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-07-14 10:44:51.235177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 00:36:06.382 [2024-07-14 10:44:51.235385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-07-14 10:44:51.235416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 00:36:06.382 [2024-07-14 10:44:51.235607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-07-14 10:44:51.235638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 00:36:06.382 [2024-07-14 10:44:51.235834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-07-14 10:44:51.235865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 00:36:06.383 [2024-07-14 10:44:51.236045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.383 [2024-07-14 10:44:51.236076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.383 qpair failed and we were unable to recover it. 00:36:06.383 [2024-07-14 10:44:51.236265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.383 [2024-07-14 10:44:51.236297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.383 qpair failed and we were unable to recover it. 
00:36:06.383 [2024-07-14 10:44:51.236428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.383 [2024-07-14 10:44:51.236459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420
00:36:06.383 qpair failed and we were unable to recover it.
00:36:06.383 [... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously through this span, with only the microsecond timestamps changing ...]
00:36:06.388 [2024-07-14 10:44:51.277271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.388 [2024-07-14 10:44:51.277303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420
00:36:06.388 qpair failed and we were unable to recover it.
00:36:06.388 [2024-07-14 10:44:51.277416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.388 [2024-07-14 10:44:51.277452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.388 qpair failed and we were unable to recover it. 00:36:06.388 [2024-07-14 10:44:51.277660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.388 [2024-07-14 10:44:51.277691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.388 qpair failed and we were unable to recover it. 00:36:06.388 [2024-07-14 10:44:51.277805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.388 [2024-07-14 10:44:51.277836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.388 qpair failed and we were unable to recover it. 00:36:06.388 [2024-07-14 10:44:51.278024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.388 [2024-07-14 10:44:51.278054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.388 qpair failed and we were unable to recover it. 00:36:06.388 [2024-07-14 10:44:51.278255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.388 [2024-07-14 10:44:51.278287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.388 qpair failed and we were unable to recover it. 00:36:06.388 [2024-07-14 10:44:51.278467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.388 [2024-07-14 10:44:51.278498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.388 qpair failed and we were unable to recover it. 00:36:06.388 [2024-07-14 10:44:51.278616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.388 [2024-07-14 10:44:51.278647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.388 qpair failed and we were unable to recover it. 00:36:06.388 [2024-07-14 10:44:51.278790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.388 [2024-07-14 10:44:51.278821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.388 qpair failed and we were unable to recover it. 00:36:06.388 [2024-07-14 10:44:51.279013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.388 [2024-07-14 10:44:51.279044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.388 qpair failed and we were unable to recover it. 00:36:06.388 [2024-07-14 10:44:51.279167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.388 [2024-07-14 10:44:51.279198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.388 qpair failed and we were unable to recover it. 
00:36:06.388 [2024-07-14 10:44:51.279333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.388 [2024-07-14 10:44:51.279364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.388 qpair failed and we were unable to recover it. 00:36:06.388 [2024-07-14 10:44:51.279543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.388 [2024-07-14 10:44:51.279574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.388 qpair failed and we were unable to recover it. 00:36:06.388 [2024-07-14 10:44:51.279686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.388 [2024-07-14 10:44:51.279717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.388 qpair failed and we were unable to recover it. 00:36:06.388 [2024-07-14 10:44:51.279932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.388 [2024-07-14 10:44:51.279962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.388 qpair failed and we were unable to recover it. 00:36:06.388 [2024-07-14 10:44:51.280094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.388 [2024-07-14 10:44:51.280125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.388 qpair failed and we were unable to recover it. 00:36:06.388 [2024-07-14 10:44:51.280242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.388 [2024-07-14 10:44:51.280274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.388 qpair failed and we were unable to recover it. 00:36:06.388 [2024-07-14 10:44:51.280457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.388 [2024-07-14 10:44:51.280487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.388 qpair failed and we were unable to recover it. 00:36:06.388 [2024-07-14 10:44:51.280669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.388 [2024-07-14 10:44:51.280700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.388 qpair failed and we were unable to recover it. 00:36:06.388 [2024-07-14 10:44:51.280880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.388 [2024-07-14 10:44:51.280911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.388 qpair failed and we were unable to recover it. 00:36:06.388 [2024-07-14 10:44:51.281086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.388 [2024-07-14 10:44:51.281117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.388 qpair failed and we were unable to recover it. 
00:36:06.388 [2024-07-14 10:44:51.281265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.388 [2024-07-14 10:44:51.281297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.388 qpair failed and we were unable to recover it. 00:36:06.388 [2024-07-14 10:44:51.281490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.388 [2024-07-14 10:44:51.281521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.388 qpair failed and we were unable to recover it. 00:36:06.388 [2024-07-14 10:44:51.281710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.388 [2024-07-14 10:44:51.281740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.388 qpair failed and we were unable to recover it. 00:36:06.388 [2024-07-14 10:44:51.281938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.388 [2024-07-14 10:44:51.281969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.388 qpair failed and we were unable to recover it. 00:36:06.388 [2024-07-14 10:44:51.282093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.388 [2024-07-14 10:44:51.282124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.388 qpair failed and we were unable to recover it. 00:36:06.388 [2024-07-14 10:44:51.282267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.388 [2024-07-14 10:44:51.282299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.388 qpair failed and we were unable to recover it. 00:36:06.388 [2024-07-14 10:44:51.282423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.388 [2024-07-14 10:44:51.282455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.388 qpair failed and we were unable to recover it. 00:36:06.388 [2024-07-14 10:44:51.282633] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2db60 is same with the state(5) to be set 00:36:06.388 [2024-07-14 10:44:51.282905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.389 [2024-07-14 10:44:51.282976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.389 qpair failed and we were unable to recover it. 00:36:06.389 [2024-07-14 10:44:51.283132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.389 [2024-07-14 10:44:51.283167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.389 qpair failed and we were unable to recover it. 
00:36:06.389 [2024-07-14 10:44:51.283302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.389 [2024-07-14 10:44:51.283336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.389 qpair failed and we were unable to recover it. 00:36:06.389 [2024-07-14 10:44:51.283521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.389 [2024-07-14 10:44:51.283553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.389 qpair failed and we were unable to recover it. 00:36:06.389 [2024-07-14 10:44:51.283740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.389 [2024-07-14 10:44:51.283772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.389 qpair failed and we were unable to recover it. 00:36:06.389 [2024-07-14 10:44:51.283900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.389 [2024-07-14 10:44:51.283931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.389 qpair failed and we were unable to recover it. 00:36:06.389 [2024-07-14 10:44:51.284046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.389 [2024-07-14 10:44:51.284078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.389 qpair failed and we were unable to recover it. 00:36:06.389 [2024-07-14 10:44:51.284208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.389 [2024-07-14 10:44:51.284252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.389 qpair failed and we were unable to recover it. 00:36:06.389 [2024-07-14 10:44:51.284495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.389 [2024-07-14 10:44:51.284527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.389 qpair failed and we were unable to recover it. 00:36:06.389 [2024-07-14 10:44:51.284643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.389 [2024-07-14 10:44:51.284674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.389 qpair failed and we were unable to recover it. 00:36:06.389 [2024-07-14 10:44:51.284863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.389 [2024-07-14 10:44:51.284894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.389 qpair failed and we were unable to recover it. 00:36:06.389 [2024-07-14 10:44:51.285012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.389 [2024-07-14 10:44:51.285043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.389 qpair failed and we were unable to recover it. 
00:36:06.389 [2024-07-14 10:44:51.285175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.389 [2024-07-14 10:44:51.285206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.389 qpair failed and we were unable to recover it. 00:36:06.389 [2024-07-14 10:44:51.285421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.389 [2024-07-14 10:44:51.285465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.389 qpair failed and we were unable to recover it. 00:36:06.389 [2024-07-14 10:44:51.285589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.389 [2024-07-14 10:44:51.285621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.389 qpair failed and we were unable to recover it. 00:36:06.389 [2024-07-14 10:44:51.285863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.389 [2024-07-14 10:44:51.285894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.389 qpair failed and we were unable to recover it. 00:36:06.389 [2024-07-14 10:44:51.286020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.389 [2024-07-14 10:44:51.286051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.389 qpair failed and we were unable to recover it. 00:36:06.389 [2024-07-14 10:44:51.286238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.389 [2024-07-14 10:44:51.286271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.389 qpair failed and we were unable to recover it. 00:36:06.389 [2024-07-14 10:44:51.286467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.389 [2024-07-14 10:44:51.286498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.389 qpair failed and we were unable to recover it. 00:36:06.389 [2024-07-14 10:44:51.286613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.389 [2024-07-14 10:44:51.286644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.389 qpair failed and we were unable to recover it. 00:36:06.389 [2024-07-14 10:44:51.286756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.389 [2024-07-14 10:44:51.286789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.389 qpair failed and we were unable to recover it. 00:36:06.389 [2024-07-14 10:44:51.286967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.389 [2024-07-14 10:44:51.286997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.389 qpair failed and we were unable to recover it. 
00:36:06.389 [2024-07-14 10:44:51.287186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.389 [2024-07-14 10:44:51.287216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.389 qpair failed and we were unable to recover it. 00:36:06.389 [2024-07-14 10:44:51.287341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.389 [2024-07-14 10:44:51.287373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.389 qpair failed and we were unable to recover it. 00:36:06.389 [2024-07-14 10:44:51.287565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.389 [2024-07-14 10:44:51.287596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.389 qpair failed and we were unable to recover it. 00:36:06.389 [2024-07-14 10:44:51.287784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.389 [2024-07-14 10:44:51.287815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.389 qpair failed and we were unable to recover it. 00:36:06.389 [2024-07-14 10:44:51.287930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.389 [2024-07-14 10:44:51.287961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.389 qpair failed and we were unable to recover it. 00:36:06.389 [2024-07-14 10:44:51.288148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.389 [2024-07-14 10:44:51.288179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.389 qpair failed and we were unable to recover it. 00:36:06.389 [2024-07-14 10:44:51.288387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.389 [2024-07-14 10:44:51.288420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.389 qpair failed and we were unable to recover it. 00:36:06.389 [2024-07-14 10:44:51.288602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.389 [2024-07-14 10:44:51.288633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.389 qpair failed and we were unable to recover it. 00:36:06.389 [2024-07-14 10:44:51.288758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.389 [2024-07-14 10:44:51.288789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.389 qpair failed and we were unable to recover it. 00:36:06.389 [2024-07-14 10:44:51.288913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.389 [2024-07-14 10:44:51.288944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.389 qpair failed and we were unable to recover it. 
00:36:06.389 [2024-07-14 10:44:51.289142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.389 [2024-07-14 10:44:51.289173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.389 qpair failed and we were unable to recover it. 00:36:06.389 [2024-07-14 10:44:51.289382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.389 [2024-07-14 10:44:51.289417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.389 qpair failed and we were unable to recover it. 00:36:06.389 [2024-07-14 10:44:51.289618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.389 [2024-07-14 10:44:51.289650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.389 qpair failed and we were unable to recover it. 00:36:06.389 [2024-07-14 10:44:51.289774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.389 [2024-07-14 10:44:51.289805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.389 qpair failed and we were unable to recover it. 00:36:06.389 [2024-07-14 10:44:51.289980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.390 [2024-07-14 10:44:51.290010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.390 qpair failed and we were unable to recover it. 00:36:06.390 [2024-07-14 10:44:51.290276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.390 [2024-07-14 10:44:51.290310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.390 qpair failed and we were unable to recover it. 00:36:06.390 [2024-07-14 10:44:51.290430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.390 [2024-07-14 10:44:51.290460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.390 qpair failed and we were unable to recover it. 00:36:06.390 [2024-07-14 10:44:51.290593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.390 [2024-07-14 10:44:51.290624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.390 qpair failed and we were unable to recover it. 00:36:06.390 [2024-07-14 10:44:51.290850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.390 [2024-07-14 10:44:51.290892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.390 qpair failed and we were unable to recover it. 00:36:06.390 [2024-07-14 10:44:51.291014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.390 [2024-07-14 10:44:51.291044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.390 qpair failed and we were unable to recover it. 
00:36:06.390 [2024-07-14 10:44:51.291242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.390 [2024-07-14 10:44:51.291274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.390 qpair failed and we were unable to recover it. 00:36:06.390 [2024-07-14 10:44:51.291527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.390 [2024-07-14 10:44:51.291559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.390 qpair failed and we were unable to recover it. 00:36:06.390 [2024-07-14 10:44:51.291734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.390 [2024-07-14 10:44:51.291765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.390 qpair failed and we were unable to recover it. 00:36:06.390 [2024-07-14 10:44:51.292030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.390 [2024-07-14 10:44:51.292061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.390 qpair failed and we were unable to recover it. 00:36:06.390 [2024-07-14 10:44:51.292185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.390 [2024-07-14 10:44:51.292216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.390 qpair failed and we were unable to recover it. 00:36:06.390 [2024-07-14 10:44:51.292498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.390 [2024-07-14 10:44:51.292531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.390 qpair failed and we were unable to recover it. 00:36:06.390 [2024-07-14 10:44:51.292711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.390 [2024-07-14 10:44:51.292742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.390 qpair failed and we were unable to recover it. 00:36:06.390 [2024-07-14 10:44:51.292866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.390 [2024-07-14 10:44:51.292897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.390 qpair failed and we were unable to recover it. 00:36:06.390 [2024-07-14 10:44:51.293136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.390 [2024-07-14 10:44:51.293168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.390 qpair failed and we were unable to recover it. 00:36:06.390 [2024-07-14 10:44:51.293447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.390 [2024-07-14 10:44:51.293483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.390 qpair failed and we were unable to recover it. 
00:36:06.390 [2024-07-14 10:44:51.293603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.390 [2024-07-14 10:44:51.293635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.390 qpair failed and we were unable to recover it. 00:36:06.390 [2024-07-14 10:44:51.293829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.390 [2024-07-14 10:44:51.293860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.390 qpair failed and we were unable to recover it. 00:36:06.390 [2024-07-14 10:44:51.294001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.390 [2024-07-14 10:44:51.294032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.390 qpair failed and we were unable to recover it. 00:36:06.390 [2024-07-14 10:44:51.294212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.390 [2024-07-14 10:44:51.294262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.390 qpair failed and we were unable to recover it. 00:36:06.390 [2024-07-14 10:44:51.294388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.390 [2024-07-14 10:44:51.294419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.390 qpair failed and we were unable to recover it. 00:36:06.390 [2024-07-14 10:44:51.294534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.390 [2024-07-14 10:44:51.294565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.390 qpair failed and we were unable to recover it. 00:36:06.390 [2024-07-14 10:44:51.294834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.390 [2024-07-14 10:44:51.294865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.390 qpair failed and we were unable to recover it. 00:36:06.390 [2024-07-14 10:44:51.294987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.390 [2024-07-14 10:44:51.295018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.390 qpair failed and we were unable to recover it. 00:36:06.390 [2024-07-14 10:44:51.295205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.390 [2024-07-14 10:44:51.295246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.390 qpair failed and we were unable to recover it. 00:36:06.390 [2024-07-14 10:44:51.295366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.390 [2024-07-14 10:44:51.295398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.390 qpair failed and we were unable to recover it. 
00:36:06.390 [2024-07-14 10:44:51.295618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.390 [2024-07-14 10:44:51.295650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.390 qpair failed and we were unable to recover it. 00:36:06.390 [2024-07-14 10:44:51.295944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.390 [2024-07-14 10:44:51.295975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.390 qpair failed and we were unable to recover it. 00:36:06.390 [2024-07-14 10:44:51.296185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.390 [2024-07-14 10:44:51.296216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.390 qpair failed and we were unable to recover it. 00:36:06.390 [2024-07-14 10:44:51.296460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.390 [2024-07-14 10:44:51.296493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.390 qpair failed and we were unable to recover it. 00:36:06.390 [2024-07-14 10:44:51.296611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.390 [2024-07-14 10:44:51.296641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.390 qpair failed and we were unable to recover it. 00:36:06.390 [2024-07-14 10:44:51.296751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.390 [2024-07-14 10:44:51.296788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.390 qpair failed and we were unable to recover it. 00:36:06.390 [2024-07-14 10:44:51.296983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.390 [2024-07-14 10:44:51.297014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.390 qpair failed and we were unable to recover it. 00:36:06.390 [2024-07-14 10:44:51.297245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.390 [2024-07-14 10:44:51.297289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.390 qpair failed and we were unable to recover it. 00:36:06.390 [2024-07-14 10:44:51.297485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.390 [2024-07-14 10:44:51.297516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.390 qpair failed and we were unable to recover it. 00:36:06.390 [2024-07-14 10:44:51.297714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.390 [2024-07-14 10:44:51.297745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.390 qpair failed and we were unable to recover it. 
00:36:06.390 [2024-07-14 10:44:51.297855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.390 [2024-07-14 10:44:51.297885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.390 qpair failed and we were unable to recover it. 00:36:06.390 [2024-07-14 10:44:51.298007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.390 [2024-07-14 10:44:51.298037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.390 qpair failed and we were unable to recover it. 00:36:06.390 [2024-07-14 10:44:51.298146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.390 [2024-07-14 10:44:51.298178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.390 qpair failed and we were unable to recover it. 00:36:06.390 [2024-07-14 10:44:51.298408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.390 [2024-07-14 10:44:51.298441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.390 qpair failed and we were unable to recover it. 00:36:06.390 [2024-07-14 10:44:51.298567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.390 [2024-07-14 10:44:51.298599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.390 qpair failed and we were unable to recover it. 00:36:06.390 [2024-07-14 10:44:51.298728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.390 [2024-07-14 10:44:51.298761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.390 qpair failed and we were unable to recover it. 00:36:06.391 [2024-07-14 10:44:51.298881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.391 [2024-07-14 10:44:51.298913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.391 qpair failed and we were unable to recover it. 00:36:06.391 [2024-07-14 10:44:51.299155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.391 [2024-07-14 10:44:51.299186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.391 qpair failed and we were unable to recover it. 00:36:06.391 [2024-07-14 10:44:51.299396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.391 [2024-07-14 10:44:51.299428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.391 qpair failed and we were unable to recover it. 00:36:06.391 [2024-07-14 10:44:51.299553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.391 [2024-07-14 10:44:51.299584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.391 qpair failed and we were unable to recover it. 
00:36:06.391 [2024-07-14 10:44:51.299775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.391 [2024-07-14 10:44:51.299806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.391 qpair failed and we were unable to recover it. 00:36:06.391 [2024-07-14 10:44:51.300096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.391 [2024-07-14 10:44:51.300127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.391 qpair failed and we were unable to recover it. 00:36:06.391 [2024-07-14 10:44:51.300342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.391 [2024-07-14 10:44:51.300375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.391 qpair failed and we were unable to recover it. 00:36:06.391 [2024-07-14 10:44:51.300644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.391 [2024-07-14 10:44:51.300675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.391 qpair failed and we were unable to recover it. 00:36:06.391 [2024-07-14 10:44:51.300854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.391 [2024-07-14 10:44:51.300885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.391 qpair failed and we were unable to recover it. 00:36:06.391 [2024-07-14 10:44:51.301022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.391 [2024-07-14 10:44:51.301053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.391 qpair failed and we were unable to recover it. 00:36:06.391 [2024-07-14 10:44:51.301177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.391 [2024-07-14 10:44:51.301209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.391 qpair failed and we were unable to recover it. 00:36:06.391 [2024-07-14 10:44:51.301368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.391 [2024-07-14 10:44:51.301402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.391 qpair failed and we were unable to recover it. 00:36:06.391 [2024-07-14 10:44:51.301585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.391 [2024-07-14 10:44:51.301617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.391 qpair failed and we were unable to recover it. 00:36:06.391 [2024-07-14 10:44:51.301744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.391 [2024-07-14 10:44:51.301775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.391 qpair failed and we were unable to recover it. 
00:36:06.391 [2024-07-14 10:44:51.301959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.391 [2024-07-14 10:44:51.301990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.391 qpair failed and we were unable to recover it. 00:36:06.391 [2024-07-14 10:44:51.302184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.391 [2024-07-14 10:44:51.302216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.391 qpair failed and we were unable to recover it. 00:36:06.391 [2024-07-14 10:44:51.302466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.391 [2024-07-14 10:44:51.302497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.391 qpair failed and we were unable to recover it. 00:36:06.391 [2024-07-14 10:44:51.302693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.391 [2024-07-14 10:44:51.302724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.391 qpair failed and we were unable to recover it. 00:36:06.391 [2024-07-14 10:44:51.302912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.391 [2024-07-14 10:44:51.302943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.391 qpair failed and we were unable to recover it. 00:36:06.391 [2024-07-14 10:44:51.303133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.391 [2024-07-14 10:44:51.303164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.391 qpair failed and we were unable to recover it. 00:36:06.391 [2024-07-14 10:44:51.303351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.391 [2024-07-14 10:44:51.303383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.391 qpair failed and we were unable to recover it. 00:36:06.391 [2024-07-14 10:44:51.303518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.391 [2024-07-14 10:44:51.303549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.391 qpair failed and we were unable to recover it. 00:36:06.391 [2024-07-14 10:44:51.303687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.391 [2024-07-14 10:44:51.303718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.391 qpair failed and we were unable to recover it. 00:36:06.391 [2024-07-14 10:44:51.303844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.391 [2024-07-14 10:44:51.303875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.391 qpair failed and we were unable to recover it. 
00:36:06.391 [2024-07-14 10:44:51.304144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.391 [2024-07-14 10:44:51.304175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.391 qpair failed and we were unable to recover it. 00:36:06.391 [2024-07-14 10:44:51.304405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.391 [2024-07-14 10:44:51.304437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.391 qpair failed and we were unable to recover it. 00:36:06.391 [2024-07-14 10:44:51.304562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.391 [2024-07-14 10:44:51.304593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.391 qpair failed and we were unable to recover it. 00:36:06.391 [2024-07-14 10:44:51.304774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.391 [2024-07-14 10:44:51.304805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.391 qpair failed and we were unable to recover it. 00:36:06.391 [2024-07-14 10:44:51.305006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.391 [2024-07-14 10:44:51.305036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.391 qpair failed and we were unable to recover it. 00:36:06.391 [2024-07-14 10:44:51.305169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.391 [2024-07-14 10:44:51.305200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.391 qpair failed and we were unable to recover it. 00:36:06.391 [2024-07-14 10:44:51.305433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.391 [2024-07-14 10:44:51.305468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.391 qpair failed and we were unable to recover it. 00:36:06.391 [2024-07-14 10:44:51.305585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.391 [2024-07-14 10:44:51.305617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.391 qpair failed and we were unable to recover it. 00:36:06.391 [2024-07-14 10:44:51.305836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.391 [2024-07-14 10:44:51.305869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.391 qpair failed and we were unable to recover it. 00:36:06.391 [2024-07-14 10:44:51.306066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.391 [2024-07-14 10:44:51.306097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.391 qpair failed and we were unable to recover it. 
00:36:06.391 [2024-07-14 10:44:51.306292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.391 [2024-07-14 10:44:51.306325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.391 qpair failed and we were unable to recover it. 00:36:06.391 [2024-07-14 10:44:51.306462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.391 [2024-07-14 10:44:51.306493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.391 qpair failed and we were unable to recover it. 00:36:06.391 [2024-07-14 10:44:51.306607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.391 [2024-07-14 10:44:51.306636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.391 qpair failed and we were unable to recover it. 00:36:06.391 [2024-07-14 10:44:51.306888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.391 [2024-07-14 10:44:51.306919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.391 qpair failed and we were unable to recover it. 00:36:06.391 [2024-07-14 10:44:51.307131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.391 [2024-07-14 10:44:51.307162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.391 qpair failed and we were unable to recover it. 00:36:06.391 [2024-07-14 10:44:51.307290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.391 [2024-07-14 10:44:51.307322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.391 qpair failed and we were unable to recover it. 00:36:06.391 [2024-07-14 10:44:51.307497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.391 [2024-07-14 10:44:51.307528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.391 qpair failed and we were unable to recover it. 00:36:06.391 [2024-07-14 10:44:51.307714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.392 [2024-07-14 10:44:51.307745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.392 qpair failed and we were unable to recover it. 00:36:06.392 [2024-07-14 10:44:51.307862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.392 [2024-07-14 10:44:51.307894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.392 qpair failed and we were unable to recover it. 00:36:06.392 [2024-07-14 10:44:51.308110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.392 [2024-07-14 10:44:51.308141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.392 qpair failed and we were unable to recover it. 
00:36:06.392 [2024-07-14 10:44:51.308351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.392 [2024-07-14 10:44:51.308384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.392 qpair failed and we were unable to recover it. 00:36:06.392 [2024-07-14 10:44:51.308501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.392 [2024-07-14 10:44:51.308532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.392 qpair failed and we were unable to recover it. 00:36:06.392 [2024-07-14 10:44:51.308642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.392 [2024-07-14 10:44:51.308673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.392 qpair failed and we were unable to recover it. 00:36:06.392 [2024-07-14 10:44:51.308865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.392 [2024-07-14 10:44:51.308897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.392 qpair failed and we were unable to recover it. 00:36:06.392 [2024-07-14 10:44:51.309051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.392 [2024-07-14 10:44:51.309082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.392 qpair failed and we were unable to recover it. 00:36:06.392 [2024-07-14 10:44:51.309209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.392 [2024-07-14 10:44:51.309256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.392 qpair failed and we were unable to recover it. 00:36:06.392 [2024-07-14 10:44:51.309476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.392 [2024-07-14 10:44:51.309508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.392 qpair failed and we were unable to recover it. 00:36:06.392 [2024-07-14 10:44:51.309655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.392 [2024-07-14 10:44:51.309686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.392 qpair failed and we were unable to recover it. 00:36:06.392 [2024-07-14 10:44:51.309887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.392 [2024-07-14 10:44:51.309918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.392 qpair failed and we were unable to recover it. 00:36:06.392 [2024-07-14 10:44:51.310042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.392 [2024-07-14 10:44:51.310073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.392 qpair failed and we were unable to recover it. 
00:36:06.392 [2024-07-14 10:44:51.310264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.392 [2024-07-14 10:44:51.310297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.392 qpair failed and we were unable to recover it. 00:36:06.392 [2024-07-14 10:44:51.310410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.392 [2024-07-14 10:44:51.310441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.392 qpair failed and we were unable to recover it. 00:36:06.392 [2024-07-14 10:44:51.310585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.392 [2024-07-14 10:44:51.310616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.392 qpair failed and we were unable to recover it. 00:36:06.671 [2024-07-14 10:44:51.310805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.671 [2024-07-14 10:44:51.310843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.671 qpair failed and we were unable to recover it. 00:36:06.671 [2024-07-14 10:44:51.311023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.671 [2024-07-14 10:44:51.311052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.671 qpair failed and we were unable to recover it. 00:36:06.671 [2024-07-14 10:44:51.311269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.671 [2024-07-14 10:44:51.311299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.671 qpair failed and we were unable to recover it. 00:36:06.671 [2024-07-14 10:44:51.311421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.671 [2024-07-14 10:44:51.311450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.671 qpair failed and we were unable to recover it. 00:36:06.671 [2024-07-14 10:44:51.311569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.671 [2024-07-14 10:44:51.311599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.671 qpair failed and we were unable to recover it. 00:36:06.671 [2024-07-14 10:44:51.311818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.671 [2024-07-14 10:44:51.311849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.671 qpair failed and we were unable to recover it. 00:36:06.671 [2024-07-14 10:44:51.312033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.671 [2024-07-14 10:44:51.312063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.671 qpair failed and we were unable to recover it. 
00:36:06.671 [2024-07-14 10:44:51.312271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.671 [2024-07-14 10:44:51.312303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.671 qpair failed and we were unable to recover it. 00:36:06.671 [2024-07-14 10:44:51.312444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.671 [2024-07-14 10:44:51.312475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.671 qpair failed and we were unable to recover it. 00:36:06.671 [2024-07-14 10:44:51.312664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.671 [2024-07-14 10:44:51.312695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.671 qpair failed and we were unable to recover it. 00:36:06.671 [2024-07-14 10:44:51.312842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.671 [2024-07-14 10:44:51.312873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.671 qpair failed and we were unable to recover it. 00:36:06.671 [2024-07-14 10:44:51.313061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.671 [2024-07-14 10:44:51.313092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 00:36:06.672 [2024-07-14 10:44:51.313297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.672 [2024-07-14 10:44:51.313332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 00:36:06.672 [2024-07-14 10:44:51.313465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.672 [2024-07-14 10:44:51.313496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 00:36:06.672 [2024-07-14 10:44:51.313746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.672 [2024-07-14 10:44:51.313777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 00:36:06.672 [2024-07-14 10:44:51.313995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.672 [2024-07-14 10:44:51.314025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 00:36:06.672 [2024-07-14 10:44:51.314222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.672 [2024-07-14 10:44:51.314261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 
00:36:06.672 [2024-07-14 10:44:51.314440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.672 [2024-07-14 10:44:51.314471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 00:36:06.672 [2024-07-14 10:44:51.314652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.672 [2024-07-14 10:44:51.314683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 00:36:06.672 [2024-07-14 10:44:51.314807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.672 [2024-07-14 10:44:51.314838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 00:36:06.672 [2024-07-14 10:44:51.314979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.672 [2024-07-14 10:44:51.315010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 00:36:06.672 [2024-07-14 10:44:51.315190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.672 [2024-07-14 10:44:51.315220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 00:36:06.672 [2024-07-14 10:44:51.315430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.672 [2024-07-14 10:44:51.315461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 00:36:06.672 [2024-07-14 10:44:51.315583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.672 [2024-07-14 10:44:51.315615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 00:36:06.672 [2024-07-14 10:44:51.315789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.672 [2024-07-14 10:44:51.315820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 00:36:06.672 [2024-07-14 10:44:51.315940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.672 [2024-07-14 10:44:51.315971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 00:36:06.672 [2024-07-14 10:44:51.316088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.672 [2024-07-14 10:44:51.316119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 
00:36:06.672 [2024-07-14 10:44:51.316249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.672 [2024-07-14 10:44:51.316287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 00:36:06.672 [2024-07-14 10:44:51.316483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.672 [2024-07-14 10:44:51.316514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 00:36:06.672 [2024-07-14 10:44:51.316689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.672 [2024-07-14 10:44:51.316720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 00:36:06.672 [2024-07-14 10:44:51.316930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.672 [2024-07-14 10:44:51.316961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 00:36:06.672 [2024-07-14 10:44:51.317090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.672 [2024-07-14 10:44:51.317121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 00:36:06.672 [2024-07-14 10:44:51.317250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.672 [2024-07-14 10:44:51.317289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 00:36:06.672 [2024-07-14 10:44:51.317401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.672 [2024-07-14 10:44:51.317432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 00:36:06.672 [2024-07-14 10:44:51.317626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.672 [2024-07-14 10:44:51.317657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 00:36:06.672 [2024-07-14 10:44:51.317900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.672 [2024-07-14 10:44:51.317931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 00:36:06.672 [2024-07-14 10:44:51.318200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.672 [2024-07-14 10:44:51.318260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 
00:36:06.672 [2024-07-14 10:44:51.318439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.672 [2024-07-14 10:44:51.318470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 00:36:06.672 [2024-07-14 10:44:51.318651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.672 [2024-07-14 10:44:51.318683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 00:36:06.672 [2024-07-14 10:44:51.318949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.672 [2024-07-14 10:44:51.318979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 00:36:06.672 [2024-07-14 10:44:51.319168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.672 [2024-07-14 10:44:51.319199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 00:36:06.672 [2024-07-14 10:44:51.319394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.672 [2024-07-14 10:44:51.319425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 00:36:06.672 [2024-07-14 10:44:51.319544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.672 [2024-07-14 10:44:51.319575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 00:36:06.672 [2024-07-14 10:44:51.319707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.672 [2024-07-14 10:44:51.319738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 00:36:06.672 [2024-07-14 10:44:51.319874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.672 [2024-07-14 10:44:51.319904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 00:36:06.672 [2024-07-14 10:44:51.320082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.672 [2024-07-14 10:44:51.320112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 00:36:06.672 [2024-07-14 10:44:51.320238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.672 [2024-07-14 10:44:51.320270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 
00:36:06.672 [2024-07-14 10:44:51.320407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.672 [2024-07-14 10:44:51.320438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 00:36:06.672 [2024-07-14 10:44:51.320621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.672 [2024-07-14 10:44:51.320652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 00:36:06.672 [2024-07-14 10:44:51.320835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.672 [2024-07-14 10:44:51.320867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 00:36:06.672 [2024-07-14 10:44:51.321000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.672 [2024-07-14 10:44:51.321031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 00:36:06.672 [2024-07-14 10:44:51.321222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.672 [2024-07-14 10:44:51.321276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 00:36:06.672 [2024-07-14 10:44:51.321466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.672 [2024-07-14 10:44:51.321497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 00:36:06.673 [2024-07-14 10:44:51.321741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.673 [2024-07-14 10:44:51.321772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.673 qpair failed and we were unable to recover it. 00:36:06.673 [2024-07-14 10:44:51.321952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.673 [2024-07-14 10:44:51.321983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.673 qpair failed and we were unable to recover it. 00:36:06.673 [2024-07-14 10:44:51.322114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.673 [2024-07-14 10:44:51.322145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.673 qpair failed and we were unable to recover it. 00:36:06.673 [2024-07-14 10:44:51.322393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.673 [2024-07-14 10:44:51.322425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.673 qpair failed and we were unable to recover it. 
00:36:06.673 [2024-07-14 10:44:51.322611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.673 [2024-07-14 10:44:51.322643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.673 qpair failed and we were unable to recover it. 00:36:06.673 [2024-07-14 10:44:51.322838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.673 [2024-07-14 10:44:51.322868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.673 qpair failed and we were unable to recover it. 00:36:06.673 [2024-07-14 10:44:51.323053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.673 [2024-07-14 10:44:51.323084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.673 qpair failed and we were unable to recover it. 00:36:06.673 [2024-07-14 10:44:51.323351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.673 [2024-07-14 10:44:51.323383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.673 qpair failed and we were unable to recover it. 00:36:06.673 [2024-07-14 10:44:51.323636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.673 [2024-07-14 10:44:51.323667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.673 qpair failed and we were unable to recover it. 00:36:06.673 [2024-07-14 10:44:51.323855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.673 [2024-07-14 10:44:51.323886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.673 qpair failed and we were unable to recover it. 00:36:06.673 [2024-07-14 10:44:51.324066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.673 [2024-07-14 10:44:51.324097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.673 qpair failed and we were unable to recover it. 00:36:06.673 [2024-07-14 10:44:51.324376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.673 [2024-07-14 10:44:51.324409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.673 qpair failed and we were unable to recover it. 00:36:06.673 [2024-07-14 10:44:51.324519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.673 [2024-07-14 10:44:51.324551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.673 qpair failed and we were unable to recover it. 00:36:06.673 [2024-07-14 10:44:51.324677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.673 [2024-07-14 10:44:51.324709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.673 qpair failed and we were unable to recover it. 
00:36:06.673 [2024-07-14 10:44:51.324886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.673 [2024-07-14 10:44:51.324918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.673 qpair failed and we were unable to recover it. 00:36:06.673 [2024-07-14 10:44:51.325153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.673 [2024-07-14 10:44:51.325241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.673 qpair failed and we were unable to recover it. 00:36:06.673 [2024-07-14 10:44:51.325539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.673 [2024-07-14 10:44:51.325575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.673 qpair failed and we were unable to recover it. 00:36:06.673 [2024-07-14 10:44:51.325826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.673 [2024-07-14 10:44:51.325857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.673 qpair failed and we were unable to recover it. 00:36:06.673 [2024-07-14 10:44:51.326091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.673 [2024-07-14 10:44:51.326122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.673 qpair failed and we were unable to recover it. 00:36:06.673 [2024-07-14 10:44:51.326249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.673 [2024-07-14 10:44:51.326282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.673 qpair failed and we were unable to recover it. 00:36:06.673 [2024-07-14 10:44:51.326526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.673 [2024-07-14 10:44:51.326556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.673 qpair failed and we were unable to recover it. 00:36:06.673 [2024-07-14 10:44:51.326818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.673 [2024-07-14 10:44:51.326849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.673 qpair failed and we were unable to recover it. 00:36:06.673 [2024-07-14 10:44:51.326980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.673 [2024-07-14 10:44:51.327010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.673 qpair failed and we were unable to recover it. 00:36:06.673 [2024-07-14 10:44:51.327184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.673 [2024-07-14 10:44:51.327215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.673 qpair failed and we were unable to recover it. 
00:36:06.673 [2024-07-14 10:44:51.327394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.673 [2024-07-14 10:44:51.327425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.673 qpair failed and we were unable to recover it. 00:36:06.673 [2024-07-14 10:44:51.327612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.673 [2024-07-14 10:44:51.327643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.673 qpair failed and we were unable to recover it. 00:36:06.673 [2024-07-14 10:44:51.327827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.673 [2024-07-14 10:44:51.327858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.673 qpair failed and we were unable to recover it. 00:36:06.673 [2024-07-14 10:44:51.328044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.673 [2024-07-14 10:44:51.328076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.673 qpair failed and we were unable to recover it. 00:36:06.673 [2024-07-14 10:44:51.328266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.673 [2024-07-14 10:44:51.328307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.673 qpair failed and we were unable to recover it. 00:36:06.673 [2024-07-14 10:44:51.328499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.673 [2024-07-14 10:44:51.328530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.673 qpair failed and we were unable to recover it. 00:36:06.673 [2024-07-14 10:44:51.328705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.673 [2024-07-14 10:44:51.328736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.673 qpair failed and we were unable to recover it. 00:36:06.673 [2024-07-14 10:44:51.328881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.673 [2024-07-14 10:44:51.328913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.673 qpair failed and we were unable to recover it. 00:36:06.673 [2024-07-14 10:44:51.329096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.673 [2024-07-14 10:44:51.329127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.673 qpair failed and we were unable to recover it. 00:36:06.673 [2024-07-14 10:44:51.329374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.673 [2024-07-14 10:44:51.329406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.673 qpair failed and we were unable to recover it. 
00:36:06.673 [2024-07-14 10:44:51.329602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.673 [2024-07-14 10:44:51.329633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.673 qpair failed and we were unable to recover it. 00:36:06.673 [2024-07-14 10:44:51.329746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.673 [2024-07-14 10:44:51.329777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.673 qpair failed and we were unable to recover it. 00:36:06.673 [2024-07-14 10:44:51.329957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.673 [2024-07-14 10:44:51.329988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.673 qpair failed and we were unable to recover it. 00:36:06.673 [2024-07-14 10:44:51.330110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.673 [2024-07-14 10:44:51.330141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.673 qpair failed and we were unable to recover it. 00:36:06.673 [2024-07-14 10:44:51.330330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.673 [2024-07-14 10:44:51.330362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.673 qpair failed and we were unable to recover it. 00:36:06.673 [2024-07-14 10:44:51.330548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.673 [2024-07-14 10:44:51.330579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.673 qpair failed and we were unable to recover it. 00:36:06.673 [2024-07-14 10:44:51.330710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.673 [2024-07-14 10:44:51.330741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.673 qpair failed and we were unable to recover it. 00:36:06.674 [2024-07-14 10:44:51.330854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.674 [2024-07-14 10:44:51.330885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.674 qpair failed and we were unable to recover it. 00:36:06.674 [2024-07-14 10:44:51.331078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.674 [2024-07-14 10:44:51.331110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.674 qpair failed and we were unable to recover it. 00:36:06.674 [2024-07-14 10:44:51.331222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.674 [2024-07-14 10:44:51.331263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.674 qpair failed and we were unable to recover it. 
00:36:06.674 [2024-07-14 10:44:51.331389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.674 [2024-07-14 10:44:51.331420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.674 qpair failed and we were unable to recover it. 00:36:06.674 [2024-07-14 10:44:51.331705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.674 [2024-07-14 10:44:51.331736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.674 qpair failed and we were unable to recover it. 00:36:06.674 [2024-07-14 10:44:51.331912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.674 [2024-07-14 10:44:51.331943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.674 qpair failed and we were unable to recover it. 00:36:06.674 [2024-07-14 10:44:51.332119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.674 [2024-07-14 10:44:51.332149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.674 qpair failed and we were unable to recover it. 00:36:06.674 [2024-07-14 10:44:51.332267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.674 [2024-07-14 10:44:51.332298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.674 qpair failed and we were unable to recover it. 00:36:06.674 [2024-07-14 10:44:51.332427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.674 [2024-07-14 10:44:51.332458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.674 qpair failed and we were unable to recover it. 00:36:06.674 [2024-07-14 10:44:51.332648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.674 [2024-07-14 10:44:51.332678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.674 qpair failed and we were unable to recover it. 00:36:06.674 [2024-07-14 10:44:51.332811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.674 [2024-07-14 10:44:51.332842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.674 qpair failed and we were unable to recover it. 00:36:06.674 [2024-07-14 10:44:51.333057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.674 [2024-07-14 10:44:51.333088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.674 qpair failed and we were unable to recover it. 00:36:06.674 [2024-07-14 10:44:51.333211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.674 [2024-07-14 10:44:51.333253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.674 qpair failed and we were unable to recover it. 
00:36:06.674 [2024-07-14 10:44:51.333449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.674 [2024-07-14 10:44:51.333480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.674 qpair failed and we were unable to recover it. 00:36:06.674 [2024-07-14 10:44:51.333708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.674 [2024-07-14 10:44:51.333775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.674 qpair failed and we were unable to recover it. 00:36:06.674 [2024-07-14 10:44:51.333981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.674 [2024-07-14 10:44:51.334016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.674 qpair failed and we were unable to recover it. 00:36:06.674 [2024-07-14 10:44:51.334142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.674 [2024-07-14 10:44:51.334172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.674 qpair failed and we were unable to recover it. 00:36:06.674 [2024-07-14 10:44:51.334313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.674 [2024-07-14 10:44:51.334345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.674 qpair failed and we were unable to recover it. 00:36:06.674 [2024-07-14 10:44:51.334522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.674 [2024-07-14 10:44:51.334553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.674 qpair failed and we were unable to recover it. 00:36:06.674 [2024-07-14 10:44:51.334750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.674 [2024-07-14 10:44:51.334787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.674 qpair failed and we were unable to recover it. 00:36:06.674 [2024-07-14 10:44:51.335056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.674 [2024-07-14 10:44:51.335086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.674 qpair failed and we were unable to recover it. 00:36:06.674 [2024-07-14 10:44:51.335209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.674 [2024-07-14 10:44:51.335250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.674 qpair failed and we were unable to recover it. 00:36:06.674 [2024-07-14 10:44:51.335384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.674 [2024-07-14 10:44:51.335415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.674 qpair failed and we were unable to recover it. 
00:36:06.674 [2024-07-14 10:44:51.335713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.674 [2024-07-14 10:44:51.335745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.674 qpair failed and we were unable to recover it. 00:36:06.674 [2024-07-14 10:44:51.335881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.674 [2024-07-14 10:44:51.335912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.674 qpair failed and we were unable to recover it. 00:36:06.674 [2024-07-14 10:44:51.336041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.674 [2024-07-14 10:44:51.336072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.674 qpair failed and we were unable to recover it. 00:36:06.674 [2024-07-14 10:44:51.336200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.674 [2024-07-14 10:44:51.336242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.674 qpair failed and we were unable to recover it. 00:36:06.674 [2024-07-14 10:44:51.336437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.674 [2024-07-14 10:44:51.336477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.674 qpair failed and we were unable to recover it. 00:36:06.674 [2024-07-14 10:44:51.336604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.674 [2024-07-14 10:44:51.336635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.674 qpair failed and we were unable to recover it. 00:36:06.674 [2024-07-14 10:44:51.336904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.674 [2024-07-14 10:44:51.336935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.674 qpair failed and we were unable to recover it. 00:36:06.674 [2024-07-14 10:44:51.337173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.674 [2024-07-14 10:44:51.337204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.674 qpair failed and we were unable to recover it. 00:36:06.674 [2024-07-14 10:44:51.337417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.674 [2024-07-14 10:44:51.337447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.674 qpair failed and we were unable to recover it. 00:36:06.674 [2024-07-14 10:44:51.337580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.674 [2024-07-14 10:44:51.337610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.674 qpair failed and we were unable to recover it. 
00:36:06.674 [2024-07-14 10:44:51.337736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.674 [2024-07-14 10:44:51.337771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.674 qpair failed and we were unable to recover it. 00:36:06.674 [2024-07-14 10:44:51.337893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.674 [2024-07-14 10:44:51.337924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.674 qpair failed and we were unable to recover it. 00:36:06.674 [2024-07-14 10:44:51.338059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.674 [2024-07-14 10:44:51.338090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.674 qpair failed and we were unable to recover it. 00:36:06.674 [2024-07-14 10:44:51.338240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.674 [2024-07-14 10:44:51.338273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.674 qpair failed and we were unable to recover it. 00:36:06.674 [2024-07-14 10:44:51.338452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.674 [2024-07-14 10:44:51.338483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.674 qpair failed and we were unable to recover it. 00:36:06.674 [2024-07-14 10:44:51.338702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.674 [2024-07-14 10:44:51.338734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.674 qpair failed and we were unable to recover it. 00:36:06.674 [2024-07-14 10:44:51.338911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.674 [2024-07-14 10:44:51.338942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.674 qpair failed and we were unable to recover it. 00:36:06.674 [2024-07-14 10:44:51.339085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.674 [2024-07-14 10:44:51.339117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.675 qpair failed and we were unable to recover it. 00:36:06.675 [2024-07-14 10:44:51.339245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.675 [2024-07-14 10:44:51.339278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.675 qpair failed and we were unable to recover it. 00:36:06.675 [2024-07-14 10:44:51.339549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.675 [2024-07-14 10:44:51.339580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.675 qpair failed and we were unable to recover it. 
00:36:06.675 [2024-07-14 10:44:51.339697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.675 [2024-07-14 10:44:51.339727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.675 qpair failed and we were unable to recover it. 00:36:06.675 [2024-07-14 10:44:51.339915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.675 [2024-07-14 10:44:51.339946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.675 qpair failed and we were unable to recover it. 00:36:06.675 [2024-07-14 10:44:51.340077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.675 [2024-07-14 10:44:51.340108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.675 qpair failed and we were unable to recover it. 00:36:06.675 [2024-07-14 10:44:51.340242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.675 [2024-07-14 10:44:51.340274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.675 qpair failed and we were unable to recover it. 00:36:06.675 [2024-07-14 10:44:51.340491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.675 [2024-07-14 10:44:51.340521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.675 qpair failed and we were unable to recover it. 00:36:06.675 [2024-07-14 10:44:51.340697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.675 [2024-07-14 10:44:51.340728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.675 qpair failed and we were unable to recover it. 00:36:06.675 [2024-07-14 10:44:51.340930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.675 [2024-07-14 10:44:51.340961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.675 qpair failed and we were unable to recover it. 00:36:06.675 [2024-07-14 10:44:51.341150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.675 [2024-07-14 10:44:51.341181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.675 qpair failed and we were unable to recover it. 00:36:06.675 [2024-07-14 10:44:51.341334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.675 [2024-07-14 10:44:51.341366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.675 qpair failed and we were unable to recover it. 00:36:06.675 [2024-07-14 10:44:51.341545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.675 [2024-07-14 10:44:51.341576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.675 qpair failed and we were unable to recover it. 
00:36:06.675 [2024-07-14 10:44:51.341775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.675 [2024-07-14 10:44:51.341806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.675 qpair failed and we were unable to recover it. 00:36:06.675 [2024-07-14 10:44:51.341992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.675 [2024-07-14 10:44:51.342024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.675 qpair failed and we were unable to recover it. 00:36:06.675 [2024-07-14 10:44:51.342279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.675 [2024-07-14 10:44:51.342313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.675 qpair failed and we were unable to recover it. 00:36:06.675 [2024-07-14 10:44:51.342524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.675 [2024-07-14 10:44:51.342555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.675 qpair failed and we were unable to recover it. 00:36:06.675 [2024-07-14 10:44:51.342762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.675 [2024-07-14 10:44:51.342792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.675 qpair failed and we were unable to recover it. 00:36:06.675 [2024-07-14 10:44:51.343039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.675 [2024-07-14 10:44:51.343070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.675 qpair failed and we were unable to recover it. 00:36:06.675 [2024-07-14 10:44:51.343360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.675 [2024-07-14 10:44:51.343392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.675 qpair failed and we were unable to recover it. 00:36:06.675 [2024-07-14 10:44:51.343586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.675 [2024-07-14 10:44:51.343617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.675 qpair failed and we were unable to recover it. 00:36:06.675 [2024-07-14 10:44:51.343799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.675 [2024-07-14 10:44:51.343830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.675 qpair failed and we were unable to recover it. 00:36:06.675 [2024-07-14 10:44:51.344114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.675 [2024-07-14 10:44:51.344145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.675 qpair failed and we were unable to recover it. 
00:36:06.675 [2024-07-14 10:44:51.344385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.675 [2024-07-14 10:44:51.344416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.675 qpair failed and we were unable to recover it. 00:36:06.675 [2024-07-14 10:44:51.344543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.675 [2024-07-14 10:44:51.344575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.675 qpair failed and we were unable to recover it. 00:36:06.675 [2024-07-14 10:44:51.344864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.675 [2024-07-14 10:44:51.344895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.675 qpair failed and we were unable to recover it. 00:36:06.675 [2024-07-14 10:44:51.345078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.675 [2024-07-14 10:44:51.345109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.675 qpair failed and we were unable to recover it. 00:36:06.675 [2024-07-14 10:44:51.345297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.675 [2024-07-14 10:44:51.345335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.675 qpair failed and we were unable to recover it. 00:36:06.675 [2024-07-14 10:44:51.345521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.675 [2024-07-14 10:44:51.345551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.675 qpair failed and we were unable to recover it. 00:36:06.675 [2024-07-14 10:44:51.345669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.675 [2024-07-14 10:44:51.345700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.675 qpair failed and we were unable to recover it. 00:36:06.675 [2024-07-14 10:44:51.345820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.675 [2024-07-14 10:44:51.345852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.675 qpair failed and we were unable to recover it. 00:36:06.675 [2024-07-14 10:44:51.346030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.675 [2024-07-14 10:44:51.346061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.675 qpair failed and we were unable to recover it. 00:36:06.675 [2024-07-14 10:44:51.346208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.675 [2024-07-14 10:44:51.346251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.675 qpair failed and we were unable to recover it. 
00:36:06.675 [2024-07-14 10:44:51.346480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.675 [2024-07-14 10:44:51.346511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.675 qpair failed and we were unable to recover it. 00:36:06.675 [2024-07-14 10:44:51.346686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.675 [2024-07-14 10:44:51.346717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.675 qpair failed and we were unable to recover it. 00:36:06.675 [2024-07-14 10:44:51.346923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.675 [2024-07-14 10:44:51.346954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.676 qpair failed and we were unable to recover it. 00:36:06.676 [2024-07-14 10:44:51.347148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.676 [2024-07-14 10:44:51.347179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.676 qpair failed and we were unable to recover it. 00:36:06.676 [2024-07-14 10:44:51.347466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.676 [2024-07-14 10:44:51.347499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.676 qpair failed and we were unable to recover it. 00:36:06.676 [2024-07-14 10:44:51.347624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.676 [2024-07-14 10:44:51.347655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.676 qpair failed and we were unable to recover it. 00:36:06.676 [2024-07-14 10:44:51.347830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.676 [2024-07-14 10:44:51.347861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.676 qpair failed and we were unable to recover it. 00:36:06.676 [2024-07-14 10:44:51.348032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.676 [2024-07-14 10:44:51.348063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.676 qpair failed and we were unable to recover it. 00:36:06.676 [2024-07-14 10:44:51.348175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.676 [2024-07-14 10:44:51.348206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.676 qpair failed and we were unable to recover it. 00:36:06.676 [2024-07-14 10:44:51.348500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.676 [2024-07-14 10:44:51.348532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.676 qpair failed and we were unable to recover it. 
00:36:06.676 [2024-07-14 10:44:51.348794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.676 [2024-07-14 10:44:51.348824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.676 qpair failed and we were unable to recover it. 00:36:06.676 [2024-07-14 10:44:51.349017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.676 [2024-07-14 10:44:51.349048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.676 qpair failed and we were unable to recover it. 00:36:06.676 [2024-07-14 10:44:51.349240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.676 [2024-07-14 10:44:51.349272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.676 qpair failed and we were unable to recover it. 00:36:06.676 [2024-07-14 10:44:51.349404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.676 [2024-07-14 10:44:51.349435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.676 qpair failed and we were unable to recover it. 00:36:06.676 [2024-07-14 10:44:51.349577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.676 [2024-07-14 10:44:51.349608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.676 qpair failed and we were unable to recover it. 00:36:06.676 [2024-07-14 10:44:51.349735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.676 [2024-07-14 10:44:51.349766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.676 qpair failed and we were unable to recover it. 00:36:06.676 [2024-07-14 10:44:51.349952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.676 [2024-07-14 10:44:51.349983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.676 qpair failed and we were unable to recover it. 00:36:06.676 [2024-07-14 10:44:51.350112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.676 [2024-07-14 10:44:51.350143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.676 qpair failed and we were unable to recover it. 00:36:06.676 [2024-07-14 10:44:51.350435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.676 [2024-07-14 10:44:51.350467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.676 qpair failed and we were unable to recover it. 00:36:06.676 [2024-07-14 10:44:51.350590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.676 [2024-07-14 10:44:51.350621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.676 qpair failed and we were unable to recover it. 
00:36:06.676 [2024-07-14 10:44:51.350750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.676 [2024-07-14 10:44:51.350780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.676 qpair failed and we were unable to recover it. 00:36:06.676 [2024-07-14 10:44:51.350895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.676 [2024-07-14 10:44:51.350926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.676 qpair failed and we were unable to recover it. 00:36:06.676 [2024-07-14 10:44:51.351130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.676 [2024-07-14 10:44:51.351161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.676 qpair failed and we were unable to recover it. 00:36:06.676 [2024-07-14 10:44:51.351440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.676 [2024-07-14 10:44:51.351472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.676 qpair failed and we were unable to recover it. 00:36:06.676 [2024-07-14 10:44:51.351657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.676 [2024-07-14 10:44:51.351688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.676 qpair failed and we were unable to recover it. 00:36:06.676 [2024-07-14 10:44:51.351819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.676 [2024-07-14 10:44:51.351850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.676 qpair failed and we were unable to recover it. 00:36:06.676 [2024-07-14 10:44:51.352046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.676 [2024-07-14 10:44:51.352077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.676 qpair failed and we were unable to recover it. 00:36:06.676 [2024-07-14 10:44:51.352284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.676 [2024-07-14 10:44:51.352316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.676 qpair failed and we were unable to recover it. 00:36:06.676 [2024-07-14 10:44:51.352586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.676 [2024-07-14 10:44:51.352617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.676 qpair failed and we were unable to recover it. 00:36:06.676 [2024-07-14 10:44:51.352756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.676 [2024-07-14 10:44:51.352787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.676 qpair failed and we were unable to recover it. 
00:36:06.676 [2024-07-14 10:44:51.353052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.676 [2024-07-14 10:44:51.353084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.676 qpair failed and we were unable to recover it. 00:36:06.676 [2024-07-14 10:44:51.353216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.676 [2024-07-14 10:44:51.353255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.676 qpair failed and we were unable to recover it. 00:36:06.676 [2024-07-14 10:44:51.353524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.676 [2024-07-14 10:44:51.353554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.676 qpair failed and we were unable to recover it. 00:36:06.676 [2024-07-14 10:44:51.353696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.676 [2024-07-14 10:44:51.353726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.676 qpair failed and we were unable to recover it. 00:36:06.676 [2024-07-14 10:44:51.353919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.676 [2024-07-14 10:44:51.353950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.676 qpair failed and we were unable to recover it. 00:36:06.676 [2024-07-14 10:44:51.354156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.676 [2024-07-14 10:44:51.354187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.676 qpair failed and we were unable to recover it. 00:36:06.676 [2024-07-14 10:44:51.354483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.676 [2024-07-14 10:44:51.354515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.676 qpair failed and we were unable to recover it. 00:36:06.676 [2024-07-14 10:44:51.354701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.676 [2024-07-14 10:44:51.354731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.676 qpair failed and we were unable to recover it. 00:36:06.676 [2024-07-14 10:44:51.354962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.676 [2024-07-14 10:44:51.354993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.676 qpair failed and we were unable to recover it. 00:36:06.676 [2024-07-14 10:44:51.355189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.676 [2024-07-14 10:44:51.355220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.676 qpair failed and we were unable to recover it. 
00:36:06.676 [2024-07-14 10:44:51.355434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.676 [2024-07-14 10:44:51.355465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.676 qpair failed and we were unable to recover it. 00:36:06.676 [2024-07-14 10:44:51.355638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.676 [2024-07-14 10:44:51.355669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.676 qpair failed and we were unable to recover it. 00:36:06.676 [2024-07-14 10:44:51.355816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.676 [2024-07-14 10:44:51.355847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.676 qpair failed and we were unable to recover it. 00:36:06.677 [2024-07-14 10:44:51.356019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.677 [2024-07-14 10:44:51.356050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.677 qpair failed and we were unable to recover it. 00:36:06.677 [2024-07-14 10:44:51.356166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.677 [2024-07-14 10:44:51.356197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.677 qpair failed and we were unable to recover it. 00:36:06.677 [2024-07-14 10:44:51.356384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.677 [2024-07-14 10:44:51.356416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.677 qpair failed and we were unable to recover it. 00:36:06.677 [2024-07-14 10:44:51.356556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.677 [2024-07-14 10:44:51.356587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.677 qpair failed and we were unable to recover it. 00:36:06.677 [2024-07-14 10:44:51.356722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.677 [2024-07-14 10:44:51.356753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.677 qpair failed and we were unable to recover it. 00:36:06.677 [2024-07-14 10:44:51.356889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.677 [2024-07-14 10:44:51.356920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.677 qpair failed and we were unable to recover it. 00:36:06.677 [2024-07-14 10:44:51.357099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.677 [2024-07-14 10:44:51.357130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.677 qpair failed and we were unable to recover it. 
00:36:06.677 [2024-07-14 10:44:51.357324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.677 [2024-07-14 10:44:51.357356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.677 qpair failed and we were unable to recover it. 00:36:06.677 [2024-07-14 10:44:51.357538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.677 [2024-07-14 10:44:51.357569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.677 qpair failed and we were unable to recover it. 00:36:06.677 [2024-07-14 10:44:51.357683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.677 [2024-07-14 10:44:51.357714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.677 qpair failed and we were unable to recover it. 00:36:06.677 [2024-07-14 10:44:51.357986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.677 [2024-07-14 10:44:51.358017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.677 qpair failed and we were unable to recover it. 00:36:06.677 [2024-07-14 10:44:51.358129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.677 [2024-07-14 10:44:51.358160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.677 qpair failed and we were unable to recover it. 00:36:06.677 [2024-07-14 10:44:51.358296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.677 [2024-07-14 10:44:51.358328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.677 qpair failed and we were unable to recover it. 00:36:06.677 [2024-07-14 10:44:51.358509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.677 [2024-07-14 10:44:51.358539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.677 qpair failed and we were unable to recover it. 00:36:06.677 [2024-07-14 10:44:51.358806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.677 [2024-07-14 10:44:51.358837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.677 qpair failed and we were unable to recover it. 00:36:06.677 [2024-07-14 10:44:51.358969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.677 [2024-07-14 10:44:51.359000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.677 qpair failed and we were unable to recover it. 00:36:06.677 [2024-07-14 10:44:51.359243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.677 [2024-07-14 10:44:51.359275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.677 qpair failed and we were unable to recover it. 
00:36:06.677 [2024-07-14 10:44:51.359390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.677 [2024-07-14 10:44:51.359421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.677 qpair failed and we were unable to recover it. 00:36:06.677 [2024-07-14 10:44:51.359610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.677 [2024-07-14 10:44:51.359646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.677 qpair failed and we were unable to recover it. 00:36:06.677 [2024-07-14 10:44:51.359785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.677 [2024-07-14 10:44:51.359816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.677 qpair failed and we were unable to recover it. 00:36:06.677 [2024-07-14 10:44:51.359939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.677 [2024-07-14 10:44:51.359970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.677 qpair failed and we were unable to recover it. 00:36:06.677 [2024-07-14 10:44:51.360217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.677 [2024-07-14 10:44:51.360255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.677 qpair failed and we were unable to recover it. 00:36:06.677 [2024-07-14 10:44:51.360383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.677 [2024-07-14 10:44:51.360413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.677 qpair failed and we were unable to recover it. 00:36:06.677 [2024-07-14 10:44:51.360598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.677 [2024-07-14 10:44:51.360628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.677 qpair failed and we were unable to recover it. 00:36:06.677 [2024-07-14 10:44:51.360824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.677 [2024-07-14 10:44:51.360854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.677 qpair failed and we were unable to recover it. 00:36:06.677 [2024-07-14 10:44:51.361055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.677 [2024-07-14 10:44:51.361086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.677 qpair failed and we were unable to recover it. 00:36:06.677 [2024-07-14 10:44:51.361262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.677 [2024-07-14 10:44:51.361295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.677 qpair failed and we were unable to recover it. 
00:36:06.677 [2024-07-14 10:44:51.361487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.677 [2024-07-14 10:44:51.361518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.677 qpair failed and we were unable to recover it. 00:36:06.677 [2024-07-14 10:44:51.361734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.677 [2024-07-14 10:44:51.361766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.677 qpair failed and we were unable to recover it. 00:36:06.677 [2024-07-14 10:44:51.361889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.677 [2024-07-14 10:44:51.361920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.677 qpair failed and we were unable to recover it. 00:36:06.677 [2024-07-14 10:44:51.362099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.677 [2024-07-14 10:44:51.362130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.677 qpair failed and we were unable to recover it. 00:36:06.677 [2024-07-14 10:44:51.362250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.677 [2024-07-14 10:44:51.362283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.677 qpair failed and we were unable to recover it. 00:36:06.677 [2024-07-14 10:44:51.362486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.677 [2024-07-14 10:44:51.362518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.677 qpair failed and we were unable to recover it. 00:36:06.677 [2024-07-14 10:44:51.362650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.677 [2024-07-14 10:44:51.362681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.677 qpair failed and we were unable to recover it. 00:36:06.677 [2024-07-14 10:44:51.362859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.677 [2024-07-14 10:44:51.362889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.677 qpair failed and we were unable to recover it. 00:36:06.677 [2024-07-14 10:44:51.363018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.677 [2024-07-14 10:44:51.363049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.677 qpair failed and we were unable to recover it. 00:36:06.677 [2024-07-14 10:44:51.363181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.677 [2024-07-14 10:44:51.363212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.677 qpair failed and we were unable to recover it. 
00:36:06.677 [2024-07-14 10:44:51.363389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.677 [2024-07-14 10:44:51.363420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.677 qpair failed and we were unable to recover it. 00:36:06.677 [2024-07-14 10:44:51.363550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.677 [2024-07-14 10:44:51.363580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.677 qpair failed and we were unable to recover it. 00:36:06.677 [2024-07-14 10:44:51.363771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.677 [2024-07-14 10:44:51.363802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.677 qpair failed and we were unable to recover it. 00:36:06.677 [2024-07-14 10:44:51.363994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.677 [2024-07-14 10:44:51.364025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.677 qpair failed and we were unable to recover it. 00:36:06.678 [2024-07-14 10:44:51.364200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.678 [2024-07-14 10:44:51.364237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.678 qpair failed and we were unable to recover it. 00:36:06.678 [2024-07-14 10:44:51.364433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.678 [2024-07-14 10:44:51.364463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.678 qpair failed and we were unable to recover it. 00:36:06.678 [2024-07-14 10:44:51.364593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.678 [2024-07-14 10:44:51.364624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.678 qpair failed and we were unable to recover it. 00:36:06.678 [2024-07-14 10:44:51.364811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.678 [2024-07-14 10:44:51.364842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.678 qpair failed and we were unable to recover it. 00:36:06.678 [2024-07-14 10:44:51.365025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.678 [2024-07-14 10:44:51.365055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.678 qpair failed and we were unable to recover it. 00:36:06.678 [2024-07-14 10:44:51.365255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.678 [2024-07-14 10:44:51.365287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.678 qpair failed and we were unable to recover it. 
00:36:06.678 [2024-07-14 10:44:51.365469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.678 [2024-07-14 10:44:51.365500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.678 qpair failed and we were unable to recover it. 00:36:06.678 [2024-07-14 10:44:51.365609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.678 [2024-07-14 10:44:51.365640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.678 qpair failed and we were unable to recover it. 00:36:06.678 [2024-07-14 10:44:51.365837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.678 [2024-07-14 10:44:51.365868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.678 qpair failed and we were unable to recover it. 00:36:06.678 [2024-07-14 10:44:51.366049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.678 [2024-07-14 10:44:51.366080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.678 qpair failed and we were unable to recover it. 00:36:06.678 [2024-07-14 10:44:51.366348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.678 [2024-07-14 10:44:51.366380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.678 qpair failed and we were unable to recover it. 00:36:06.678 [2024-07-14 10:44:51.366493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.678 [2024-07-14 10:44:51.366523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.678 qpair failed and we were unable to recover it. 00:36:06.678 [2024-07-14 10:44:51.366704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.678 [2024-07-14 10:44:51.366735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.678 qpair failed and we were unable to recover it. 00:36:06.678 [2024-07-14 10:44:51.366975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.678 [2024-07-14 10:44:51.367006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.678 qpair failed and we were unable to recover it. 00:36:06.678 [2024-07-14 10:44:51.367150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.678 [2024-07-14 10:44:51.367180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.678 qpair failed and we were unable to recover it. 00:36:06.678 [2024-07-14 10:44:51.367372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.678 [2024-07-14 10:44:51.367404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.678 qpair failed and we were unable to recover it. 
00:36:06.678 [2024-07-14 10:44:51.367699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.678 [2024-07-14 10:44:51.367730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.678 qpair failed and we were unable to recover it. 00:36:06.678 [2024-07-14 10:44:51.367925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.678 [2024-07-14 10:44:51.367961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.678 qpair failed and we were unable to recover it. 00:36:06.678 [2024-07-14 10:44:51.368232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.678 [2024-07-14 10:44:51.368264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.678 qpair failed and we were unable to recover it. 00:36:06.678 [2024-07-14 10:44:51.368376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.678 [2024-07-14 10:44:51.368407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.678 qpair failed and we were unable to recover it. 00:36:06.678 [2024-07-14 10:44:51.368586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.678 [2024-07-14 10:44:51.368616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.678 qpair failed and we were unable to recover it. 00:36:06.678 [2024-07-14 10:44:51.368742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.678 [2024-07-14 10:44:51.368773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.678 qpair failed and we were unable to recover it. 00:36:06.678 [2024-07-14 10:44:51.368905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.678 [2024-07-14 10:44:51.368935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.678 qpair failed and we were unable to recover it. 00:36:06.678 [2024-07-14 10:44:51.369056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.678 [2024-07-14 10:44:51.369087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.678 qpair failed and we were unable to recover it. 00:36:06.678 [2024-07-14 10:44:51.369212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.678 [2024-07-14 10:44:51.369249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.678 qpair failed and we were unable to recover it. 00:36:06.678 [2024-07-14 10:44:51.369370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.678 [2024-07-14 10:44:51.369401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.678 qpair failed and we were unable to recover it. 
00:36:06.678 [2024-07-14 10:44:51.369589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.678 [2024-07-14 10:44:51.369620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.678 qpair failed and we were unable to recover it. 00:36:06.678 [2024-07-14 10:44:51.369908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.678 [2024-07-14 10:44:51.369940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.678 qpair failed and we were unable to recover it. 00:36:06.678 [2024-07-14 10:44:51.370068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.678 [2024-07-14 10:44:51.370098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.678 qpair failed and we were unable to recover it. 00:36:06.678 [2024-07-14 10:44:51.370364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.678 [2024-07-14 10:44:51.370396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.678 qpair failed and we were unable to recover it. 00:36:06.678 [2024-07-14 10:44:51.370609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.678 [2024-07-14 10:44:51.370641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.678 qpair failed and we were unable to recover it. 00:36:06.678 [2024-07-14 10:44:51.370865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.678 [2024-07-14 10:44:51.370896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.678 qpair failed and we were unable to recover it. 00:36:06.678 [2024-07-14 10:44:51.371071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.678 [2024-07-14 10:44:51.371102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.678 qpair failed and we were unable to recover it. 00:36:06.678 [2024-07-14 10:44:51.371291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.678 [2024-07-14 10:44:51.371325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.678 qpair failed and we were unable to recover it. 00:36:06.678 [2024-07-14 10:44:51.371460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.678 [2024-07-14 10:44:51.371491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.678 qpair failed and we were unable to recover it. 00:36:06.678 [2024-07-14 10:44:51.371607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.678 [2024-07-14 10:44:51.371638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.678 qpair failed and we were unable to recover it. 
00:36:06.678 [2024-07-14 10:44:51.371820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.678 [2024-07-14 10:44:51.371850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.678 qpair failed and we were unable to recover it. 00:36:06.678 [2024-07-14 10:44:51.372029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.678 [2024-07-14 10:44:51.372060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.678 qpair failed and we were unable to recover it. 00:36:06.678 [2024-07-14 10:44:51.372350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.678 [2024-07-14 10:44:51.372383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.678 qpair failed and we were unable to recover it. 00:36:06.678 [2024-07-14 10:44:51.372507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.678 [2024-07-14 10:44:51.372539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.678 qpair failed and we were unable to recover it. 00:36:06.678 [2024-07-14 10:44:51.372732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.678 [2024-07-14 10:44:51.372763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.679 qpair failed and we were unable to recover it. 00:36:06.679 [2024-07-14 10:44:51.372948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.679 [2024-07-14 10:44:51.372980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.679 qpair failed and we were unable to recover it. 00:36:06.679 [2024-07-14 10:44:51.373166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.679 [2024-07-14 10:44:51.373197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.679 qpair failed and we were unable to recover it. 00:36:06.679 [2024-07-14 10:44:51.373466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.679 [2024-07-14 10:44:51.373498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.679 qpair failed and we were unable to recover it. 00:36:06.679 [2024-07-14 10:44:51.373754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.679 [2024-07-14 10:44:51.373785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.679 qpair failed and we were unable to recover it. 00:36:06.679 [2024-07-14 10:44:51.373936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.679 [2024-07-14 10:44:51.373966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.679 qpair failed and we were unable to recover it. 
00:36:06.679 [2024-07-14 10:44:51.374218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.679 [2024-07-14 10:44:51.374259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.679 qpair failed and we were unable to recover it. 00:36:06.679 [2024-07-14 10:44:51.374394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.679 [2024-07-14 10:44:51.374425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.679 qpair failed and we were unable to recover it. 00:36:06.679 [2024-07-14 10:44:51.374553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.679 [2024-07-14 10:44:51.374584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.679 qpair failed and we were unable to recover it. 00:36:06.679 [2024-07-14 10:44:51.374770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.679 [2024-07-14 10:44:51.374801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.679 qpair failed and we were unable to recover it. 00:36:06.679 [2024-07-14 10:44:51.374987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.679 [2024-07-14 10:44:51.375017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.679 qpair failed and we were unable to recover it. 00:36:06.679 [2024-07-14 10:44:51.375284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.679 [2024-07-14 10:44:51.375316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.679 qpair failed and we were unable to recover it. 00:36:06.679 [2024-07-14 10:44:51.375518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.679 [2024-07-14 10:44:51.375548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.679 qpair failed and we were unable to recover it. 00:36:06.679 [2024-07-14 10:44:51.375741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.679 [2024-07-14 10:44:51.375772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.679 qpair failed and we were unable to recover it. 00:36:06.679 [2024-07-14 10:44:51.375909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.679 [2024-07-14 10:44:51.375939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.679 qpair failed and we were unable to recover it. 00:36:06.679 [2024-07-14 10:44:51.376222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.679 [2024-07-14 10:44:51.376261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.679 qpair failed and we were unable to recover it. 
00:36:06.679 [2024-07-14 10:44:51.376381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.679 [2024-07-14 10:44:51.376411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.679 qpair failed and we were unable to recover it. 00:36:06.679 [2024-07-14 10:44:51.376655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.679 [2024-07-14 10:44:51.376690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.679 qpair failed and we were unable to recover it. 00:36:06.679 [2024-07-14 10:44:51.376819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.679 [2024-07-14 10:44:51.376850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.679 qpair failed and we were unable to recover it. 00:36:06.679 [2024-07-14 10:44:51.377025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.679 [2024-07-14 10:44:51.377055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.679 qpair failed and we were unable to recover it. 00:36:06.679 [2024-07-14 10:44:51.377177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.679 [2024-07-14 10:44:51.377208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.679 qpair failed and we were unable to recover it. 00:36:06.679 [2024-07-14 10:44:51.377392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.679 [2024-07-14 10:44:51.377422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.679 qpair failed and we were unable to recover it. 00:36:06.679 [2024-07-14 10:44:51.377712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.679 [2024-07-14 10:44:51.377743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.679 qpair failed and we were unable to recover it. 00:36:06.679 [2024-07-14 10:44:51.377867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.679 [2024-07-14 10:44:51.377897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.679 qpair failed and we were unable to recover it. 00:36:06.679 [2024-07-14 10:44:51.378072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.679 [2024-07-14 10:44:51.378103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.679 qpair failed and we were unable to recover it. 00:36:06.679 [2024-07-14 10:44:51.378289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.679 [2024-07-14 10:44:51.378321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.679 qpair failed and we were unable to recover it. 
00:36:06.679 [2024-07-14 10:44:51.378499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.679 [2024-07-14 10:44:51.378529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.679 qpair failed and we were unable to recover it. 00:36:06.679 [2024-07-14 10:44:51.378712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.679 [2024-07-14 10:44:51.378743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.679 qpair failed and we were unable to recover it. 00:36:06.679 [2024-07-14 10:44:51.378918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.679 [2024-07-14 10:44:51.378949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.679 qpair failed and we were unable to recover it. 00:36:06.679 [2024-07-14 10:44:51.379127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.679 [2024-07-14 10:44:51.379159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.679 qpair failed and we were unable to recover it. 00:36:06.679 [2024-07-14 10:44:51.379299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.679 [2024-07-14 10:44:51.379332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.679 qpair failed and we were unable to recover it. 00:36:06.679 [2024-07-14 10:44:51.379478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.679 [2024-07-14 10:44:51.379509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.679 qpair failed and we were unable to recover it. 00:36:06.679 [2024-07-14 10:44:51.379691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.679 [2024-07-14 10:44:51.379722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.679 qpair failed and we were unable to recover it. 00:36:06.679 [2024-07-14 10:44:51.379923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.679 [2024-07-14 10:44:51.379954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.679 qpair failed and we were unable to recover it. 00:36:06.679 [2024-07-14 10:44:51.380186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.679 [2024-07-14 10:44:51.380216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.679 qpair failed and we were unable to recover it. 00:36:06.679 [2024-07-14 10:44:51.380434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.679 [2024-07-14 10:44:51.380466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.679 qpair failed and we were unable to recover it. 
00:36:06.679 [2024-07-14 10:44:51.380722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.679 [2024-07-14 10:44:51.380754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.679 qpair failed and we were unable to recover it. 00:36:06.679 [2024-07-14 10:44:51.380971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.679 [2024-07-14 10:44:51.381002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.679 qpair failed and we were unable to recover it. 00:36:06.679 [2024-07-14 10:44:51.381208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.679 [2024-07-14 10:44:51.381248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.679 qpair failed and we were unable to recover it. 00:36:06.679 [2024-07-14 10:44:51.381459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.679 [2024-07-14 10:44:51.381490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.679 qpair failed and we were unable to recover it. 00:36:06.679 [2024-07-14 10:44:51.381688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.679 [2024-07-14 10:44:51.381719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.679 qpair failed and we were unable to recover it. 00:36:06.679 [2024-07-14 10:44:51.382002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.680 [2024-07-14 10:44:51.382033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.680 qpair failed and we were unable to recover it. 00:36:06.680 [2024-07-14 10:44:51.382263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.680 [2024-07-14 10:44:51.382295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.680 qpair failed and we were unable to recover it. 00:36:06.680 [2024-07-14 10:44:51.382473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.680 [2024-07-14 10:44:51.382505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.680 qpair failed and we were unable to recover it. 00:36:06.680 [2024-07-14 10:44:51.382761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.680 [2024-07-14 10:44:51.382792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.680 qpair failed and we were unable to recover it. 00:36:06.680 [2024-07-14 10:44:51.382926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.680 [2024-07-14 10:44:51.382956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.680 qpair failed and we were unable to recover it. 
00:36:06.680 [2024-07-14 10:44:51.383146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.680 [2024-07-14 10:44:51.383177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.680 qpair failed and we were unable to recover it. 00:36:06.680 [2024-07-14 10:44:51.383303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.680 [2024-07-14 10:44:51.383335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.680 qpair failed and we were unable to recover it. 00:36:06.680 [2024-07-14 10:44:51.383580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.680 [2024-07-14 10:44:51.383610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.680 qpair failed and we were unable to recover it. 00:36:06.680 [2024-07-14 10:44:51.383902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.680 [2024-07-14 10:44:51.383932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.680 qpair failed and we were unable to recover it. 00:36:06.680 [2024-07-14 10:44:51.384123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.680 [2024-07-14 10:44:51.384154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.680 qpair failed and we were unable to recover it. 00:36:06.680 [2024-07-14 10:44:51.384295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.680 [2024-07-14 10:44:51.384327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.680 qpair failed and we were unable to recover it. 00:36:06.680 [2024-07-14 10:44:51.384588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.680 [2024-07-14 10:44:51.384618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.680 qpair failed and we were unable to recover it. 00:36:06.680 [2024-07-14 10:44:51.384826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.680 [2024-07-14 10:44:51.384857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.680 qpair failed and we were unable to recover it. 00:36:06.680 [2024-07-14 10:44:51.385155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.680 [2024-07-14 10:44:51.385187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.680 qpair failed and we were unable to recover it. 00:36:06.680 [2024-07-14 10:44:51.385544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.680 [2024-07-14 10:44:51.385575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.680 qpair failed and we were unable to recover it. 
00:36:06.680 [2024-07-14 10:44:51.385762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.680 [2024-07-14 10:44:51.385792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.680 qpair failed and we were unable to recover it. 00:36:06.680 [2024-07-14 10:44:51.385970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.680 [2024-07-14 10:44:51.386009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.680 qpair failed and we were unable to recover it. 00:36:06.680 [2024-07-14 10:44:51.386132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.680 [2024-07-14 10:44:51.386163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.680 qpair failed and we were unable to recover it. 00:36:06.680 [2024-07-14 10:44:51.386296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.680 [2024-07-14 10:44:51.386329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.680 qpair failed and we were unable to recover it. 00:36:06.680 [2024-07-14 10:44:51.386455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.680 [2024-07-14 10:44:51.386485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.680 qpair failed and we were unable to recover it. 00:36:06.680 [2024-07-14 10:44:51.386605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.680 [2024-07-14 10:44:51.386636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.680 qpair failed and we were unable to recover it. 00:36:06.680 [2024-07-14 10:44:51.386769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.680 [2024-07-14 10:44:51.386799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.680 qpair failed and we were unable to recover it. 00:36:06.680 [2024-07-14 10:44:51.386915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.680 [2024-07-14 10:44:51.386945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.680 qpair failed and we were unable to recover it. 00:36:06.680 [2024-07-14 10:44:51.387137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.680 [2024-07-14 10:44:51.387167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.680 qpair failed and we were unable to recover it. 00:36:06.680 [2024-07-14 10:44:51.387293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.680 [2024-07-14 10:44:51.387325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.680 qpair failed and we were unable to recover it. 
00:36:06.680 [2024-07-14 10:44:51.387433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.680 [2024-07-14 10:44:51.387464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.680 qpair failed and we were unable to recover it. 00:36:06.680 [2024-07-14 10:44:51.387587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.680 [2024-07-14 10:44:51.387617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.680 qpair failed and we were unable to recover it. 00:36:06.680 [2024-07-14 10:44:51.387861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.680 [2024-07-14 10:44:51.387892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.680 qpair failed and we were unable to recover it. 00:36:06.680 [2024-07-14 10:44:51.388030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.680 [2024-07-14 10:44:51.388061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.680 qpair failed and we were unable to recover it. 00:36:06.680 [2024-07-14 10:44:51.388258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.680 [2024-07-14 10:44:51.388290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.680 qpair failed and we were unable to recover it. 00:36:06.680 [2024-07-14 10:44:51.388491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.680 [2024-07-14 10:44:51.388522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.680 qpair failed and we were unable to recover it. 00:36:06.680 [2024-07-14 10:44:51.388715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.680 [2024-07-14 10:44:51.388745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.680 qpair failed and we were unable to recover it. 00:36:06.680 [2024-07-14 10:44:51.388963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.680 [2024-07-14 10:44:51.388994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.680 qpair failed and we were unable to recover it. 00:36:06.680 [2024-07-14 10:44:51.389193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.680 [2024-07-14 10:44:51.389233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.680 qpair failed and we were unable to recover it. 00:36:06.680 [2024-07-14 10:44:51.389415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.680 [2024-07-14 10:44:51.389446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.680 qpair failed and we were unable to recover it. 
00:36:06.680 [2024-07-14 10:44:51.389634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.680 [2024-07-14 10:44:51.389665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.680 qpair failed and we were unable to recover it. 00:36:06.680 [2024-07-14 10:44:51.389778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.681 [2024-07-14 10:44:51.389808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.681 qpair failed and we were unable to recover it. 00:36:06.681 [2024-07-14 10:44:51.389980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.681 [2024-07-14 10:44:51.390011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.681 qpair failed and we were unable to recover it. 00:36:06.681 [2024-07-14 10:44:51.390202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.681 [2024-07-14 10:44:51.390247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.681 qpair failed and we were unable to recover it. 00:36:06.681 [2024-07-14 10:44:51.390465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.681 [2024-07-14 10:44:51.390495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.681 qpair failed and we were unable to recover it. 00:36:06.681 [2024-07-14 10:44:51.390677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.681 [2024-07-14 10:44:51.390707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.681 qpair failed and we were unable to recover it. 00:36:06.681 [2024-07-14 10:44:51.390915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.681 [2024-07-14 10:44:51.390945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.681 qpair failed and we were unable to recover it. 00:36:06.681 [2024-07-14 10:44:51.391118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.681 [2024-07-14 10:44:51.391148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.681 qpair failed and we were unable to recover it. 00:36:06.681 [2024-07-14 10:44:51.391287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.681 [2024-07-14 10:44:51.391319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.681 qpair failed and we were unable to recover it. 00:36:06.681 [2024-07-14 10:44:51.391609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.681 [2024-07-14 10:44:51.391640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.681 qpair failed and we were unable to recover it. 
00:36:06.681 [2024-07-14 10:44:51.391884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.681 [2024-07-14 10:44:51.391914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.681 qpair failed and we were unable to recover it. 00:36:06.681 [2024-07-14 10:44:51.392044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.681 [2024-07-14 10:44:51.392075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.681 qpair failed and we were unable to recover it. 00:36:06.681 [2024-07-14 10:44:51.392272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.681 [2024-07-14 10:44:51.392303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.681 qpair failed and we were unable to recover it. 00:36:06.681 [2024-07-14 10:44:51.392553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.681 [2024-07-14 10:44:51.392583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.681 qpair failed and we were unable to recover it. 00:36:06.681 [2024-07-14 10:44:51.392713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.681 [2024-07-14 10:44:51.392745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.681 qpair failed and we were unable to recover it. 00:36:06.681 [2024-07-14 10:44:51.392866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.681 [2024-07-14 10:44:51.392897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.681 qpair failed and we were unable to recover it. 00:36:06.681 [2024-07-14 10:44:51.393084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.681 [2024-07-14 10:44:51.393115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.681 qpair failed and we were unable to recover it. 00:36:06.681 [2024-07-14 10:44:51.393237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.681 [2024-07-14 10:44:51.393268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.681 qpair failed and we were unable to recover it. 00:36:06.681 [2024-07-14 10:44:51.393465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.681 [2024-07-14 10:44:51.393495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.681 qpair failed and we were unable to recover it. 00:36:06.681 [2024-07-14 10:44:51.393684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.681 [2024-07-14 10:44:51.393715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.681 qpair failed and we were unable to recover it. 
00:36:06.681 [2024-07-14 10:44:51.393842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.681 [2024-07-14 10:44:51.393873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.681 qpair failed and we were unable to recover it. 00:36:06.681 [2024-07-14 10:44:51.394048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.681 [2024-07-14 10:44:51.394084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.681 qpair failed and we were unable to recover it. 00:36:06.681 [2024-07-14 10:44:51.394218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.681 [2024-07-14 10:44:51.394269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.681 qpair failed and we were unable to recover it. 00:36:06.681 [2024-07-14 10:44:51.394484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.681 [2024-07-14 10:44:51.394515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.681 qpair failed and we were unable to recover it. 00:36:06.681 [2024-07-14 10:44:51.394711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.681 [2024-07-14 10:44:51.394741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.681 qpair failed and we were unable to recover it. 00:36:06.681 [2024-07-14 10:44:51.394960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.681 [2024-07-14 10:44:51.394991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.681 qpair failed and we were unable to recover it. 00:36:06.681 [2024-07-14 10:44:51.395198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.681 [2024-07-14 10:44:51.395236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.681 qpair failed and we were unable to recover it. 00:36:06.681 [2024-07-14 10:44:51.395427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.681 [2024-07-14 10:44:51.395457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.681 qpair failed and we were unable to recover it. 00:36:06.681 [2024-07-14 10:44:51.395734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.681 [2024-07-14 10:44:51.395764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.681 qpair failed and we were unable to recover it. 00:36:06.681 [2024-07-14 10:44:51.395877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.681 [2024-07-14 10:44:51.395908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.681 qpair failed and we were unable to recover it. 
00:36:06.681 [2024-07-14 10:44:51.396147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.681 [2024-07-14 10:44:51.396178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.681 qpair failed and we were unable to recover it. 00:36:06.681 [2024-07-14 10:44:51.396313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.681 [2024-07-14 10:44:51.396345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.681 qpair failed and we were unable to recover it. 00:36:06.681 [2024-07-14 10:44:51.396588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.681 [2024-07-14 10:44:51.396619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.681 qpair failed and we were unable to recover it. 00:36:06.681 [2024-07-14 10:44:51.396796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.681 [2024-07-14 10:44:51.396826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.681 qpair failed and we were unable to recover it. 00:36:06.681 [2024-07-14 10:44:51.397013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.681 [2024-07-14 10:44:51.397044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.681 qpair failed and we were unable to recover it. 00:36:06.681 [2024-07-14 10:44:51.397309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.681 [2024-07-14 10:44:51.397342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.681 qpair failed and we were unable to recover it. 00:36:06.681 [2024-07-14 10:44:51.397514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.681 [2024-07-14 10:44:51.397546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.681 qpair failed and we were unable to recover it. 00:36:06.681 [2024-07-14 10:44:51.397670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.681 [2024-07-14 10:44:51.397702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.681 qpair failed and we were unable to recover it. 00:36:06.681 [2024-07-14 10:44:51.397963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.681 [2024-07-14 10:44:51.397993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.681 qpair failed and we were unable to recover it. 00:36:06.681 [2024-07-14 10:44:51.398205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.681 [2024-07-14 10:44:51.398243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.681 qpair failed and we were unable to recover it. 
00:36:06.681 [2024-07-14 10:44:51.398486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.681 [2024-07-14 10:44:51.398517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.681 qpair failed and we were unable to recover it. 00:36:06.681 [2024-07-14 10:44:51.398766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.681 [2024-07-14 10:44:51.398797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.681 qpair failed and we were unable to recover it. 00:36:06.681 [2024-07-14 10:44:51.398921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.682 [2024-07-14 10:44:51.398951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.682 qpair failed and we were unable to recover it. 00:36:06.682 [2024-07-14 10:44:51.399140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.682 [2024-07-14 10:44:51.399170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.682 qpair failed and we were unable to recover it. 00:36:06.682 [2024-07-14 10:44:51.399388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.682 [2024-07-14 10:44:51.399419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.682 qpair failed and we were unable to recover it. 00:36:06.682 [2024-07-14 10:44:51.399606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.682 [2024-07-14 10:44:51.399638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.682 qpair failed and we were unable to recover it. 00:36:06.682 [2024-07-14 10:44:51.399858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.682 [2024-07-14 10:44:51.399889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.682 qpair failed and we were unable to recover it. 00:36:06.682 [2024-07-14 10:44:51.400022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.682 [2024-07-14 10:44:51.400053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.682 qpair failed and we were unable to recover it. 00:36:06.682 [2024-07-14 10:44:51.400264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.682 [2024-07-14 10:44:51.400296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.682 qpair failed and we were unable to recover it. 00:36:06.682 [2024-07-14 10:44:51.400540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.682 [2024-07-14 10:44:51.400570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.682 qpair failed and we were unable to recover it. 
00:36:06.682 [2024-07-14 10:44:51.400817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.682 [2024-07-14 10:44:51.400849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.682 qpair failed and we were unable to recover it. 00:36:06.682 [2024-07-14 10:44:51.401029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.682 [2024-07-14 10:44:51.401060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.682 qpair failed and we were unable to recover it. 00:36:06.682 [2024-07-14 10:44:51.401177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.682 [2024-07-14 10:44:51.401208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.682 qpair failed and we were unable to recover it. 00:36:06.682 [2024-07-14 10:44:51.401335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.682 [2024-07-14 10:44:51.401366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.682 qpair failed and we were unable to recover it. 00:36:06.682 [2024-07-14 10:44:51.401561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.682 [2024-07-14 10:44:51.401592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.682 qpair failed and we were unable to recover it. 00:36:06.682 [2024-07-14 10:44:51.401796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.682 [2024-07-14 10:44:51.401826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.682 qpair failed and we were unable to recover it. 00:36:06.682 [2024-07-14 10:44:51.402017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.682 [2024-07-14 10:44:51.402048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.682 qpair failed and we were unable to recover it. 00:36:06.682 [2024-07-14 10:44:51.402167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.682 [2024-07-14 10:44:51.402198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.682 qpair failed and we were unable to recover it. 00:36:06.682 [2024-07-14 10:44:51.402489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.682 [2024-07-14 10:44:51.402521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.682 qpair failed and we were unable to recover it. 00:36:06.682 [2024-07-14 10:44:51.402642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.682 [2024-07-14 10:44:51.402673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.682 qpair failed and we were unable to recover it. 
00:36:06.682 [2024-07-14 10:44:51.402867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.682 [2024-07-14 10:44:51.402898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.682 qpair failed and we were unable to recover it. 00:36:06.682 [2024-07-14 10:44:51.403085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.682 [2024-07-14 10:44:51.403121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.682 qpair failed and we were unable to recover it. 00:36:06.682 [2024-07-14 10:44:51.403249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.682 [2024-07-14 10:44:51.403281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.682 qpair failed and we were unable to recover it. 00:36:06.682 [2024-07-14 10:44:51.403414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.682 [2024-07-14 10:44:51.403444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.682 qpair failed and we were unable to recover it. 00:36:06.682 [2024-07-14 10:44:51.403565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.682 [2024-07-14 10:44:51.403595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.682 qpair failed and we were unable to recover it. 00:36:06.682 [2024-07-14 10:44:51.403740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.682 [2024-07-14 10:44:51.403771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.682 qpair failed and we were unable to recover it. 00:36:06.682 [2024-07-14 10:44:51.403966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.682 [2024-07-14 10:44:51.403997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.682 qpair failed and we were unable to recover it. 00:36:06.682 [2024-07-14 10:44:51.404179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.682 [2024-07-14 10:44:51.404211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.682 qpair failed and we were unable to recover it. 00:36:06.682 [2024-07-14 10:44:51.404408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.682 [2024-07-14 10:44:51.404439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.682 qpair failed and we were unable to recover it. 00:36:06.682 [2024-07-14 10:44:51.404575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.682 [2024-07-14 10:44:51.404606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.682 qpair failed and we were unable to recover it. 
00:36:06.682 [2024-07-14 10:44:51.404867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.682 [2024-07-14 10:44:51.404899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.682 qpair failed and we were unable to recover it. 00:36:06.682 [2024-07-14 10:44:51.405077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.682 [2024-07-14 10:44:51.405108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.682 qpair failed and we were unable to recover it. 00:36:06.682 [2024-07-14 10:44:51.405292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.682 [2024-07-14 10:44:51.405323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.682 qpair failed and we were unable to recover it. 00:36:06.682 [2024-07-14 10:44:51.405441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.682 [2024-07-14 10:44:51.405472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.682 qpair failed and we were unable to recover it. 00:36:06.682 [2024-07-14 10:44:51.405579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.682 [2024-07-14 10:44:51.405609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.682 qpair failed and we were unable to recover it. 00:36:06.682 [2024-07-14 10:44:51.405855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.682 [2024-07-14 10:44:51.405887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.682 qpair failed and we were unable to recover it. 00:36:06.682 [2024-07-14 10:44:51.406011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.682 [2024-07-14 10:44:51.406042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.682 qpair failed and we were unable to recover it. 00:36:06.682 [2024-07-14 10:44:51.406159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.682 [2024-07-14 10:44:51.406189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.682 qpair failed and we were unable to recover it. 00:36:06.682 [2024-07-14 10:44:51.406305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.682 [2024-07-14 10:44:51.406336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.682 qpair failed and we were unable to recover it. 00:36:06.682 [2024-07-14 10:44:51.406476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.682 [2024-07-14 10:44:51.406506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.682 qpair failed and we were unable to recover it. 
00:36:06.682 [2024-07-14 10:44:51.406682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.682 [2024-07-14 10:44:51.406713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.682 qpair failed and we were unable to recover it. 00:36:06.682 [2024-07-14 10:44:51.406834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.682 [2024-07-14 10:44:51.406865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.682 qpair failed and we were unable to recover it. 00:36:06.682 [2024-07-14 10:44:51.407083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.682 [2024-07-14 10:44:51.407114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.682 qpair failed and we were unable to recover it. 00:36:06.682 [2024-07-14 10:44:51.407233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.683 [2024-07-14 10:44:51.407265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.683 qpair failed and we were unable to recover it. 00:36:06.683 [2024-07-14 10:44:51.407448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.683 [2024-07-14 10:44:51.407479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.683 qpair failed and we were unable to recover it. 00:36:06.683 [2024-07-14 10:44:51.407684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.683 [2024-07-14 10:44:51.407715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.683 qpair failed and we were unable to recover it. 00:36:06.683 [2024-07-14 10:44:51.407839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.683 [2024-07-14 10:44:51.407870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.683 qpair failed and we were unable to recover it. 00:36:06.683 [2024-07-14 10:44:51.408067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.683 [2024-07-14 10:44:51.408098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.683 qpair failed and we were unable to recover it. 00:36:06.683 [2024-07-14 10:44:51.408281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.683 [2024-07-14 10:44:51.408313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.683 qpair failed and we were unable to recover it. 00:36:06.683 [2024-07-14 10:44:51.408516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.683 [2024-07-14 10:44:51.408547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.683 qpair failed and we were unable to recover it. 
00:36:06.683 [2024-07-14 10:44:51.408687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.683 [2024-07-14 10:44:51.408718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.683 qpair failed and we were unable to recover it. 00:36:06.683 [2024-07-14 10:44:51.408890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.683 [2024-07-14 10:44:51.408921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.683 qpair failed and we were unable to recover it. 00:36:06.683 [2024-07-14 10:44:51.409033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.683 [2024-07-14 10:44:51.409063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.683 qpair failed and we were unable to recover it. 00:36:06.683 [2024-07-14 10:44:51.409247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.683 [2024-07-14 10:44:51.409278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.683 qpair failed and we were unable to recover it. 00:36:06.683 [2024-07-14 10:44:51.409472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.683 [2024-07-14 10:44:51.409504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.683 qpair failed and we were unable to recover it. 00:36:06.683 [2024-07-14 10:44:51.409690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.683 [2024-07-14 10:44:51.409720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.683 qpair failed and we were unable to recover it. 00:36:06.683 [2024-07-14 10:44:51.409844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.683 [2024-07-14 10:44:51.409874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.683 qpair failed and we were unable to recover it. 00:36:06.683 [2024-07-14 10:44:51.410060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.683 [2024-07-14 10:44:51.410092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.683 qpair failed and we were unable to recover it. 00:36:06.683 [2024-07-14 10:44:51.410282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.683 [2024-07-14 10:44:51.410314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.683 qpair failed and we were unable to recover it. 00:36:06.683 [2024-07-14 10:44:51.410530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.683 [2024-07-14 10:44:51.410561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.683 qpair failed and we were unable to recover it. 
00:36:06.683 [2024-07-14 10:44:51.410690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.683 [2024-07-14 10:44:51.410720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.683 qpair failed and we were unable to recover it. 00:36:06.683 [2024-07-14 10:44:51.410988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.683 [2024-07-14 10:44:51.411024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.683 qpair failed and we were unable to recover it. 00:36:06.683 [2024-07-14 10:44:51.411223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.683 [2024-07-14 10:44:51.411264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.683 qpair failed and we were unable to recover it. 00:36:06.683 [2024-07-14 10:44:51.411455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.683 [2024-07-14 10:44:51.411486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.683 qpair failed and we were unable to recover it. 00:36:06.683 [2024-07-14 10:44:51.411686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.683 [2024-07-14 10:44:51.411717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.683 qpair failed and we were unable to recover it. 00:36:06.683 [2024-07-14 10:44:51.411956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.683 [2024-07-14 10:44:51.411985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.683 qpair failed and we were unable to recover it. 00:36:06.683 [2024-07-14 10:44:51.412120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.683 [2024-07-14 10:44:51.412150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.683 qpair failed and we were unable to recover it. 00:36:06.683 [2024-07-14 10:44:51.412334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.683 [2024-07-14 10:44:51.412367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.683 qpair failed and we were unable to recover it. 00:36:06.683 [2024-07-14 10:44:51.412585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.683 [2024-07-14 10:44:51.412616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.683 qpair failed and we were unable to recover it. 00:36:06.683 [2024-07-14 10:44:51.412813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.683 [2024-07-14 10:44:51.412844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.683 qpair failed and we were unable to recover it. 
00:36:06.683 [2024-07-14 10:44:51.412966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.683 [2024-07-14 10:44:51.412996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.683 qpair failed and we were unable to recover it. 00:36:06.683 [2024-07-14 10:44:51.413172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.683 [2024-07-14 10:44:51.413203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.683 qpair failed and we were unable to recover it. 00:36:06.683 [2024-07-14 10:44:51.413387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.683 [2024-07-14 10:44:51.413419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.683 qpair failed and we were unable to recover it. 00:36:06.683 [2024-07-14 10:44:51.413599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.683 [2024-07-14 10:44:51.413630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.683 qpair failed and we were unable to recover it. 00:36:06.683 [2024-07-14 10:44:51.413743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.683 [2024-07-14 10:44:51.413772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.683 qpair failed and we were unable to recover it. 00:36:06.683 [2024-07-14 10:44:51.413964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.683 [2024-07-14 10:44:51.413995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.683 qpair failed and we were unable to recover it. 00:36:06.683 [2024-07-14 10:44:51.414116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.683 [2024-07-14 10:44:51.414147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.683 qpair failed and we were unable to recover it. 00:36:06.683 [2024-07-14 10:44:51.414334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.683 [2024-07-14 10:44:51.414365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.683 qpair failed and we were unable to recover it. 00:36:06.683 [2024-07-14 10:44:51.414493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.683 [2024-07-14 10:44:51.414525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.683 qpair failed and we were unable to recover it. 00:36:06.683 [2024-07-14 10:44:51.414656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.683 [2024-07-14 10:44:51.414687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.683 qpair failed and we were unable to recover it. 
00:36:06.683 [2024-07-14 10:44:51.414792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.683 [2024-07-14 10:44:51.414822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.683 qpair failed and we were unable to recover it. 00:36:06.683 [2024-07-14 10:44:51.414950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.683 [2024-07-14 10:44:51.414982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.683 qpair failed and we were unable to recover it. 00:36:06.683 [2024-07-14 10:44:51.415173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.683 [2024-07-14 10:44:51.415204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.683 qpair failed and we were unable to recover it. 00:36:06.683 [2024-07-14 10:44:51.415412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.683 [2024-07-14 10:44:51.415444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.683 qpair failed and we were unable to recover it. 00:36:06.683 [2024-07-14 10:44:51.415585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.684 [2024-07-14 10:44:51.415616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.684 qpair failed and we were unable to recover it. 00:36:06.684 [2024-07-14 10:44:51.415800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.684 [2024-07-14 10:44:51.415830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.684 qpair failed and we were unable to recover it. 00:36:06.684 [2024-07-14 10:44:51.416089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.684 [2024-07-14 10:44:51.416119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.684 qpair failed and we were unable to recover it. 00:36:06.684 [2024-07-14 10:44:51.416300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.684 [2024-07-14 10:44:51.416332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.684 qpair failed and we were unable to recover it. 00:36:06.684 [2024-07-14 10:44:51.416579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.684 [2024-07-14 10:44:51.416611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.684 qpair failed and we were unable to recover it. 00:36:06.684 [2024-07-14 10:44:51.416876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.684 [2024-07-14 10:44:51.416907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.684 qpair failed and we were unable to recover it. 
00:36:06.684 [2024-07-14 10:44:51.417098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.684 [2024-07-14 10:44:51.417128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.684 qpair failed and we were unable to recover it. 00:36:06.684 [2024-07-14 10:44:51.417305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.684 [2024-07-14 10:44:51.417336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.684 qpair failed and we were unable to recover it. 00:36:06.684 [2024-07-14 10:44:51.417521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.684 [2024-07-14 10:44:51.417553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.684 qpair failed and we were unable to recover it. 00:36:06.684 [2024-07-14 10:44:51.417680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.684 [2024-07-14 10:44:51.417711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.684 qpair failed and we were unable to recover it. 00:36:06.684 [2024-07-14 10:44:51.417904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.684 [2024-07-14 10:44:51.417935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.684 qpair failed and we were unable to recover it. 00:36:06.684 [2024-07-14 10:44:51.418036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.684 [2024-07-14 10:44:51.418067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.684 qpair failed and we were unable to recover it. 00:36:06.684 [2024-07-14 10:44:51.418175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.684 [2024-07-14 10:44:51.418206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.684 qpair failed and we were unable to recover it. 00:36:06.684 [2024-07-14 10:44:51.418358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.684 [2024-07-14 10:44:51.418390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.684 qpair failed and we were unable to recover it. 00:36:06.684 [2024-07-14 10:44:51.418631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.684 [2024-07-14 10:44:51.418662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.684 qpair failed and we were unable to recover it. 00:36:06.684 [2024-07-14 10:44:51.418907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.684 [2024-07-14 10:44:51.418938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.684 qpair failed and we were unable to recover it. 
00:36:06.684 [2024-07-14 10:44:51.419109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.684 [2024-07-14 10:44:51.419140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.684 qpair failed and we were unable to recover it. 00:36:06.684 [2024-07-14 10:44:51.419379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.684 [2024-07-14 10:44:51.419417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.684 qpair failed and we were unable to recover it. 00:36:06.684 [2024-07-14 10:44:51.419593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.684 [2024-07-14 10:44:51.419626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.684 qpair failed and we were unable to recover it. 00:36:06.684 [2024-07-14 10:44:51.419755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.684 [2024-07-14 10:44:51.419786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.684 qpair failed and we were unable to recover it. 00:36:06.684 [2024-07-14 10:44:51.419912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.684 [2024-07-14 10:44:51.419943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.684 qpair failed and we were unable to recover it. 00:36:06.684 [2024-07-14 10:44:51.420053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.684 [2024-07-14 10:44:51.420082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.684 qpair failed and we were unable to recover it. 00:36:06.684 [2024-07-14 10:44:51.420189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.684 [2024-07-14 10:44:51.420220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.684 qpair failed and we were unable to recover it. 00:36:06.684 [2024-07-14 10:44:51.420443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.684 [2024-07-14 10:44:51.420475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.684 qpair failed and we were unable to recover it. 00:36:06.684 [2024-07-14 10:44:51.420609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.684 [2024-07-14 10:44:51.420640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.684 qpair failed and we were unable to recover it. 00:36:06.684 [2024-07-14 10:44:51.420780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.684 [2024-07-14 10:44:51.420812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.684 qpair failed and we were unable to recover it. 
00:36:06.684 [2024-07-14 10:44:51.420999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.684 [2024-07-14 10:44:51.421030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.684 qpair failed and we were unable to recover it. 00:36:06.684 [2024-07-14 10:44:51.421243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.684 [2024-07-14 10:44:51.421274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.684 qpair failed and we were unable to recover it. 00:36:06.684 [2024-07-14 10:44:51.421482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.684 [2024-07-14 10:44:51.421513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.684 qpair failed and we were unable to recover it. 00:36:06.684 [2024-07-14 10:44:51.421624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.684 [2024-07-14 10:44:51.421655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.684 qpair failed and we were unable to recover it. 00:36:06.684 [2024-07-14 10:44:51.421839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.684 [2024-07-14 10:44:51.421870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.684 qpair failed and we were unable to recover it. 00:36:06.684 [2024-07-14 10:44:51.422083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.684 [2024-07-14 10:44:51.422113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.684 qpair failed and we were unable to recover it. 00:36:06.684 [2024-07-14 10:44:51.422243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.684 [2024-07-14 10:44:51.422275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.684 qpair failed and we were unable to recover it. 00:36:06.684 [2024-07-14 10:44:51.422400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.684 [2024-07-14 10:44:51.422431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.684 qpair failed and we were unable to recover it. 00:36:06.684 [2024-07-14 10:44:51.422619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.684 [2024-07-14 10:44:51.422650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.684 qpair failed and we were unable to recover it. 00:36:06.684 [2024-07-14 10:44:51.422891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.684 [2024-07-14 10:44:51.422922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.684 qpair failed and we were unable to recover it. 
00:36:06.685 [2024-07-14 10:44:51.423096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.685 [2024-07-14 10:44:51.423128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.685 qpair failed and we were unable to recover it. 00:36:06.685 [2024-07-14 10:44:51.423249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.685 [2024-07-14 10:44:51.423281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.685 qpair failed and we were unable to recover it. 00:36:06.685 [2024-07-14 10:44:51.423480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.685 [2024-07-14 10:44:51.423510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.685 qpair failed and we were unable to recover it. 00:36:06.685 [2024-07-14 10:44:51.423725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.685 [2024-07-14 10:44:51.423755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.685 qpair failed and we were unable to recover it. 00:36:06.685 [2024-07-14 10:44:51.423974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.685 [2024-07-14 10:44:51.424005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.685 qpair failed and we were unable to recover it. 00:36:06.685 [2024-07-14 10:44:51.424260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.685 [2024-07-14 10:44:51.424291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.685 qpair failed and we were unable to recover it. 00:36:06.685 [2024-07-14 10:44:51.424478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.685 [2024-07-14 10:44:51.424509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.685 qpair failed and we were unable to recover it. 00:36:06.685 [2024-07-14 10:44:51.424723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.685 [2024-07-14 10:44:51.424754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.685 qpair failed and we were unable to recover it. 00:36:06.685 [2024-07-14 10:44:51.424948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.685 [2024-07-14 10:44:51.424979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.685 qpair failed and we were unable to recover it. 00:36:06.685 [2024-07-14 10:44:51.425102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.685 [2024-07-14 10:44:51.425132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.685 qpair failed and we were unable to recover it. 
00:36:06.685 [2024-07-14 10:44:51.425311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.685 [2024-07-14 10:44:51.425343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.685 qpair failed and we were unable to recover it. 00:36:06.685 [2024-07-14 10:44:51.425519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.685 [2024-07-14 10:44:51.425550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.685 qpair failed and we were unable to recover it. 00:36:06.685 [2024-07-14 10:44:51.425741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.685 [2024-07-14 10:44:51.425774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.685 qpair failed and we were unable to recover it. 00:36:06.685 [2024-07-14 10:44:51.425899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.685 [2024-07-14 10:44:51.425930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.685 qpair failed and we were unable to recover it. 00:36:06.685 [2024-07-14 10:44:51.426119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.685 [2024-07-14 10:44:51.426150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.685 qpair failed and we were unable to recover it. 00:36:06.685 [2024-07-14 10:44:51.426341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.685 [2024-07-14 10:44:51.426373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.685 qpair failed and we were unable to recover it. 00:36:06.685 [2024-07-14 10:44:51.426584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.685 [2024-07-14 10:44:51.426615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.685 qpair failed and we were unable to recover it. 00:36:06.685 [2024-07-14 10:44:51.426825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.685 [2024-07-14 10:44:51.426856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.685 qpair failed and we were unable to recover it. 00:36:06.685 [2024-07-14 10:44:51.427034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.685 [2024-07-14 10:44:51.427065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.685 qpair failed and we were unable to recover it. 00:36:06.685 [2024-07-14 10:44:51.427252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.685 [2024-07-14 10:44:51.427285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.685 qpair failed and we were unable to recover it. 
00:36:06.685 [2024-07-14 10:44:51.427479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.685 [2024-07-14 10:44:51.427510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.685 qpair failed and we were unable to recover it. 00:36:06.685 [2024-07-14 10:44:51.427707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.685 [2024-07-14 10:44:51.427748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.685 qpair failed and we were unable to recover it. 00:36:06.685 [2024-07-14 10:44:51.427990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.685 [2024-07-14 10:44:51.428022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.685 qpair failed and we were unable to recover it. 00:36:06.685 [2024-07-14 10:44:51.428291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.685 [2024-07-14 10:44:51.428323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.685 qpair failed and we were unable to recover it. 00:36:06.685 [2024-07-14 10:44:51.428498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.685 [2024-07-14 10:44:51.428528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.685 qpair failed and we were unable to recover it. 00:36:06.685 [2024-07-14 10:44:51.428726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.685 [2024-07-14 10:44:51.428756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.685 qpair failed and we were unable to recover it. 00:36:06.685 [2024-07-14 10:44:51.428958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.685 [2024-07-14 10:44:51.428989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.685 qpair failed and we were unable to recover it. 00:36:06.685 [2024-07-14 10:44:51.429175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.685 [2024-07-14 10:44:51.429207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.685 qpair failed and we were unable to recover it. 00:36:06.685 [2024-07-14 10:44:51.429398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.685 [2024-07-14 10:44:51.429429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.685 qpair failed and we were unable to recover it. 00:36:06.685 [2024-07-14 10:44:51.429562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.685 [2024-07-14 10:44:51.429592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.685 qpair failed and we were unable to recover it. 
00:36:06.685 [2024-07-14 10:44:51.429745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.685 [2024-07-14 10:44:51.429775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.685 qpair failed and we were unable to recover it. 00:36:06.685 [2024-07-14 10:44:51.429974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.685 [2024-07-14 10:44:51.430004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.685 qpair failed and we were unable to recover it. 00:36:06.685 [2024-07-14 10:44:51.430209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.685 [2024-07-14 10:44:51.430248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.685 qpair failed and we were unable to recover it. 00:36:06.685 [2024-07-14 10:44:51.430444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.685 [2024-07-14 10:44:51.430475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.685 qpair failed and we were unable to recover it. 00:36:06.685 [2024-07-14 10:44:51.430604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.685 [2024-07-14 10:44:51.430634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.685 qpair failed and we were unable to recover it. 00:36:06.685 [2024-07-14 10:44:51.430764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.685 [2024-07-14 10:44:51.430795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.685 qpair failed and we were unable to recover it. 00:36:06.685 [2024-07-14 10:44:51.430970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.685 [2024-07-14 10:44:51.431001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.685 qpair failed and we were unable to recover it. 00:36:06.685 [2024-07-14 10:44:51.431183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.685 [2024-07-14 10:44:51.431214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.685 qpair failed and we were unable to recover it. 00:36:06.685 [2024-07-14 10:44:51.431354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.685 [2024-07-14 10:44:51.431386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.685 qpair failed and we were unable to recover it. 00:36:06.685 [2024-07-14 10:44:51.431519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.685 [2024-07-14 10:44:51.431549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.685 qpair failed and we were unable to recover it. 
00:36:06.685 [2024-07-14 10:44:51.431742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.685 [2024-07-14 10:44:51.431773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.685 qpair failed and we were unable to recover it. 00:36:06.686 [2024-07-14 10:44:51.431906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.686 [2024-07-14 10:44:51.431938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.686 qpair failed and we were unable to recover it. 00:36:06.686 [2024-07-14 10:44:51.432082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.686 [2024-07-14 10:44:51.432114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.686 qpair failed and we were unable to recover it. 00:36:06.686 [2024-07-14 10:44:51.432288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.686 [2024-07-14 10:44:51.432320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.686 qpair failed and we were unable to recover it. 00:36:06.686 [2024-07-14 10:44:51.432530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.686 [2024-07-14 10:44:51.432561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.686 qpair failed and we were unable to recover it. 00:36:06.686 [2024-07-14 10:44:51.432746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.686 [2024-07-14 10:44:51.432779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.686 qpair failed and we were unable to recover it. 00:36:06.686 [2024-07-14 10:44:51.432958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.686 [2024-07-14 10:44:51.432989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.686 qpair failed and we were unable to recover it. 00:36:06.686 [2024-07-14 10:44:51.433188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.686 [2024-07-14 10:44:51.433220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.686 qpair failed and we were unable to recover it. 00:36:06.686 [2024-07-14 10:44:51.433475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.686 [2024-07-14 10:44:51.433507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.686 qpair failed and we were unable to recover it. 00:36:06.686 [2024-07-14 10:44:51.433713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.686 [2024-07-14 10:44:51.433743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.686 qpair failed and we were unable to recover it. 
00:36:06.686 [2024-07-14 10:44:51.433923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.686 [2024-07-14 10:44:51.433954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.686 qpair failed and we were unable to recover it. 00:36:06.686 [2024-07-14 10:44:51.434098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.686 [2024-07-14 10:44:51.434129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.686 qpair failed and we were unable to recover it. 00:36:06.686 [2024-07-14 10:44:51.434321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.686 [2024-07-14 10:44:51.434352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.686 qpair failed and we were unable to recover it. 00:36:06.686 [2024-07-14 10:44:51.434549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.686 [2024-07-14 10:44:51.434580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.686 qpair failed and we were unable to recover it. 00:36:06.686 [2024-07-14 10:44:51.434715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.686 [2024-07-14 10:44:51.434745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.686 qpair failed and we were unable to recover it. 00:36:06.686 [2024-07-14 10:44:51.434934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.686 [2024-07-14 10:44:51.434965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.686 qpair failed and we were unable to recover it. 00:36:06.686 [2024-07-14 10:44:51.435237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.686 [2024-07-14 10:44:51.435269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.686 qpair failed and we were unable to recover it. 00:36:06.686 [2024-07-14 10:44:51.435515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.686 [2024-07-14 10:44:51.435546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.686 qpair failed and we were unable to recover it. 00:36:06.686 [2024-07-14 10:44:51.435723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.686 [2024-07-14 10:44:51.435755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.686 qpair failed and we were unable to recover it. 00:36:06.686 [2024-07-14 10:44:51.435884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.686 [2024-07-14 10:44:51.435915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.686 qpair failed and we were unable to recover it. 
00:36:06.686 [2024-07-14 10:44:51.436035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.686 [2024-07-14 10:44:51.436066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.686 qpair failed and we were unable to recover it. 00:36:06.686 [2024-07-14 10:44:51.436249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.686 [2024-07-14 10:44:51.436287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.686 qpair failed and we were unable to recover it. 00:36:06.686 [2024-07-14 10:44:51.436460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.686 [2024-07-14 10:44:51.436491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.686 qpair failed and we were unable to recover it. 00:36:06.686 [2024-07-14 10:44:51.436763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.686 [2024-07-14 10:44:51.436793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.686 qpair failed and we were unable to recover it. 00:36:06.686 [2024-07-14 10:44:51.437005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.686 [2024-07-14 10:44:51.437035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.686 qpair failed and we were unable to recover it. 00:36:06.686 [2024-07-14 10:44:51.437212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.686 [2024-07-14 10:44:51.437249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.686 qpair failed and we were unable to recover it. 00:36:06.686 [2024-07-14 10:44:51.437364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.686 [2024-07-14 10:44:51.437396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.686 qpair failed and we were unable to recover it. 00:36:06.686 [2024-07-14 10:44:51.437583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.686 [2024-07-14 10:44:51.437614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.686 qpair failed and we were unable to recover it. 00:36:06.686 [2024-07-14 10:44:51.437748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.686 [2024-07-14 10:44:51.437779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.686 qpair failed and we were unable to recover it. 00:36:06.686 [2024-07-14 10:44:51.437962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.686 [2024-07-14 10:44:51.437992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.686 qpair failed and we were unable to recover it. 
00:36:06.686 [2024-07-14 10:44:51.438251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.686 [2024-07-14 10:44:51.438284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.686 qpair failed and we were unable to recover it. 00:36:06.686 [2024-07-14 10:44:51.438481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.686 [2024-07-14 10:44:51.438513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.686 qpair failed and we were unable to recover it. 00:36:06.686 [2024-07-14 10:44:51.438623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.686 [2024-07-14 10:44:51.438654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.686 qpair failed and we were unable to recover it. 00:36:06.686 [2024-07-14 10:44:51.438850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.686 [2024-07-14 10:44:51.438881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.686 qpair failed and we were unable to recover it. 00:36:06.686 [2024-07-14 10:44:51.438996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.686 [2024-07-14 10:44:51.439026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.686 qpair failed and we were unable to recover it. 00:36:06.686 [2024-07-14 10:44:51.439222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.686 [2024-07-14 10:44:51.439277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.686 qpair failed and we were unable to recover it. 00:36:06.686 [2024-07-14 10:44:51.439524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.686 [2024-07-14 10:44:51.439554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.686 qpair failed and we were unable to recover it. 00:36:06.686 [2024-07-14 10:44:51.439690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.686 [2024-07-14 10:44:51.439720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.686 qpair failed and we were unable to recover it. 00:36:06.686 [2024-07-14 10:44:51.439858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.686 [2024-07-14 10:44:51.439889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.686 qpair failed and we were unable to recover it. 00:36:06.686 [2024-07-14 10:44:51.440001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.686 [2024-07-14 10:44:51.440031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.686 qpair failed and we were unable to recover it. 
00:36:06.686 [2024-07-14 10:44:51.440216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.686 [2024-07-14 10:44:51.440257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.686 qpair failed and we were unable to recover it. 00:36:06.686 [2024-07-14 10:44:51.440496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.686 [2024-07-14 10:44:51.440527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.687 qpair failed and we were unable to recover it. 00:36:06.687 [2024-07-14 10:44:51.440702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.687 [2024-07-14 10:44:51.440733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.687 qpair failed and we were unable to recover it. 00:36:06.687 [2024-07-14 10:44:51.440852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.687 [2024-07-14 10:44:51.440882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.687 qpair failed and we were unable to recover it. 00:36:06.687 [2024-07-14 10:44:51.441067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.687 [2024-07-14 10:44:51.441098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.687 qpair failed and we were unable to recover it. 00:36:06.687 [2024-07-14 10:44:51.441274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.687 [2024-07-14 10:44:51.441307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.687 qpair failed and we were unable to recover it. 00:36:06.687 [2024-07-14 10:44:51.441512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.687 [2024-07-14 10:44:51.441542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.687 qpair failed and we were unable to recover it. 00:36:06.687 [2024-07-14 10:44:51.441819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.687 [2024-07-14 10:44:51.441850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.687 qpair failed and we were unable to recover it. 00:36:06.687 [2024-07-14 10:44:51.442169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.687 [2024-07-14 10:44:51.442251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.687 qpair failed and we were unable to recover it. 00:36:06.687 [2024-07-14 10:44:51.442422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.687 [2024-07-14 10:44:51.442456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.687 qpair failed and we were unable to recover it. 
00:36:06.687 [2024-07-14 10:44:51.442673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.687 [2024-07-14 10:44:51.442704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.687 qpair failed and we were unable to recover it. 00:36:06.687 [2024-07-14 10:44:51.442897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.687 [2024-07-14 10:44:51.442929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.687 qpair failed and we were unable to recover it. 00:36:06.687 [2024-07-14 10:44:51.443054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.687 [2024-07-14 10:44:51.443086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.687 qpair failed and we were unable to recover it. 00:36:06.687 [2024-07-14 10:44:51.443275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.687 [2024-07-14 10:44:51.443307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.687 qpair failed and we were unable to recover it. 00:36:06.687 [2024-07-14 10:44:51.443534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.687 [2024-07-14 10:44:51.443565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.687 qpair failed and we were unable to recover it. 00:36:06.687 [2024-07-14 10:44:51.443747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.687 [2024-07-14 10:44:51.443778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.687 qpair failed and we were unable to recover it. 00:36:06.687 [2024-07-14 10:44:51.443994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.687 [2024-07-14 10:44:51.444025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.687 qpair failed and we were unable to recover it. 00:36:06.687 [2024-07-14 10:44:51.444234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.687 [2024-07-14 10:44:51.444266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.687 qpair failed and we were unable to recover it. 00:36:06.687 [2024-07-14 10:44:51.444394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.687 [2024-07-14 10:44:51.444425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.687 qpair failed and we were unable to recover it. 00:36:06.687 [2024-07-14 10:44:51.444539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.687 [2024-07-14 10:44:51.444571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.687 qpair failed and we were unable to recover it. 
00:36:06.687 [2024-07-14 10:44:51.444702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.687 [2024-07-14 10:44:51.444733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.687 qpair failed and we were unable to recover it. 00:36:06.687 [2024-07-14 10:44:51.444925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.687 [2024-07-14 10:44:51.444965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.687 qpair failed and we were unable to recover it. 00:36:06.687 [2024-07-14 10:44:51.445175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.687 [2024-07-14 10:44:51.445207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.687 qpair failed and we were unable to recover it. 00:36:06.687 [2024-07-14 10:44:51.445416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.687 [2024-07-14 10:44:51.445449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.687 qpair failed and we were unable to recover it. 00:36:06.687 [2024-07-14 10:44:51.445582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.687 [2024-07-14 10:44:51.445613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.687 qpair failed and we were unable to recover it. 00:36:06.687 [2024-07-14 10:44:51.445745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.687 [2024-07-14 10:44:51.445777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.687 qpair failed and we were unable to recover it. 00:36:06.687 [2024-07-14 10:44:51.445920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.687 [2024-07-14 10:44:51.445952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.687 qpair failed and we were unable to recover it. 00:36:06.687 [2024-07-14 10:44:51.446195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.687 [2024-07-14 10:44:51.446236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.687 qpair failed and we were unable to recover it. 00:36:06.687 [2024-07-14 10:44:51.446360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.687 [2024-07-14 10:44:51.446391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.687 qpair failed and we were unable to recover it. 00:36:06.687 [2024-07-14 10:44:51.446521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.687 [2024-07-14 10:44:51.446552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.687 qpair failed and we were unable to recover it. 
00:36:06.687 [2024-07-14 10:44:51.446747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.687 [2024-07-14 10:44:51.446778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.687 qpair failed and we were unable to recover it. 00:36:06.687 [2024-07-14 10:44:51.446953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.687 [2024-07-14 10:44:51.446984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.687 qpair failed and we were unable to recover it. 00:36:06.687 [2024-07-14 10:44:51.447113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.687 [2024-07-14 10:44:51.447145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.687 qpair failed and we were unable to recover it. 00:36:06.687 [2024-07-14 10:44:51.447337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.687 [2024-07-14 10:44:51.447369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.687 qpair failed and we were unable to recover it. 00:36:06.687 [2024-07-14 10:44:51.447555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.687 [2024-07-14 10:44:51.447586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.687 qpair failed and we were unable to recover it. 00:36:06.687 [2024-07-14 10:44:51.447789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.687 [2024-07-14 10:44:51.447820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.687 qpair failed and we were unable to recover it. 00:36:06.687 [2024-07-14 10:44:51.447940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.687 [2024-07-14 10:44:51.447971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.687 qpair failed and we were unable to recover it. 00:36:06.687 [2024-07-14 10:44:51.448096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.687 [2024-07-14 10:44:51.448128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.687 qpair failed and we were unable to recover it. 00:36:06.687 [2024-07-14 10:44:51.448317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.687 [2024-07-14 10:44:51.448348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.687 qpair failed and we were unable to recover it. 00:36:06.687 [2024-07-14 10:44:51.448565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.687 [2024-07-14 10:44:51.448597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.687 qpair failed and we were unable to recover it. 
00:36:06.687 [2024-07-14 10:44:51.448732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.687 [2024-07-14 10:44:51.448764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.687 qpair failed and we were unable to recover it. 00:36:06.687 [2024-07-14 10:44:51.448944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.687 [2024-07-14 10:44:51.448975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.687 qpair failed and we were unable to recover it. 00:36:06.687 [2024-07-14 10:44:51.449194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.687 [2024-07-14 10:44:51.449232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.688 qpair failed and we were unable to recover it. 00:36:06.688 [2024-07-14 10:44:51.449411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.688 [2024-07-14 10:44:51.449442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.688 qpair failed and we were unable to recover it. 00:36:06.688 [2024-07-14 10:44:51.449550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.688 [2024-07-14 10:44:51.449581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.688 qpair failed and we were unable to recover it. 00:36:06.688 [2024-07-14 10:44:51.449768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.688 [2024-07-14 10:44:51.449800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.688 qpair failed and we were unable to recover it. 00:36:06.688 [2024-07-14 10:44:51.449990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.688 [2024-07-14 10:44:51.450021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.688 qpair failed and we were unable to recover it. 00:36:06.688 [2024-07-14 10:44:51.450145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.688 [2024-07-14 10:44:51.450177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.688 qpair failed and we were unable to recover it. 00:36:06.688 [2024-07-14 10:44:51.450307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.688 [2024-07-14 10:44:51.450339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.688 qpair failed and we were unable to recover it. 00:36:06.688 [2024-07-14 10:44:51.450516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.688 [2024-07-14 10:44:51.450547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.688 qpair failed and we were unable to recover it. 
00:36:06.688 [2024-07-14 10:44:51.450734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.688 [2024-07-14 10:44:51.450764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.688 qpair failed and we were unable to recover it. 00:36:06.688 [2024-07-14 10:44:51.450964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.688 [2024-07-14 10:44:51.450996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.688 qpair failed and we were unable to recover it. 00:36:06.688 [2024-07-14 10:44:51.451102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.688 [2024-07-14 10:44:51.451133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.688 qpair failed and we were unable to recover it. 00:36:06.688 [2024-07-14 10:44:51.451317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.688 [2024-07-14 10:44:51.451350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.688 qpair failed and we were unable to recover it. 00:36:06.688 [2024-07-14 10:44:51.451540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.688 [2024-07-14 10:44:51.451571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.688 qpair failed and we were unable to recover it. 00:36:06.688 [2024-07-14 10:44:51.451693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.688 [2024-07-14 10:44:51.451724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.688 qpair failed and we were unable to recover it. 00:36:06.688 [2024-07-14 10:44:51.451848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.688 [2024-07-14 10:44:51.451879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.688 qpair failed and we were unable to recover it. 00:36:06.688 [2024-07-14 10:44:51.452008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.688 [2024-07-14 10:44:51.452040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.688 qpair failed and we were unable to recover it. 00:36:06.688 [2024-07-14 10:44:51.452184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.688 [2024-07-14 10:44:51.452215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.688 qpair failed and we were unable to recover it. 00:36:06.688 [2024-07-14 10:44:51.452441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.688 [2024-07-14 10:44:51.452473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.688 qpair failed and we were unable to recover it. 
00:36:06.688 [2024-07-14 10:44:51.452718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.688 [2024-07-14 10:44:51.452749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.688 qpair failed and we were unable to recover it. 00:36:06.688 [2024-07-14 10:44:51.453063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.688 [2024-07-14 10:44:51.453099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.688 qpair failed and we were unable to recover it. 00:36:06.688 [2024-07-14 10:44:51.453388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.688 [2024-07-14 10:44:51.453420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.688 qpair failed and we were unable to recover it. 00:36:06.688 [2024-07-14 10:44:51.453689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.688 [2024-07-14 10:44:51.453719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.688 qpair failed and we were unable to recover it. 00:36:06.688 [2024-07-14 10:44:51.453845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.688 [2024-07-14 10:44:51.453876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.688 qpair failed and we were unable to recover it. 00:36:06.688 [2024-07-14 10:44:51.454064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.688 [2024-07-14 10:44:51.454095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.688 qpair failed and we were unable to recover it. 00:36:06.688 [2024-07-14 10:44:51.454242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.688 [2024-07-14 10:44:51.454274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.688 qpair failed and we were unable to recover it. 00:36:06.688 [2024-07-14 10:44:51.454396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.688 [2024-07-14 10:44:51.454426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.688 qpair failed and we were unable to recover it. 00:36:06.688 [2024-07-14 10:44:51.454615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.688 [2024-07-14 10:44:51.454646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.689 qpair failed and we were unable to recover it. 00:36:06.689 [2024-07-14 10:44:51.454824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.689 [2024-07-14 10:44:51.454855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.689 qpair failed and we were unable to recover it. 
00:36:06.689 [2024-07-14 10:44:51.454977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.689 [2024-07-14 10:44:51.455008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.689 qpair failed and we were unable to recover it. 00:36:06.689 [2024-07-14 10:44:51.455147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.689 [2024-07-14 10:44:51.455178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.689 qpair failed and we were unable to recover it. 00:36:06.689 [2024-07-14 10:44:51.455330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.689 [2024-07-14 10:44:51.455362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.689 qpair failed and we were unable to recover it. 00:36:06.689 [2024-07-14 10:44:51.455548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.689 [2024-07-14 10:44:51.455579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.689 qpair failed and we were unable to recover it. 00:36:06.689 [2024-07-14 10:44:51.455711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.689 [2024-07-14 10:44:51.455742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.689 qpair failed and we were unable to recover it. 00:36:06.689 [2024-07-14 10:44:51.455948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.689 [2024-07-14 10:44:51.455979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.689 qpair failed and we were unable to recover it. 00:36:06.689 [2024-07-14 10:44:51.456088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.689 [2024-07-14 10:44:51.456119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.689 qpair failed and we were unable to recover it. 00:36:06.689 [2024-07-14 10:44:51.456300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.689 [2024-07-14 10:44:51.456333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.689 qpair failed and we were unable to recover it. 00:36:06.689 [2024-07-14 10:44:51.456451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.689 [2024-07-14 10:44:51.456482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.689 qpair failed and we were unable to recover it. 00:36:06.689 [2024-07-14 10:44:51.456681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.689 [2024-07-14 10:44:51.456712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.689 qpair failed and we were unable to recover it. 
00:36:06.689 [2024-07-14 10:44:51.456891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.689 [2024-07-14 10:44:51.456922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.689 qpair failed and we were unable to recover it. 00:36:06.689 [2024-07-14 10:44:51.457131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.689 [2024-07-14 10:44:51.457162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.689 qpair failed and we were unable to recover it. 00:36:06.689 [2024-07-14 10:44:51.457341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.689 [2024-07-14 10:44:51.457373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.689 qpair failed and we were unable to recover it. 00:36:06.689 [2024-07-14 10:44:51.457586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.689 [2024-07-14 10:44:51.457617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.689 qpair failed and we were unable to recover it. 00:36:06.689 [2024-07-14 10:44:51.457810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.689 [2024-07-14 10:44:51.457841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.689 qpair failed and we were unable to recover it. 00:36:06.689 [2024-07-14 10:44:51.457974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.689 [2024-07-14 10:44:51.458005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.689 qpair failed and we were unable to recover it. 00:36:06.689 [2024-07-14 10:44:51.458190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.689 [2024-07-14 10:44:51.458235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.689 qpair failed and we were unable to recover it. 00:36:06.689 [2024-07-14 10:44:51.458415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.689 [2024-07-14 10:44:51.458446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.689 qpair failed and we were unable to recover it. 00:36:06.689 [2024-07-14 10:44:51.458719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.689 [2024-07-14 10:44:51.458789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.689 qpair failed and we were unable to recover it. 00:36:06.689 [2024-07-14 10:44:51.458999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.689 [2024-07-14 10:44:51.459033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.689 qpair failed and we were unable to recover it. 
00:36:06.689 [2024-07-14 10:44:51.459160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.689 [2024-07-14 10:44:51.459192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.689 qpair failed and we were unable to recover it. 00:36:06.689 [2024-07-14 10:44:51.459339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.689 [2024-07-14 10:44:51.459373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.689 qpair failed and we were unable to recover it. 00:36:06.689 [2024-07-14 10:44:51.459560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.689 [2024-07-14 10:44:51.459592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.689 qpair failed and we were unable to recover it. 00:36:06.690 [2024-07-14 10:44:51.459774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.690 [2024-07-14 10:44:51.459805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.690 qpair failed and we were unable to recover it. 00:36:06.690 [2024-07-14 10:44:51.459996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.690 [2024-07-14 10:44:51.460027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.690 qpair failed and we were unable to recover it. 00:36:06.690 [2024-07-14 10:44:51.460247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.690 [2024-07-14 10:44:51.460280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.690 qpair failed and we were unable to recover it. 00:36:06.690 [2024-07-14 10:44:51.460529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.690 [2024-07-14 10:44:51.460560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.690 qpair failed and we were unable to recover it. 00:36:06.690 [2024-07-14 10:44:51.460688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.690 [2024-07-14 10:44:51.460719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.690 qpair failed and we were unable to recover it. 00:36:06.690 [2024-07-14 10:44:51.460843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.690 [2024-07-14 10:44:51.460874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.690 qpair failed and we were unable to recover it. 00:36:06.690 [2024-07-14 10:44:51.461052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.690 [2024-07-14 10:44:51.461084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.690 qpair failed and we were unable to recover it. 
00:36:06.690 [2024-07-14 10:44:51.461278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.690 [2024-07-14 10:44:51.461311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.690 qpair failed and we were unable to recover it. 00:36:06.690 [2024-07-14 10:44:51.461426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.690 [2024-07-14 10:44:51.461458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.690 qpair failed and we were unable to recover it. 00:36:06.690 [2024-07-14 10:44:51.461683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.690 [2024-07-14 10:44:51.461714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.690 qpair failed and we were unable to recover it. 00:36:06.690 [2024-07-14 10:44:51.461957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.690 [2024-07-14 10:44:51.461988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.690 qpair failed and we were unable to recover it. 00:36:06.690 [2024-07-14 10:44:51.462166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.690 [2024-07-14 10:44:51.462197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.690 qpair failed and we were unable to recover it. 00:36:06.690 [2024-07-14 10:44:51.462328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.690 [2024-07-14 10:44:51.462363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.690 qpair failed and we were unable to recover it. 00:36:06.690 [2024-07-14 10:44:51.462568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.690 [2024-07-14 10:44:51.462602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.690 qpair failed and we were unable to recover it. 00:36:06.690 [2024-07-14 10:44:51.462876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.690 [2024-07-14 10:44:51.462907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.690 qpair failed and we were unable to recover it. 00:36:06.690 [2024-07-14 10:44:51.463083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.690 [2024-07-14 10:44:51.463114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.690 qpair failed and we were unable to recover it. 00:36:06.690 [2024-07-14 10:44:51.463223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.690 [2024-07-14 10:44:51.463267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.690 qpair failed and we were unable to recover it. 
00:36:06.690 [2024-07-14 10:44:51.463408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.690 [2024-07-14 10:44:51.463440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.690 qpair failed and we were unable to recover it. 00:36:06.690 [2024-07-14 10:44:51.463622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.690 [2024-07-14 10:44:51.463653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.690 qpair failed and we were unable to recover it. 00:36:06.690 [2024-07-14 10:44:51.463798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.690 [2024-07-14 10:44:51.463829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.690 qpair failed and we were unable to recover it. 00:36:06.690 [2024-07-14 10:44:51.464017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.690 [2024-07-14 10:44:51.464048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.690 qpair failed and we were unable to recover it. 00:36:06.690 [2024-07-14 10:44:51.464170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.690 [2024-07-14 10:44:51.464202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.690 qpair failed and we were unable to recover it. 00:36:06.690 [2024-07-14 10:44:51.464405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.690 [2024-07-14 10:44:51.464443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.690 qpair failed and we were unable to recover it. 00:36:06.690 [2024-07-14 10:44:51.464560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.690 [2024-07-14 10:44:51.464591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.690 qpair failed and we were unable to recover it. 00:36:06.690 [2024-07-14 10:44:51.464850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.690 [2024-07-14 10:44:51.464882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.690 qpair failed and we were unable to recover it. 00:36:06.690 [2024-07-14 10:44:51.465061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.690 [2024-07-14 10:44:51.465091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.690 qpair failed and we were unable to recover it. 00:36:06.690 [2024-07-14 10:44:51.465286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.690 [2024-07-14 10:44:51.465318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.690 qpair failed and we were unable to recover it. 
00:36:06.690 [2024-07-14 10:44:51.465448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.690 [2024-07-14 10:44:51.465479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.690 qpair failed and we were unable to recover it. 00:36:06.690 [2024-07-14 10:44:51.465686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.690 [2024-07-14 10:44:51.465718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.690 qpair failed and we were unable to recover it. 00:36:06.690 [2024-07-14 10:44:51.465829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.690 [2024-07-14 10:44:51.465861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.690 qpair failed and we were unable to recover it. 00:36:06.690 [2024-07-14 10:44:51.466119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.690 [2024-07-14 10:44:51.466150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.690 qpair failed and we were unable to recover it. 00:36:06.690 [2024-07-14 10:44:51.466338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.690 [2024-07-14 10:44:51.466373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.690 qpair failed and we were unable to recover it. 00:36:06.690 [2024-07-14 10:44:51.466514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.690 [2024-07-14 10:44:51.466545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.690 qpair failed and we were unable to recover it. 00:36:06.690 [2024-07-14 10:44:51.466728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.690 [2024-07-14 10:44:51.466759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.690 qpair failed and we were unable to recover it. 00:36:06.690 [2024-07-14 10:44:51.466940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.690 [2024-07-14 10:44:51.466971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.690 qpair failed and we were unable to recover it. 00:36:06.690 [2024-07-14 10:44:51.467145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.690 [2024-07-14 10:44:51.467177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.690 qpair failed and we were unable to recover it. 00:36:06.690 [2024-07-14 10:44:51.467308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.690 [2024-07-14 10:44:51.467340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.690 qpair failed and we were unable to recover it. 
00:36:06.690 [2024-07-14 10:44:51.467523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.690 [2024-07-14 10:44:51.467555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.690 qpair failed and we were unable to recover it. 00:36:06.690 [2024-07-14 10:44:51.467672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.690 [2024-07-14 10:44:51.467703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.690 qpair failed and we were unable to recover it. 00:36:06.690 [2024-07-14 10:44:51.467886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.690 [2024-07-14 10:44:51.467917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.690 qpair failed and we were unable to recover it. 00:36:06.690 [2024-07-14 10:44:51.468043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.690 [2024-07-14 10:44:51.468075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.690 qpair failed and we were unable to recover it. 00:36:06.690 [2024-07-14 10:44:51.468218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.691 [2024-07-14 10:44:51.468258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.691 qpair failed and we were unable to recover it. 00:36:06.691 [2024-07-14 10:44:51.468382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.691 [2024-07-14 10:44:51.468414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.691 qpair failed and we were unable to recover it. 00:36:06.691 [2024-07-14 10:44:51.468611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.691 [2024-07-14 10:44:51.468643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.691 qpair failed and we were unable to recover it. 00:36:06.691 [2024-07-14 10:44:51.468825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.691 [2024-07-14 10:44:51.468856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.691 qpair failed and we were unable to recover it. 00:36:06.691 [2024-07-14 10:44:51.468976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.691 [2024-07-14 10:44:51.469008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.691 qpair failed and we were unable to recover it. 00:36:06.691 [2024-07-14 10:44:51.469182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.691 [2024-07-14 10:44:51.469219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.691 qpair failed and we were unable to recover it. 
00:36:06.691 [2024-07-14 10:44:51.469410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.691 [2024-07-14 10:44:51.469441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.691 qpair failed and we were unable to recover it. 00:36:06.691 [2024-07-14 10:44:51.469719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.691 [2024-07-14 10:44:51.469751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.691 qpair failed and we were unable to recover it. 00:36:06.691 [2024-07-14 10:44:51.469939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.691 [2024-07-14 10:44:51.469975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.691 qpair failed and we were unable to recover it. 00:36:06.691 [2024-07-14 10:44:51.470080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.691 [2024-07-14 10:44:51.470110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.691 qpair failed and we were unable to recover it. 00:36:06.691 [2024-07-14 10:44:51.470220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.691 [2024-07-14 10:44:51.470276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.691 qpair failed and we were unable to recover it. 00:36:06.691 [2024-07-14 10:44:51.470422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.691 [2024-07-14 10:44:51.470452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.691 qpair failed and we were unable to recover it. 00:36:06.691 [2024-07-14 10:44:51.470564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.691 [2024-07-14 10:44:51.470595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.691 qpair failed and we were unable to recover it. 00:36:06.691 [2024-07-14 10:44:51.470716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.691 [2024-07-14 10:44:51.470748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.691 qpair failed and we were unable to recover it. 00:36:06.691 [2024-07-14 10:44:51.470872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.691 [2024-07-14 10:44:51.470904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.691 qpair failed and we were unable to recover it. 00:36:06.691 [2024-07-14 10:44:51.471155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.691 [2024-07-14 10:44:51.471187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.691 qpair failed and we were unable to recover it. 
00:36:06.691 [2024-07-14 10:44:51.471387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.691 [2024-07-14 10:44:51.471421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.691 qpair failed and we were unable to recover it. 00:36:06.691 [2024-07-14 10:44:51.471600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.691 [2024-07-14 10:44:51.471631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.691 qpair failed and we were unable to recover it. 00:36:06.691 [2024-07-14 10:44:51.471812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.691 [2024-07-14 10:44:51.471843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.691 qpair failed and we were unable to recover it. 00:36:06.691 [2024-07-14 10:44:51.471965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.691 [2024-07-14 10:44:51.471996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.691 qpair failed and we were unable to recover it. 00:36:06.691 [2024-07-14 10:44:51.472245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.691 [2024-07-14 10:44:51.472277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.691 qpair failed and we were unable to recover it. 00:36:06.691 [2024-07-14 10:44:51.472452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.691 [2024-07-14 10:44:51.472483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.691 qpair failed and we were unable to recover it. 00:36:06.691 [2024-07-14 10:44:51.472702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.691 [2024-07-14 10:44:51.472734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.691 qpair failed and we were unable to recover it. 00:36:06.691 [2024-07-14 10:44:51.472925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.691 [2024-07-14 10:44:51.472956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.691 qpair failed and we were unable to recover it. 00:36:06.691 [2024-07-14 10:44:51.473132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.691 [2024-07-14 10:44:51.473163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.691 qpair failed and we were unable to recover it. 00:36:06.691 [2024-07-14 10:44:51.473292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.691 [2024-07-14 10:44:51.473324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.691 qpair failed and we were unable to recover it. 
00:36:06.691 [2024-07-14 10:44:51.473450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.691 [2024-07-14 10:44:51.473481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.691 qpair failed and we were unable to recover it. 00:36:06.691 [2024-07-14 10:44:51.473678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.691 [2024-07-14 10:44:51.473709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.691 qpair failed and we were unable to recover it. 00:36:06.691 [2024-07-14 10:44:51.473831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.691 [2024-07-14 10:44:51.473862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.691 qpair failed and we were unable to recover it. 00:36:06.691 [2024-07-14 10:44:51.473977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.691 [2024-07-14 10:44:51.474008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.691 qpair failed and we were unable to recover it. 00:36:06.691 [2024-07-14 10:44:51.474194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.691 [2024-07-14 10:44:51.474255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.691 qpair failed and we were unable to recover it. 00:36:06.691 [2024-07-14 10:44:51.474520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.691 [2024-07-14 10:44:51.474552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.691 qpair failed and we were unable to recover it. 00:36:06.691 [2024-07-14 10:44:51.474692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.691 [2024-07-14 10:44:51.474724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.691 qpair failed and we were unable to recover it. 00:36:06.691 [2024-07-14 10:44:51.474920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.691 [2024-07-14 10:44:51.474951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.691 qpair failed and we were unable to recover it. 00:36:06.691 [2024-07-14 10:44:51.475132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.691 [2024-07-14 10:44:51.475162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.691 qpair failed and we were unable to recover it. 00:36:06.691 [2024-07-14 10:44:51.475287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.691 [2024-07-14 10:44:51.475318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.691 qpair failed and we were unable to recover it. 
00:36:06.691 [2024-07-14 10:44:51.475464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.691 [2024-07-14 10:44:51.475496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.691 qpair failed and we were unable to recover it. 00:36:06.691 [2024-07-14 10:44:51.475677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.691 [2024-07-14 10:44:51.475708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.691 qpair failed and we were unable to recover it. 00:36:06.691 [2024-07-14 10:44:51.475892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.691 [2024-07-14 10:44:51.475926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.691 qpair failed and we were unable to recover it. 00:36:06.691 [2024-07-14 10:44:51.476110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.691 [2024-07-14 10:44:51.476141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.691 qpair failed and we were unable to recover it. 00:36:06.691 [2024-07-14 10:44:51.476407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.691 [2024-07-14 10:44:51.476439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.691 qpair failed and we were unable to recover it. 00:36:06.691 [2024-07-14 10:44:51.476617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.691 [2024-07-14 10:44:51.476647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.692 qpair failed and we were unable to recover it. 00:36:06.692 [2024-07-14 10:44:51.476776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.692 [2024-07-14 10:44:51.476807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.692 qpair failed and we were unable to recover it. 00:36:06.692 [2024-07-14 10:44:51.476949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.692 [2024-07-14 10:44:51.476980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.692 qpair failed and we were unable to recover it. 00:36:06.692 [2024-07-14 10:44:51.477171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.692 [2024-07-14 10:44:51.477202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.692 qpair failed and we were unable to recover it. 00:36:06.692 [2024-07-14 10:44:51.477354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.692 [2024-07-14 10:44:51.477385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.692 qpair failed and we were unable to recover it. 
00:36:06.692 [2024-07-14 10:44:51.477513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.692 [2024-07-14 10:44:51.477544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.692 qpair failed and we were unable to recover it. 00:36:06.692 [2024-07-14 10:44:51.477652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.692 [2024-07-14 10:44:51.477683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.692 qpair failed and we were unable to recover it. 00:36:06.692 [2024-07-14 10:44:51.477948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.692 [2024-07-14 10:44:51.477979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.692 qpair failed and we were unable to recover it. 00:36:06.692 [2024-07-14 10:44:51.478169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.692 [2024-07-14 10:44:51.478200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.692 qpair failed and we were unable to recover it. 00:36:06.692 [2024-07-14 10:44:51.478343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.692 [2024-07-14 10:44:51.478377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.692 qpair failed and we were unable to recover it. 00:36:06.692 [2024-07-14 10:44:51.478559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.692 [2024-07-14 10:44:51.478590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.692 qpair failed and we were unable to recover it. 00:36:06.692 [2024-07-14 10:44:51.478711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.692 [2024-07-14 10:44:51.478742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.692 qpair failed and we were unable to recover it. 00:36:06.692 [2024-07-14 10:44:51.478862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.692 [2024-07-14 10:44:51.478893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.692 qpair failed and we were unable to recover it. 00:36:06.692 [2024-07-14 10:44:51.479072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.692 [2024-07-14 10:44:51.479103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.692 qpair failed and we were unable to recover it. 00:36:06.692 [2024-07-14 10:44:51.479294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.692 [2024-07-14 10:44:51.479327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.692 qpair failed and we were unable to recover it. 
00:36:06.692 [2024-07-14 10:44:51.479529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.692 [2024-07-14 10:44:51.479561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.692 qpair failed and we were unable to recover it. 00:36:06.692 [2024-07-14 10:44:51.479762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.692 [2024-07-14 10:44:51.479793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.692 qpair failed and we were unable to recover it. 00:36:06.692 [2024-07-14 10:44:51.479913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.692 [2024-07-14 10:44:51.479945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.692 qpair failed and we were unable to recover it. 00:36:06.692 [2024-07-14 10:44:51.480069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.692 [2024-07-14 10:44:51.480101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.692 qpair failed and we were unable to recover it. 00:36:06.692 [2024-07-14 10:44:51.480289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.692 [2024-07-14 10:44:51.480322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.692 qpair failed and we were unable to recover it. 00:36:06.692 [2024-07-14 10:44:51.480569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.692 [2024-07-14 10:44:51.480600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.692 qpair failed and we were unable to recover it. 00:36:06.692 [2024-07-14 10:44:51.480720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.692 [2024-07-14 10:44:51.480752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.692 qpair failed and we were unable to recover it. 00:36:06.692 [2024-07-14 10:44:51.480887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.692 [2024-07-14 10:44:51.480918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.692 qpair failed and we were unable to recover it. 00:36:06.692 [2024-07-14 10:44:51.481177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.692 [2024-07-14 10:44:51.481208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.692 qpair failed and we were unable to recover it. 00:36:06.692 [2024-07-14 10:44:51.481339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.692 [2024-07-14 10:44:51.481370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.692 qpair failed and we were unable to recover it. 
00:36:06.692 [2024-07-14 10:44:51.481548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.692 [2024-07-14 10:44:51.481579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.692 qpair failed and we were unable to recover it. 00:36:06.692 [2024-07-14 10:44:51.481776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.692 [2024-07-14 10:44:51.481808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.692 qpair failed and we were unable to recover it. 00:36:06.692 [2024-07-14 10:44:51.482072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.692 [2024-07-14 10:44:51.482103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.692 qpair failed and we were unable to recover it. 00:36:06.692 [2024-07-14 10:44:51.482216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.692 [2024-07-14 10:44:51.482268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.692 qpair failed and we were unable to recover it. 00:36:06.692 [2024-07-14 10:44:51.482466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.692 [2024-07-14 10:44:51.482498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.692 qpair failed and we were unable to recover it. 00:36:06.692 [2024-07-14 10:44:51.482648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.692 [2024-07-14 10:44:51.482679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.692 qpair failed and we were unable to recover it. 00:36:06.692 [2024-07-14 10:44:51.482860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.692 [2024-07-14 10:44:51.482891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.692 qpair failed and we were unable to recover it. 00:36:06.692 [2024-07-14 10:44:51.483022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.692 [2024-07-14 10:44:51.483054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.692 qpair failed and we were unable to recover it. 00:36:06.692 [2024-07-14 10:44:51.483259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.692 [2024-07-14 10:44:51.483291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.692 qpair failed and we were unable to recover it. 00:36:06.692 [2024-07-14 10:44:51.483463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.692 [2024-07-14 10:44:51.483494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.692 qpair failed and we were unable to recover it. 
00:36:06.692 [2024-07-14 10:44:51.483621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.692 [2024-07-14 10:44:51.483658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:06.692 qpair failed and we were unable to recover it.
[the same pair of errors repeats continuously over this interval: posix_sock_create connect() failed with errno = 111, followed by an unrecoverable sock connection error and "qpair failed and we were unable to recover it", first for tqpair=0x1b1fb60 (through 10:44:51.497448) and then for tqpair=0x7fbe7c000b90 (from 10:44:51.497611 onward), always against addr=10.0.0.2, port=4420]
00:36:06.698 [2024-07-14 10:44:51.526707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.698 [2024-07-14 10:44:51.526739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.698 qpair failed and we were unable to recover it.
00:36:06.698 [2024-07-14 10:44:51.527006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.698 [2024-07-14 10:44:51.527038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.698 qpair failed and we were unable to recover it. 00:36:06.698 [2024-07-14 10:44:51.527153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.698 [2024-07-14 10:44:51.527184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.698 qpair failed and we were unable to recover it. 00:36:06.698 [2024-07-14 10:44:51.527385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.698 [2024-07-14 10:44:51.527417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.698 qpair failed and we were unable to recover it. 00:36:06.698 [2024-07-14 10:44:51.527596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.698 [2024-07-14 10:44:51.527627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.698 qpair failed and we were unable to recover it. 00:36:06.698 [2024-07-14 10:44:51.527745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.698 [2024-07-14 10:44:51.527776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.698 qpair failed and we were unable to recover it. 00:36:06.698 [2024-07-14 10:44:51.527911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.698 [2024-07-14 10:44:51.527942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.698 qpair failed and we were unable to recover it. 00:36:06.698 [2024-07-14 10:44:51.528138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.698 [2024-07-14 10:44:51.528168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.698 qpair failed and we were unable to recover it. 00:36:06.698 [2024-07-14 10:44:51.528360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.698 [2024-07-14 10:44:51.528391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.698 qpair failed and we were unable to recover it. 00:36:06.698 [2024-07-14 10:44:51.528577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.698 [2024-07-14 10:44:51.528608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.698 qpair failed and we were unable to recover it. 00:36:06.698 [2024-07-14 10:44:51.528729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.698 [2024-07-14 10:44:51.528759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.698 qpair failed and we were unable to recover it. 
00:36:06.698 [2024-07-14 10:44:51.529002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.698 [2024-07-14 10:44:51.529033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.698 qpair failed and we were unable to recover it. 00:36:06.698 [2024-07-14 10:44:51.529216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.698 [2024-07-14 10:44:51.529254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.698 qpair failed and we were unable to recover it. 00:36:06.698 [2024-07-14 10:44:51.529452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.698 [2024-07-14 10:44:51.529482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.698 qpair failed and we were unable to recover it. 00:36:06.698 [2024-07-14 10:44:51.529595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.698 [2024-07-14 10:44:51.529624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.698 qpair failed and we were unable to recover it. 00:36:06.698 [2024-07-14 10:44:51.529762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.698 [2024-07-14 10:44:51.529793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.698 qpair failed and we were unable to recover it. 00:36:06.698 [2024-07-14 10:44:51.529971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.698 [2024-07-14 10:44:51.530001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.698 qpair failed and we were unable to recover it. 00:36:06.698 [2024-07-14 10:44:51.530187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.698 [2024-07-14 10:44:51.530218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.698 qpair failed and we were unable to recover it. 00:36:06.698 [2024-07-14 10:44:51.530420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.698 [2024-07-14 10:44:51.530451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.698 qpair failed and we were unable to recover it. 00:36:06.698 [2024-07-14 10:44:51.530702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.698 [2024-07-14 10:44:51.530733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.698 qpair failed and we were unable to recover it. 00:36:06.698 [2024-07-14 10:44:51.530937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.698 [2024-07-14 10:44:51.530968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.698 qpair failed and we were unable to recover it. 
00:36:06.698 [2024-07-14 10:44:51.531149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.698 [2024-07-14 10:44:51.531179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.698 qpair failed and we were unable to recover it. 00:36:06.698 [2024-07-14 10:44:51.531446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.698 [2024-07-14 10:44:51.531478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.698 qpair failed and we were unable to recover it. 00:36:06.698 [2024-07-14 10:44:51.531685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.698 [2024-07-14 10:44:51.531716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.698 qpair failed and we were unable to recover it. 00:36:06.698 [2024-07-14 10:44:51.531927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.698 [2024-07-14 10:44:51.531958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.698 qpair failed and we were unable to recover it. 00:36:06.698 [2024-07-14 10:44:51.532135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.698 [2024-07-14 10:44:51.532166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.698 qpair failed and we were unable to recover it. 00:36:06.698 [2024-07-14 10:44:51.532418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.698 [2024-07-14 10:44:51.532449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.698 qpair failed and we were unable to recover it. 00:36:06.698 [2024-07-14 10:44:51.532580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.698 [2024-07-14 10:44:51.532611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.698 qpair failed and we were unable to recover it. 00:36:06.698 [2024-07-14 10:44:51.532812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.698 [2024-07-14 10:44:51.532843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.698 qpair failed and we were unable to recover it. 00:36:06.698 [2024-07-14 10:44:51.533021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.698 [2024-07-14 10:44:51.533051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.698 qpair failed and we were unable to recover it. 00:36:06.698 [2024-07-14 10:44:51.533239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.698 [2024-07-14 10:44:51.533271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.698 qpair failed and we were unable to recover it. 
00:36:06.698 [2024-07-14 10:44:51.533513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.698 [2024-07-14 10:44:51.533545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.698 qpair failed and we were unable to recover it. 00:36:06.698 [2024-07-14 10:44:51.533800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.698 [2024-07-14 10:44:51.533836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.698 qpair failed and we were unable to recover it. 00:36:06.698 [2024-07-14 10:44:51.534103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.698 [2024-07-14 10:44:51.534135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.698 qpair failed and we were unable to recover it. 00:36:06.698 [2024-07-14 10:44:51.534281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.699 [2024-07-14 10:44:51.534314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.699 qpair failed and we were unable to recover it. 00:36:06.699 [2024-07-14 10:44:51.534560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.699 [2024-07-14 10:44:51.534591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.699 qpair failed and we were unable to recover it. 00:36:06.699 [2024-07-14 10:44:51.534839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.699 [2024-07-14 10:44:51.534869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.699 qpair failed and we were unable to recover it. 00:36:06.699 [2024-07-14 10:44:51.534997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.699 [2024-07-14 10:44:51.535027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.699 qpair failed and we were unable to recover it. 00:36:06.699 [2024-07-14 10:44:51.535154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.699 [2024-07-14 10:44:51.535185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.699 qpair failed and we were unable to recover it. 00:36:06.699 [2024-07-14 10:44:51.535385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.699 [2024-07-14 10:44:51.535416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.699 qpair failed and we were unable to recover it. 00:36:06.699 [2024-07-14 10:44:51.535655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.699 [2024-07-14 10:44:51.535685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.699 qpair failed and we were unable to recover it. 
00:36:06.699 [2024-07-14 10:44:51.535815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.699 [2024-07-14 10:44:51.535847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.699 qpair failed and we were unable to recover it. 00:36:06.699 [2024-07-14 10:44:51.535971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.699 [2024-07-14 10:44:51.536002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.699 qpair failed and we were unable to recover it. 00:36:06.699 [2024-07-14 10:44:51.536249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.699 [2024-07-14 10:44:51.536280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.699 qpair failed and we were unable to recover it. 00:36:06.699 [2024-07-14 10:44:51.536399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.699 [2024-07-14 10:44:51.536430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.699 qpair failed and we were unable to recover it. 00:36:06.699 [2024-07-14 10:44:51.536629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.699 [2024-07-14 10:44:51.536661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.699 qpair failed and we were unable to recover it. 00:36:06.699 [2024-07-14 10:44:51.536791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.699 [2024-07-14 10:44:51.536822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.699 qpair failed and we were unable to recover it. 00:36:06.699 [2024-07-14 10:44:51.536958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.699 [2024-07-14 10:44:51.536990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.699 qpair failed and we were unable to recover it. 00:36:06.699 [2024-07-14 10:44:51.537128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.699 [2024-07-14 10:44:51.537158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.699 qpair failed and we were unable to recover it. 00:36:06.699 [2024-07-14 10:44:51.537407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.699 [2024-07-14 10:44:51.537440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.699 qpair failed and we were unable to recover it. 00:36:06.699 [2024-07-14 10:44:51.537577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.699 [2024-07-14 10:44:51.537608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.699 qpair failed and we were unable to recover it. 
00:36:06.699 [2024-07-14 10:44:51.537731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.699 [2024-07-14 10:44:51.537762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.699 qpair failed and we were unable to recover it. 00:36:06.699 [2024-07-14 10:44:51.537939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.699 [2024-07-14 10:44:51.537970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.699 qpair failed and we were unable to recover it. 00:36:06.699 [2024-07-14 10:44:51.538075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.699 [2024-07-14 10:44:51.538106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.699 qpair failed and we were unable to recover it. 00:36:06.699 [2024-07-14 10:44:51.538245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.699 [2024-07-14 10:44:51.538277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.699 qpair failed and we were unable to recover it. 00:36:06.699 [2024-07-14 10:44:51.538402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.699 [2024-07-14 10:44:51.538433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.699 qpair failed and we were unable to recover it. 00:36:06.699 [2024-07-14 10:44:51.538609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.699 [2024-07-14 10:44:51.538639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.699 qpair failed and we were unable to recover it. 00:36:06.699 [2024-07-14 10:44:51.538860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.699 [2024-07-14 10:44:51.538891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.699 qpair failed and we were unable to recover it. 00:36:06.699 [2024-07-14 10:44:51.539069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.699 [2024-07-14 10:44:51.539099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.699 qpair failed and we were unable to recover it. 00:36:06.699 [2024-07-14 10:44:51.539299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.699 [2024-07-14 10:44:51.539332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.699 qpair failed and we were unable to recover it. 00:36:06.699 [2024-07-14 10:44:51.539525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.699 [2024-07-14 10:44:51.539556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.699 qpair failed and we were unable to recover it. 
00:36:06.699 [2024-07-14 10:44:51.539689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.699 [2024-07-14 10:44:51.539720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.699 qpair failed and we were unable to recover it. 00:36:06.699 [2024-07-14 10:44:51.539906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.699 [2024-07-14 10:44:51.539937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.699 qpair failed and we were unable to recover it. 00:36:06.699 [2024-07-14 10:44:51.540145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.699 [2024-07-14 10:44:51.540176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.699 qpair failed and we were unable to recover it. 00:36:06.699 [2024-07-14 10:44:51.540329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.699 [2024-07-14 10:44:51.540360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.699 qpair failed and we were unable to recover it. 00:36:06.699 [2024-07-14 10:44:51.540558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.699 [2024-07-14 10:44:51.540588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.699 qpair failed and we were unable to recover it. 00:36:06.699 [2024-07-14 10:44:51.540755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.699 [2024-07-14 10:44:51.540786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.699 qpair failed and we were unable to recover it. 00:36:06.699 [2024-07-14 10:44:51.540960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.699 [2024-07-14 10:44:51.540991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.699 qpair failed and we were unable to recover it. 00:36:06.699 [2024-07-14 10:44:51.541170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.699 [2024-07-14 10:44:51.541203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.699 qpair failed and we were unable to recover it. 00:36:06.699 [2024-07-14 10:44:51.541400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.699 [2024-07-14 10:44:51.541432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.699 qpair failed and we were unable to recover it. 00:36:06.699 [2024-07-14 10:44:51.541698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.699 [2024-07-14 10:44:51.541729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.699 qpair failed and we were unable to recover it. 
00:36:06.699 [2024-07-14 10:44:51.541986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.699 [2024-07-14 10:44:51.542016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.699 qpair failed and we were unable to recover it. 00:36:06.699 [2024-07-14 10:44:51.542262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.699 [2024-07-14 10:44:51.542293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.699 qpair failed and we were unable to recover it. 00:36:06.699 [2024-07-14 10:44:51.542516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.699 [2024-07-14 10:44:51.542547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.699 qpair failed and we were unable to recover it. 00:36:06.699 [2024-07-14 10:44:51.542741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.699 [2024-07-14 10:44:51.542772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.699 qpair failed and we were unable to recover it. 00:36:06.699 [2024-07-14 10:44:51.542992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.700 [2024-07-14 10:44:51.543023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.700 qpair failed and we were unable to recover it. 00:36:06.700 [2024-07-14 10:44:51.543203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.700 [2024-07-14 10:44:51.543242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.700 qpair failed and we were unable to recover it. 00:36:06.700 [2024-07-14 10:44:51.543527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.700 [2024-07-14 10:44:51.543558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.700 qpair failed and we were unable to recover it. 00:36:06.700 [2024-07-14 10:44:51.543678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.700 [2024-07-14 10:44:51.543709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.700 qpair failed and we were unable to recover it. 00:36:06.700 [2024-07-14 10:44:51.543848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.700 [2024-07-14 10:44:51.543879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.700 qpair failed and we were unable to recover it. 00:36:06.700 [2024-07-14 10:44:51.544145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.700 [2024-07-14 10:44:51.544175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.700 qpair failed and we were unable to recover it. 
00:36:06.700 [2024-07-14 10:44:51.544359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.700 [2024-07-14 10:44:51.544392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.700 qpair failed and we were unable to recover it. 00:36:06.700 [2024-07-14 10:44:51.544510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.700 [2024-07-14 10:44:51.544541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.700 qpair failed and we were unable to recover it. 00:36:06.700 [2024-07-14 10:44:51.544683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.700 [2024-07-14 10:44:51.544714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.700 qpair failed and we were unable to recover it. 00:36:06.700 [2024-07-14 10:44:51.544979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.700 [2024-07-14 10:44:51.545009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.700 qpair failed and we were unable to recover it. 00:36:06.700 [2024-07-14 10:44:51.545277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.700 [2024-07-14 10:44:51.545309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.700 qpair failed and we were unable to recover it. 00:36:06.700 [2024-07-14 10:44:51.545582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.700 [2024-07-14 10:44:51.545613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.700 qpair failed and we were unable to recover it. 00:36:06.700 [2024-07-14 10:44:51.545834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.700 [2024-07-14 10:44:51.545866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.700 qpair failed and we were unable to recover it. 00:36:06.700 [2024-07-14 10:44:51.546113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.700 [2024-07-14 10:44:51.546144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.700 qpair failed and we were unable to recover it. 00:36:06.700 [2024-07-14 10:44:51.546333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.700 [2024-07-14 10:44:51.546365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.700 qpair failed and we were unable to recover it. 00:36:06.700 [2024-07-14 10:44:51.546494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.700 [2024-07-14 10:44:51.546524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.700 qpair failed and we were unable to recover it. 
00:36:06.700 [2024-07-14 10:44:51.546663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.700 [2024-07-14 10:44:51.546694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.700 qpair failed and we were unable to recover it. 00:36:06.700 [2024-07-14 10:44:51.546886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.700 [2024-07-14 10:44:51.546917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.700 qpair failed and we were unable to recover it. 00:36:06.700 [2024-07-14 10:44:51.547107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.700 [2024-07-14 10:44:51.547138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.700 qpair failed and we were unable to recover it. 00:36:06.700 [2024-07-14 10:44:51.547322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.700 [2024-07-14 10:44:51.547354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.700 qpair failed and we were unable to recover it. 00:36:06.700 [2024-07-14 10:44:51.547550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.700 [2024-07-14 10:44:51.547582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.700 qpair failed and we were unable to recover it. 00:36:06.700 [2024-07-14 10:44:51.547756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.700 [2024-07-14 10:44:51.547787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.700 qpair failed and we were unable to recover it. 00:36:06.700 [2024-07-14 10:44:51.548000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.700 [2024-07-14 10:44:51.548031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.700 qpair failed and we were unable to recover it. 00:36:06.700 [2024-07-14 10:44:51.548252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.700 [2024-07-14 10:44:51.548284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.700 qpair failed and we were unable to recover it. 00:36:06.700 [2024-07-14 10:44:51.548417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.700 [2024-07-14 10:44:51.548454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.700 qpair failed and we were unable to recover it. 00:36:06.700 [2024-07-14 10:44:51.548626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.700 [2024-07-14 10:44:51.548656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.700 qpair failed and we were unable to recover it. 
00:36:06.700 [2024-07-14 10:44:51.548925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.700 [2024-07-14 10:44:51.548956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.700 qpair failed and we were unable to recover it. 00:36:06.700 [2024-07-14 10:44:51.549095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.700 [2024-07-14 10:44:51.549125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.700 qpair failed and we were unable to recover it. 00:36:06.700 [2024-07-14 10:44:51.549317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.700 [2024-07-14 10:44:51.549350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.700 qpair failed and we were unable to recover it. 00:36:06.700 [2024-07-14 10:44:51.549524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.700 [2024-07-14 10:44:51.549555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.700 qpair failed and we were unable to recover it. 00:36:06.700 [2024-07-14 10:44:51.549800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.700 [2024-07-14 10:44:51.549830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.700 qpair failed and we were unable to recover it. 00:36:06.700 [2024-07-14 10:44:51.550020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.700 [2024-07-14 10:44:51.550051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.700 qpair failed and we were unable to recover it. 00:36:06.700 [2024-07-14 10:44:51.550242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.700 [2024-07-14 10:44:51.550273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.700 qpair failed and we were unable to recover it. 00:36:06.700 [2024-07-14 10:44:51.550401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.700 [2024-07-14 10:44:51.550431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.700 qpair failed and we were unable to recover it. 00:36:06.700 [2024-07-14 10:44:51.550628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.700 [2024-07-14 10:44:51.550659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.700 qpair failed and we were unable to recover it. 00:36:06.700 [2024-07-14 10:44:51.550900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.700 [2024-07-14 10:44:51.550931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.700 qpair failed and we were unable to recover it. 
00:36:06.700 [2024-07-14 10:44:51.551173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.700 [2024-07-14 10:44:51.551204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.700 qpair failed and we were unable to recover it. 00:36:06.700 [2024-07-14 10:44:51.551450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.700 [2024-07-14 10:44:51.551482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.700 qpair failed and we were unable to recover it. 00:36:06.700 [2024-07-14 10:44:51.551615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.700 [2024-07-14 10:44:51.551646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.700 qpair failed and we were unable to recover it. 00:36:06.700 [2024-07-14 10:44:51.551868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.700 [2024-07-14 10:44:51.551899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.700 qpair failed and we were unable to recover it. 00:36:06.700 [2024-07-14 10:44:51.552166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.700 [2024-07-14 10:44:51.552197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.700 qpair failed and we were unable to recover it. 00:36:06.700 [2024-07-14 10:44:51.552475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.701 [2024-07-14 10:44:51.552507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.701 qpair failed and we were unable to recover it. 00:36:06.701 [2024-07-14 10:44:51.552696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.701 [2024-07-14 10:44:51.552727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.701 qpair failed and we were unable to recover it. 00:36:06.701 [2024-07-14 10:44:51.552968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.701 [2024-07-14 10:44:51.552999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.701 qpair failed and we were unable to recover it. 00:36:06.701 [2024-07-14 10:44:51.553264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.701 [2024-07-14 10:44:51.553296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.701 qpair failed and we were unable to recover it. 00:36:06.701 [2024-07-14 10:44:51.553421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.701 [2024-07-14 10:44:51.553451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.701 qpair failed and we were unable to recover it. 
00:36:06.701 [2024-07-14 10:44:51.553575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.701 [2024-07-14 10:44:51.553606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.701 qpair failed and we were unable to recover it. 00:36:06.701 [2024-07-14 10:44:51.553781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.701 [2024-07-14 10:44:51.553812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.701 qpair failed and we were unable to recover it. 00:36:06.701 [2024-07-14 10:44:51.554018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.701 [2024-07-14 10:44:51.554049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.701 qpair failed and we were unable to recover it. 00:36:06.701 [2024-07-14 10:44:51.554191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.701 [2024-07-14 10:44:51.554222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.701 qpair failed and we were unable to recover it. 00:36:06.701 [2024-07-14 10:44:51.554405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.701 [2024-07-14 10:44:51.554436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.701 qpair failed and we were unable to recover it. 00:36:06.701 [2024-07-14 10:44:51.554687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.701 [2024-07-14 10:44:51.554718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.701 qpair failed and we were unable to recover it. 00:36:06.701 [2024-07-14 10:44:51.554895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.701 [2024-07-14 10:44:51.554925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.701 qpair failed and we were unable to recover it. 00:36:06.701 [2024-07-14 10:44:51.555046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.701 [2024-07-14 10:44:51.555078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.701 qpair failed and we were unable to recover it. 00:36:06.701 [2024-07-14 10:44:51.555347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.701 [2024-07-14 10:44:51.555379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.701 qpair failed and we were unable to recover it. 00:36:06.701 [2024-07-14 10:44:51.555582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.701 [2024-07-14 10:44:51.555613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.701 qpair failed and we were unable to recover it. 
00:36:06.701 [2024-07-14 10:44:51.555728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.701 [2024-07-14 10:44:51.555759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.701 qpair failed and we were unable to recover it. 00:36:06.701 [2024-07-14 10:44:51.555893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.701 [2024-07-14 10:44:51.555924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.701 qpair failed and we were unable to recover it. 00:36:06.701 [2024-07-14 10:44:51.556122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.701 [2024-07-14 10:44:51.556153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.701 qpair failed and we were unable to recover it. 00:36:06.701 [2024-07-14 10:44:51.556344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.701 [2024-07-14 10:44:51.556376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.701 qpair failed and we were unable to recover it. 00:36:06.701 [2024-07-14 10:44:51.556477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.701 [2024-07-14 10:44:51.556508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.701 qpair failed and we were unable to recover it. 00:36:06.701 [2024-07-14 10:44:51.556751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.701 [2024-07-14 10:44:51.556782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.701 qpair failed and we were unable to recover it. 00:36:06.701 [2024-07-14 10:44:51.556976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.701 [2024-07-14 10:44:51.557007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.701 qpair failed and we were unable to recover it. 00:36:06.701 [2024-07-14 10:44:51.557129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.701 [2024-07-14 10:44:51.557160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.701 qpair failed and we were unable to recover it. 00:36:06.701 [2024-07-14 10:44:51.557272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.701 [2024-07-14 10:44:51.557310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.701 qpair failed and we were unable to recover it. 00:36:06.701 [2024-07-14 10:44:51.557485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.701 [2024-07-14 10:44:51.557516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.701 qpair failed and we were unable to recover it. 
00:36:06.701 [2024-07-14 10:44:51.557641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.701 [2024-07-14 10:44:51.557671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.701 qpair failed and we were unable to recover it. 00:36:06.701 [2024-07-14 10:44:51.557776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.701 [2024-07-14 10:44:51.557807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.701 qpair failed and we were unable to recover it. 00:36:06.701 [2024-07-14 10:44:51.557995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.701 [2024-07-14 10:44:51.558026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.701 qpair failed and we were unable to recover it. 00:36:06.701 [2024-07-14 10:44:51.558213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.701 [2024-07-14 10:44:51.558251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.701 qpair failed and we were unable to recover it. 00:36:06.701 [2024-07-14 10:44:51.558429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.701 [2024-07-14 10:44:51.558461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.701 qpair failed and we were unable to recover it. 00:36:06.701 [2024-07-14 10:44:51.558658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.701 [2024-07-14 10:44:51.558689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.701 qpair failed and we were unable to recover it. 00:36:06.701 [2024-07-14 10:44:51.558932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.701 [2024-07-14 10:44:51.558962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.701 qpair failed and we were unable to recover it. 00:36:06.701 [2024-07-14 10:44:51.559090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.701 [2024-07-14 10:44:51.559121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.701 qpair failed and we were unable to recover it. 00:36:06.701 [2024-07-14 10:44:51.559250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.701 [2024-07-14 10:44:51.559282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.701 qpair failed and we were unable to recover it. 00:36:06.701 [2024-07-14 10:44:51.559407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.701 [2024-07-14 10:44:51.559439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.701 qpair failed and we were unable to recover it. 
00:36:06.701 [2024-07-14 10:44:51.559614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.701 [2024-07-14 10:44:51.559645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.701 qpair failed and we were unable to recover it. 00:36:06.701 [2024-07-14 10:44:51.559820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.701 [2024-07-14 10:44:51.559852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.701 qpair failed and we were unable to recover it. 00:36:06.701 [2024-07-14 10:44:51.560101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.701 [2024-07-14 10:44:51.560132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.701 qpair failed and we were unable to recover it. 00:36:06.701 [2024-07-14 10:44:51.560258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.701 [2024-07-14 10:44:51.560290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.701 qpair failed and we were unable to recover it. 00:36:06.701 [2024-07-14 10:44:51.560433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.701 [2024-07-14 10:44:51.560464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.701 qpair failed and we were unable to recover it. 00:36:06.701 [2024-07-14 10:44:51.560580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.701 [2024-07-14 10:44:51.560611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.701 qpair failed and we were unable to recover it. 00:36:06.701 [2024-07-14 10:44:51.560732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.702 [2024-07-14 10:44:51.560762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.702 qpair failed and we were unable to recover it. 00:36:06.702 [2024-07-14 10:44:51.560932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.702 [2024-07-14 10:44:51.560964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.702 qpair failed and we were unable to recover it. 00:36:06.702 [2024-07-14 10:44:51.561161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.702 [2024-07-14 10:44:51.561192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.702 qpair failed and we were unable to recover it. 00:36:06.702 [2024-07-14 10:44:51.561335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.702 [2024-07-14 10:44:51.561367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.702 qpair failed and we were unable to recover it. 
00:36:06.702 [2024-07-14 10:44:51.561545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.702 [2024-07-14 10:44:51.561578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.702 qpair failed and we were unable to recover it. 00:36:06.702 [2024-07-14 10:44:51.561723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.702 [2024-07-14 10:44:51.561754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.702 qpair failed and we were unable to recover it. 00:36:06.702 [2024-07-14 10:44:51.561871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.702 [2024-07-14 10:44:51.561902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.702 qpair failed and we were unable to recover it. 00:36:06.702 [2024-07-14 10:44:51.562159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.702 [2024-07-14 10:44:51.562190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.702 qpair failed and we were unable to recover it. 00:36:06.702 [2024-07-14 10:44:51.562391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.702 [2024-07-14 10:44:51.562423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.702 qpair failed and we were unable to recover it. 00:36:06.702 [2024-07-14 10:44:51.562557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.702 [2024-07-14 10:44:51.562587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.702 qpair failed and we were unable to recover it. 00:36:06.702 [2024-07-14 10:44:51.562710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.702 [2024-07-14 10:44:51.562740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.702 qpair failed and we were unable to recover it. 00:36:06.702 [2024-07-14 10:44:51.562917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.702 [2024-07-14 10:44:51.562948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.702 qpair failed and we were unable to recover it. 00:36:06.702 [2024-07-14 10:44:51.563206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.702 [2024-07-14 10:44:51.563247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.702 qpair failed and we were unable to recover it. 00:36:06.702 [2024-07-14 10:44:51.563425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.702 [2024-07-14 10:44:51.563456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.702 qpair failed and we were unable to recover it. 
00:36:06.702 [2024-07-14 10:44:51.563648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.702 [2024-07-14 10:44:51.563679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.702 qpair failed and we were unable to recover it. 00:36:06.702 [2024-07-14 10:44:51.563872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.702 [2024-07-14 10:44:51.563902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.702 qpair failed and we were unable to recover it. 00:36:06.702 [2024-07-14 10:44:51.564146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.702 [2024-07-14 10:44:51.564176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.702 qpair failed and we were unable to recover it. 00:36:06.702 [2024-07-14 10:44:51.564386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.702 [2024-07-14 10:44:51.564418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.702 qpair failed and we were unable to recover it. 00:36:06.702 [2024-07-14 10:44:51.564544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.702 [2024-07-14 10:44:51.564575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.702 qpair failed and we were unable to recover it. 00:36:06.702 [2024-07-14 10:44:51.564839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.702 [2024-07-14 10:44:51.564870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.702 qpair failed and we were unable to recover it. 00:36:06.702 [2024-07-14 10:44:51.565095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.702 [2024-07-14 10:44:51.565126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.702 qpair failed and we were unable to recover it. 00:36:06.702 [2024-07-14 10:44:51.565281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.702 [2024-07-14 10:44:51.565313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.702 qpair failed and we were unable to recover it. 00:36:06.702 [2024-07-14 10:44:51.565500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.702 [2024-07-14 10:44:51.565536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.702 qpair failed and we were unable to recover it. 00:36:06.702 [2024-07-14 10:44:51.565783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.702 [2024-07-14 10:44:51.565814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.702 qpair failed and we were unable to recover it. 
00:36:06.702 [2024-07-14 10:44:51.566021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.702 [2024-07-14 10:44:51.566052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.702 qpair failed and we were unable to recover it. 00:36:06.702 [2024-07-14 10:44:51.566241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.702 [2024-07-14 10:44:51.566273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.702 qpair failed and we were unable to recover it. 00:36:06.702 [2024-07-14 10:44:51.566537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.702 [2024-07-14 10:44:51.566569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.702 qpair failed and we were unable to recover it. 00:36:06.702 [2024-07-14 10:44:51.566686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.702 [2024-07-14 10:44:51.566716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.702 qpair failed and we were unable to recover it. 00:36:06.702 [2024-07-14 10:44:51.566908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.702 [2024-07-14 10:44:51.566939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.702 qpair failed and we were unable to recover it. 00:36:06.702 [2024-07-14 10:44:51.567146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.702 [2024-07-14 10:44:51.567177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.702 qpair failed and we were unable to recover it. 00:36:06.702 [2024-07-14 10:44:51.567380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.702 [2024-07-14 10:44:51.567412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.702 qpair failed and we were unable to recover it. 00:36:06.702 [2024-07-14 10:44:51.567606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.702 [2024-07-14 10:44:51.567638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.702 qpair failed and we were unable to recover it. 00:36:06.702 [2024-07-14 10:44:51.567815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.702 [2024-07-14 10:44:51.567847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.702 qpair failed and we were unable to recover it. 00:36:06.702 [2024-07-14 10:44:51.568110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.702 [2024-07-14 10:44:51.568141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.702 qpair failed and we were unable to recover it. 
00:36:06.702 [2024-07-14 10:44:51.568333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.702 [2024-07-14 10:44:51.568366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.702 qpair failed and we were unable to recover it. 00:36:06.702 [2024-07-14 10:44:51.568580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.702 [2024-07-14 10:44:51.568612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.703 qpair failed and we were unable to recover it. 00:36:06.703 [2024-07-14 10:44:51.568813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.703 [2024-07-14 10:44:51.568844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.703 qpair failed and we were unable to recover it. 00:36:06.703 [2024-07-14 10:44:51.569031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.703 [2024-07-14 10:44:51.569063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.703 qpair failed and we were unable to recover it. 00:36:06.703 [2024-07-14 10:44:51.569260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.703 [2024-07-14 10:44:51.569292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.703 qpair failed and we were unable to recover it. 00:36:06.703 [2024-07-14 10:44:51.569488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.703 [2024-07-14 10:44:51.569519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.703 qpair failed and we were unable to recover it. 00:36:06.703 [2024-07-14 10:44:51.569652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.703 [2024-07-14 10:44:51.569683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.703 qpair failed and we were unable to recover it. 00:36:06.703 [2024-07-14 10:44:51.569899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.703 [2024-07-14 10:44:51.569930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.703 qpair failed and we were unable to recover it. 00:36:06.703 [2024-07-14 10:44:51.570188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.703 [2024-07-14 10:44:51.570218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.703 qpair failed and we were unable to recover it. 00:36:06.703 [2024-07-14 10:44:51.570353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.703 [2024-07-14 10:44:51.570384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.703 qpair failed and we were unable to recover it. 
00:36:06.703 [2024-07-14 10:44:51.570520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.703 [2024-07-14 10:44:51.570551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.703 qpair failed and we were unable to recover it. 00:36:06.703 [2024-07-14 10:44:51.570696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.703 [2024-07-14 10:44:51.570727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.703 qpair failed and we were unable to recover it. 00:36:06.703 [2024-07-14 10:44:51.570979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.703 [2024-07-14 10:44:51.571010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.703 qpair failed and we were unable to recover it. 00:36:06.703 [2024-07-14 10:44:51.571136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.703 [2024-07-14 10:44:51.571167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.703 qpair failed and we were unable to recover it. 00:36:06.703 [2024-07-14 10:44:51.571307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.703 [2024-07-14 10:44:51.571338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.703 qpair failed and we were unable to recover it. 00:36:06.703 [2024-07-14 10:44:51.571606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.703 [2024-07-14 10:44:51.571637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.703 qpair failed and we were unable to recover it. 00:36:06.703 [2024-07-14 10:44:51.571743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.703 [2024-07-14 10:44:51.571774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.703 qpair failed and we were unable to recover it. 00:36:06.703 [2024-07-14 10:44:51.572052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.703 [2024-07-14 10:44:51.572083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.703 qpair failed and we were unable to recover it. 00:36:06.703 [2024-07-14 10:44:51.572348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.703 [2024-07-14 10:44:51.572380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.703 qpair failed and we were unable to recover it. 00:36:06.703 [2024-07-14 10:44:51.572506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.703 [2024-07-14 10:44:51.572537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.703 qpair failed and we were unable to recover it. 
00:36:06.703 [2024-07-14 10:44:51.572646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.703 [2024-07-14 10:44:51.572677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.703 qpair failed and we were unable to recover it. 00:36:06.703 [2024-07-14 10:44:51.572796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.703 [2024-07-14 10:44:51.572826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.703 qpair failed and we were unable to recover it. 00:36:06.703 [2024-07-14 10:44:51.572950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.703 [2024-07-14 10:44:51.572981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.703 qpair failed and we were unable to recover it. 00:36:06.703 [2024-07-14 10:44:51.573173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.703 [2024-07-14 10:44:51.573203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.703 qpair failed and we were unable to recover it. 00:36:06.703 [2024-07-14 10:44:51.573408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.703 [2024-07-14 10:44:51.573440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.703 qpair failed and we were unable to recover it. 00:36:06.703 [2024-07-14 10:44:51.573567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.703 [2024-07-14 10:44:51.573598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.703 qpair failed and we were unable to recover it. 00:36:06.703 [2024-07-14 10:44:51.573863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.703 [2024-07-14 10:44:51.573894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.703 qpair failed and we were unable to recover it. 00:36:06.703 [2024-07-14 10:44:51.574092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.703 [2024-07-14 10:44:51.574123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.703 qpair failed and we were unable to recover it. 00:36:06.703 [2024-07-14 10:44:51.574246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.703 [2024-07-14 10:44:51.574285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.703 qpair failed and we were unable to recover it. 00:36:06.703 [2024-07-14 10:44:51.574539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.703 [2024-07-14 10:44:51.574570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.703 qpair failed and we were unable to recover it. 
00:36:06.703 [2024-07-14 10:44:51.574748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.703 [2024-07-14 10:44:51.574778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.703 qpair failed and we were unable to recover it. 00:36:06.703 [2024-07-14 10:44:51.574964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.703 [2024-07-14 10:44:51.574996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.703 qpair failed and we were unable to recover it. 00:36:06.703 [2024-07-14 10:44:51.575172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.703 [2024-07-14 10:44:51.575204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.703 qpair failed and we were unable to recover it. 00:36:06.703 [2024-07-14 10:44:51.575420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.703 [2024-07-14 10:44:51.575453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.703 qpair failed and we were unable to recover it. 00:36:06.703 [2024-07-14 10:44:51.575592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.703 [2024-07-14 10:44:51.575624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.703 qpair failed and we were unable to recover it. 00:36:06.703 [2024-07-14 10:44:51.575779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.703 [2024-07-14 10:44:51.575809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.703 qpair failed and we were unable to recover it. 00:36:06.703 [2024-07-14 10:44:51.575931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.703 [2024-07-14 10:44:51.575961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.703 qpair failed and we were unable to recover it. 00:36:06.703 [2024-07-14 10:44:51.576181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.703 [2024-07-14 10:44:51.576212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.703 qpair failed and we were unable to recover it. 00:36:06.703 [2024-07-14 10:44:51.576427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.703 [2024-07-14 10:44:51.576459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.703 qpair failed and we were unable to recover it. 00:36:06.703 [2024-07-14 10:44:51.576638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.703 [2024-07-14 10:44:51.576669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.703 qpair failed and we were unable to recover it. 
00:36:06.703 [2024-07-14 10:44:51.576913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.703 [2024-07-14 10:44:51.576943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.703 qpair failed and we were unable to recover it. 00:36:06.703 [2024-07-14 10:44:51.577120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.703 [2024-07-14 10:44:51.577150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.703 qpair failed and we were unable to recover it. 00:36:06.703 [2024-07-14 10:44:51.577401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.703 [2024-07-14 10:44:51.577434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.704 qpair failed and we were unable to recover it. 00:36:06.704 [2024-07-14 10:44:51.577637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.704 [2024-07-14 10:44:51.577667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.704 qpair failed and we were unable to recover it. 00:36:06.704 [2024-07-14 10:44:51.577842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.704 [2024-07-14 10:44:51.577873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.704 qpair failed and we were unable to recover it. 00:36:06.704 [2024-07-14 10:44:51.578074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.704 [2024-07-14 10:44:51.578104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.704 qpair failed and we were unable to recover it. 00:36:06.704 [2024-07-14 10:44:51.578220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.704 [2024-07-14 10:44:51.578261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.704 qpair failed and we were unable to recover it. 00:36:06.704 [2024-07-14 10:44:51.578455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.704 [2024-07-14 10:44:51.578486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.704 qpair failed and we were unable to recover it. 00:36:06.704 [2024-07-14 10:44:51.578623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.704 [2024-07-14 10:44:51.578654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.704 qpair failed and we were unable to recover it. 00:36:06.704 [2024-07-14 10:44:51.578895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.704 [2024-07-14 10:44:51.578926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.704 qpair failed and we were unable to recover it. 
00:36:06.704 [2024-07-14 10:44:51.579045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.704 [2024-07-14 10:44:51.579076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.704 qpair failed and we were unable to recover it. 00:36:06.704 [2024-07-14 10:44:51.579269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.704 [2024-07-14 10:44:51.579302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.704 qpair failed and we were unable to recover it. 00:36:06.704 [2024-07-14 10:44:51.579499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.704 [2024-07-14 10:44:51.579531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.704 qpair failed and we were unable to recover it. 00:36:06.704 [2024-07-14 10:44:51.579657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.704 [2024-07-14 10:44:51.579687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.704 qpair failed and we were unable to recover it. 00:36:06.704 [2024-07-14 10:44:51.579866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.704 [2024-07-14 10:44:51.579897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.704 qpair failed and we were unable to recover it. 00:36:06.704 [2024-07-14 10:44:51.580029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.704 [2024-07-14 10:44:51.580060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.704 qpair failed and we were unable to recover it. 00:36:06.704 [2024-07-14 10:44:51.580308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.704 [2024-07-14 10:44:51.580339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.704 qpair failed and we were unable to recover it. 00:36:06.704 [2024-07-14 10:44:51.580509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.704 [2024-07-14 10:44:51.580540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.704 qpair failed and we were unable to recover it. 00:36:06.704 [2024-07-14 10:44:51.580679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.704 [2024-07-14 10:44:51.580710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.704 qpair failed and we were unable to recover it. 00:36:06.704 [2024-07-14 10:44:51.580825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.704 [2024-07-14 10:44:51.580856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.704 qpair failed and we were unable to recover it. 
00:36:06.704 [2024-07-14 10:44:51.580965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.704 [2024-07-14 10:44:51.580996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.704 qpair failed and we were unable to recover it. 00:36:06.704 [2024-07-14 10:44:51.581107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.704 [2024-07-14 10:44:51.581137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.704 qpair failed and we were unable to recover it. 00:36:06.704 [2024-07-14 10:44:51.581242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.704 [2024-07-14 10:44:51.581274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.704 qpair failed and we were unable to recover it. 00:36:06.704 [2024-07-14 10:44:51.581463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.704 [2024-07-14 10:44:51.581494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.704 qpair failed and we were unable to recover it. 00:36:06.704 [2024-07-14 10:44:51.581673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.704 [2024-07-14 10:44:51.581705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.704 qpair failed and we were unable to recover it. 00:36:06.704 [2024-07-14 10:44:51.581834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.704 [2024-07-14 10:44:51.581864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.704 qpair failed and we were unable to recover it. 00:36:06.704 [2024-07-14 10:44:51.582051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.704 [2024-07-14 10:44:51.582081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.704 qpair failed and we were unable to recover it. 00:36:06.704 [2024-07-14 10:44:51.582257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.704 [2024-07-14 10:44:51.582288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.704 qpair failed and we were unable to recover it. 00:36:06.704 [2024-07-14 10:44:51.582480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.704 [2024-07-14 10:44:51.582515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.704 qpair failed and we were unable to recover it. 00:36:06.704 [2024-07-14 10:44:51.582690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.704 [2024-07-14 10:44:51.582721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.704 qpair failed and we were unable to recover it. 
00:36:06.704 [2024-07-14 10:44:51.582878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.704 [2024-07-14 10:44:51.582909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.704 qpair failed and we were unable to recover it. 00:36:06.704 [2024-07-14 10:44:51.583151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.704 [2024-07-14 10:44:51.583182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.704 qpair failed and we were unable to recover it. 00:36:06.704 [2024-07-14 10:44:51.583406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.704 [2024-07-14 10:44:51.583438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.704 qpair failed and we were unable to recover it. 00:36:06.704 [2024-07-14 10:44:51.583550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.704 [2024-07-14 10:44:51.583580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.704 qpair failed and we were unable to recover it. 00:36:06.704 [2024-07-14 10:44:51.583769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.704 [2024-07-14 10:44:51.583800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.704 qpair failed and we were unable to recover it. 00:36:06.704 [2024-07-14 10:44:51.583917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.704 [2024-07-14 10:44:51.583948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.704 qpair failed and we were unable to recover it. 00:36:06.704 [2024-07-14 10:44:51.584068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.704 [2024-07-14 10:44:51.584099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.704 qpair failed and we were unable to recover it. 00:36:06.704 [2024-07-14 10:44:51.584237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.704 [2024-07-14 10:44:51.584270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.704 qpair failed and we were unable to recover it. 00:36:06.704 [2024-07-14 10:44:51.584391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.704 [2024-07-14 10:44:51.584421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.704 qpair failed and we were unable to recover it. 00:36:06.704 [2024-07-14 10:44:51.584615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.704 [2024-07-14 10:44:51.584645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.704 qpair failed and we were unable to recover it. 
00:36:06.704 [2024-07-14 10:44:51.584852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.704 [2024-07-14 10:44:51.584883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.704 qpair failed and we were unable to recover it. 00:36:06.704 [2024-07-14 10:44:51.585006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.704 [2024-07-14 10:44:51.585036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.704 qpair failed and we were unable to recover it. 00:36:06.704 [2024-07-14 10:44:51.585235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.704 [2024-07-14 10:44:51.585268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.704 qpair failed and we were unable to recover it. 00:36:06.704 [2024-07-14 10:44:51.585450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.705 [2024-07-14 10:44:51.585482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.705 qpair failed and we were unable to recover it. 00:36:06.705 [2024-07-14 10:44:51.585721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.705 [2024-07-14 10:44:51.585751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.705 qpair failed and we were unable to recover it. 00:36:06.705 [2024-07-14 10:44:51.585943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.705 [2024-07-14 10:44:51.585974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.705 qpair failed and we were unable to recover it. 00:36:06.705 [2024-07-14 10:44:51.586106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.705 [2024-07-14 10:44:51.586137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.705 qpair failed and we were unable to recover it. 00:36:06.705 [2024-07-14 10:44:51.586335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.705 [2024-07-14 10:44:51.586368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.705 qpair failed and we were unable to recover it. 00:36:06.705 [2024-07-14 10:44:51.586554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.705 [2024-07-14 10:44:51.586585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.705 qpair failed and we were unable to recover it. 00:36:06.705 [2024-07-14 10:44:51.586773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.705 [2024-07-14 10:44:51.586804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.705 qpair failed and we were unable to recover it. 
00:36:06.705 [2024-07-14 10:44:51.586992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.705 [2024-07-14 10:44:51.587023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.705 qpair failed and we were unable to recover it. 00:36:06.705 [2024-07-14 10:44:51.587142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.705 [2024-07-14 10:44:51.587173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.705 qpair failed and we were unable to recover it. 00:36:06.705 [2024-07-14 10:44:51.587364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.705 [2024-07-14 10:44:51.587396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.705 qpair failed and we were unable to recover it. 00:36:06.705 [2024-07-14 10:44:51.587663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.705 [2024-07-14 10:44:51.587694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.705 qpair failed and we were unable to recover it. 00:36:06.705 [2024-07-14 10:44:51.587938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.705 [2024-07-14 10:44:51.587969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.705 qpair failed and we were unable to recover it. 00:36:06.705 [2024-07-14 10:44:51.588180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.705 [2024-07-14 10:44:51.588211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.705 qpair failed and we were unable to recover it. 00:36:06.705 [2024-07-14 10:44:51.588403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.705 [2024-07-14 10:44:51.588435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.705 qpair failed and we were unable to recover it. 00:36:06.705 [2024-07-14 10:44:51.588612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.705 [2024-07-14 10:44:51.588642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.705 qpair failed and we were unable to recover it. 00:36:06.705 [2024-07-14 10:44:51.588749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.705 [2024-07-14 10:44:51.588780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.705 qpair failed and we were unable to recover it. 00:36:06.705 [2024-07-14 10:44:51.588998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.705 [2024-07-14 10:44:51.589029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.705 qpair failed and we were unable to recover it. 
00:36:06.705 [2024-07-14 10:44:51.589209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.705 [2024-07-14 10:44:51.589247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.705 qpair failed and we were unable to recover it. 00:36:06.705 [2024-07-14 10:44:51.589440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.705 [2024-07-14 10:44:51.589471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.705 qpair failed and we were unable to recover it. 00:36:06.705 [2024-07-14 10:44:51.589642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.705 [2024-07-14 10:44:51.589673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.705 qpair failed and we were unable to recover it. 00:36:06.705 [2024-07-14 10:44:51.589854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.705 [2024-07-14 10:44:51.589885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.705 qpair failed and we were unable to recover it. 00:36:06.705 [2024-07-14 10:44:51.590015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.705 [2024-07-14 10:44:51.590046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.705 qpair failed and we were unable to recover it. 00:36:06.705 [2024-07-14 10:44:51.590239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.705 [2024-07-14 10:44:51.590270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.705 qpair failed and we were unable to recover it. 00:36:06.705 [2024-07-14 10:44:51.590542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.705 [2024-07-14 10:44:51.590573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.705 qpair failed and we were unable to recover it. 00:36:06.705 [2024-07-14 10:44:51.590814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.705 [2024-07-14 10:44:51.590844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.705 qpair failed and we were unable to recover it. 00:36:06.705 [2024-07-14 10:44:51.591024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.705 [2024-07-14 10:44:51.591059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.705 qpair failed and we were unable to recover it. 00:36:06.705 [2024-07-14 10:44:51.591254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.705 [2024-07-14 10:44:51.591287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.705 qpair failed and we were unable to recover it. 
00:36:06.705 [2024-07-14 10:44:51.591478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.705 [2024-07-14 10:44:51.591510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.705 qpair failed and we were unable to recover it. 00:36:06.705 [2024-07-14 10:44:51.591699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.705 [2024-07-14 10:44:51.591729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.705 qpair failed and we were unable to recover it. 00:36:06.705 [2024-07-14 10:44:51.591916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.705 [2024-07-14 10:44:51.591948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.705 qpair failed and we were unable to recover it. 00:36:06.705 [2024-07-14 10:44:51.592212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.705 [2024-07-14 10:44:51.592252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.705 qpair failed and we were unable to recover it. 00:36:06.705 [2024-07-14 10:44:51.592392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.705 [2024-07-14 10:44:51.592423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.705 qpair failed and we were unable to recover it. 00:36:06.705 [2024-07-14 10:44:51.592611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.705 [2024-07-14 10:44:51.592641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.705 qpair failed and we were unable to recover it. 00:36:06.705 [2024-07-14 10:44:51.592854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.705 [2024-07-14 10:44:51.592885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.705 qpair failed and we were unable to recover it. 00:36:06.705 [2024-07-14 10:44:51.593077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.705 [2024-07-14 10:44:51.593109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.705 qpair failed and we were unable to recover it. 00:36:06.705 [2024-07-14 10:44:51.593245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.705 [2024-07-14 10:44:51.593277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.705 qpair failed and we were unable to recover it. 00:36:06.705 [2024-07-14 10:44:51.593460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.705 [2024-07-14 10:44:51.593491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.705 qpair failed and we were unable to recover it. 
00:36:06.707 [2024-07-14 10:44:51.604540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.707 [2024-07-14 10:44:51.604571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420
00:36:06.707 qpair failed and we were unable to recover it.
00:36:06.707 [2024-07-14 10:44:51.604745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.707 [2024-07-14 10:44:51.604776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420
00:36:06.707 qpair failed and we were unable to recover it.
00:36:06.707 [2024-07-14 10:44:51.605018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.707 [2024-07-14 10:44:51.605049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420
00:36:06.707 qpair failed and we were unable to recover it.
00:36:06.707 [2024-07-14 10:44:51.605338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.707 [2024-07-14 10:44:51.605369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420
00:36:06.707 qpair failed and we were unable to recover it.
00:36:06.707 [2024-07-14 10:44:51.605637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.707 [2024-07-14 10:44:51.605668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420
00:36:06.707 qpair failed and we were unable to recover it.
00:36:06.707 [2024-07-14 10:44:51.605988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.707 [2024-07-14 10:44:51.606059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420
00:36:06.707 qpair failed and we were unable to recover it.
00:36:06.707 [2024-07-14 10:44:51.606306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.707 [2024-07-14 10:44:51.606348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420
00:36:06.707 qpair failed and we were unable to recover it.
00:36:06.707 [2024-07-14 10:44:51.606526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.707 [2024-07-14 10:44:51.606558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420
00:36:06.707 qpair failed and we were unable to recover it.
00:36:06.707 [2024-07-14 10:44:51.606820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.707 [2024-07-14 10:44:51.606851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420
00:36:06.707 qpair failed and we were unable to recover it.
00:36:06.707 [2024-07-14 10:44:51.606988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.707 [2024-07-14 10:44:51.607019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420
00:36:06.707 qpair failed and we were unable to recover it.
00:36:06.990 [2024-07-14 10:44:51.641335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.990 [2024-07-14 10:44:51.641366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.990 qpair failed and we were unable to recover it. 00:36:06.990 [2024-07-14 10:44:51.641477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.990 [2024-07-14 10:44:51.641507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.990 qpair failed and we were unable to recover it. 00:36:06.990 [2024-07-14 10:44:51.641638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.990 [2024-07-14 10:44:51.641674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.990 qpair failed and we were unable to recover it. 00:36:06.990 [2024-07-14 10:44:51.641928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.990 [2024-07-14 10:44:51.641959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.990 qpair failed and we were unable to recover it. 00:36:06.990 [2024-07-14 10:44:51.642140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.990 [2024-07-14 10:44:51.642170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.990 qpair failed and we were unable to recover it. 00:36:06.990 [2024-07-14 10:44:51.642353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.990 [2024-07-14 10:44:51.642384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.990 qpair failed and we were unable to recover it. 00:36:06.990 [2024-07-14 10:44:51.642491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.990 [2024-07-14 10:44:51.642521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.990 qpair failed and we were unable to recover it. 00:36:06.990 [2024-07-14 10:44:51.642650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.990 [2024-07-14 10:44:51.642680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.990 qpair failed and we were unable to recover it. 00:36:06.990 [2024-07-14 10:44:51.642805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.990 [2024-07-14 10:44:51.642836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.990 qpair failed and we were unable to recover it. 00:36:06.990 [2024-07-14 10:44:51.643103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.990 [2024-07-14 10:44:51.643134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.990 qpair failed and we were unable to recover it. 
00:36:06.990 [2024-07-14 10:44:51.643327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.990 [2024-07-14 10:44:51.643359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.990 qpair failed and we were unable to recover it. 00:36:06.990 [2024-07-14 10:44:51.643506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.990 [2024-07-14 10:44:51.643538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.990 qpair failed and we were unable to recover it. 00:36:06.990 [2024-07-14 10:44:51.643712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.990 [2024-07-14 10:44:51.643742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.990 qpair failed and we were unable to recover it. 00:36:06.990 [2024-07-14 10:44:51.643923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.990 [2024-07-14 10:44:51.643954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.990 qpair failed and we were unable to recover it. 00:36:06.990 [2024-07-14 10:44:51.644148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.990 [2024-07-14 10:44:51.644179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.990 qpair failed and we were unable to recover it. 00:36:06.990 [2024-07-14 10:44:51.644429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.990 [2024-07-14 10:44:51.644461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.990 qpair failed and we were unable to recover it. 00:36:06.990 [2024-07-14 10:44:51.644709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.990 [2024-07-14 10:44:51.644740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.990 qpair failed and we were unable to recover it. 00:36:06.990 [2024-07-14 10:44:51.644865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.990 [2024-07-14 10:44:51.644896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.990 qpair failed and we were unable to recover it. 00:36:06.990 [2024-07-14 10:44:51.645026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.990 [2024-07-14 10:44:51.645056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.990 qpair failed and we were unable to recover it. 00:36:06.990 [2024-07-14 10:44:51.645268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.990 [2024-07-14 10:44:51.645300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.990 qpair failed and we were unable to recover it. 
00:36:06.990 [2024-07-14 10:44:51.645517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.990 [2024-07-14 10:44:51.645548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.990 qpair failed and we were unable to recover it. 00:36:06.990 [2024-07-14 10:44:51.645685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.990 [2024-07-14 10:44:51.645715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.990 qpair failed and we were unable to recover it. 00:36:06.990 [2024-07-14 10:44:51.645902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.990 [2024-07-14 10:44:51.645933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.990 qpair failed and we were unable to recover it. 00:36:06.990 [2024-07-14 10:44:51.646108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.990 [2024-07-14 10:44:51.646138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.990 qpair failed and we were unable to recover it. 00:36:06.990 [2024-07-14 10:44:51.646384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.990 [2024-07-14 10:44:51.646416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.990 qpair failed and we were unable to recover it. 00:36:06.990 [2024-07-14 10:44:51.646590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.990 [2024-07-14 10:44:51.646620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.990 qpair failed and we were unable to recover it. 00:36:06.990 [2024-07-14 10:44:51.646860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.990 [2024-07-14 10:44:51.646890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.990 qpair failed and we were unable to recover it. 00:36:06.990 [2024-07-14 10:44:51.647149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.990 [2024-07-14 10:44:51.647180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.990 qpair failed and we were unable to recover it. 00:36:06.990 [2024-07-14 10:44:51.647470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.990 [2024-07-14 10:44:51.647502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.990 qpair failed and we were unable to recover it. 00:36:06.990 [2024-07-14 10:44:51.647629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.990 [2024-07-14 10:44:51.647661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.990 qpair failed and we were unable to recover it. 
00:36:06.990 [2024-07-14 10:44:51.647875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.990 [2024-07-14 10:44:51.647905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.990 qpair failed and we were unable to recover it. 00:36:06.990 [2024-07-14 10:44:51.648037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.990 [2024-07-14 10:44:51.648068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.990 qpair failed and we were unable to recover it. 00:36:06.990 [2024-07-14 10:44:51.648265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.990 [2024-07-14 10:44:51.648297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.990 qpair failed and we were unable to recover it. 00:36:06.990 [2024-07-14 10:44:51.648486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.990 [2024-07-14 10:44:51.648517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.990 qpair failed and we were unable to recover it. 00:36:06.990 [2024-07-14 10:44:51.648696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.990 [2024-07-14 10:44:51.648726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.990 qpair failed and we were unable to recover it. 00:36:06.990 [2024-07-14 10:44:51.648906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.990 [2024-07-14 10:44:51.648937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.990 qpair failed and we were unable to recover it. 00:36:06.990 [2024-07-14 10:44:51.649177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.990 [2024-07-14 10:44:51.649208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.990 qpair failed and we were unable to recover it. 00:36:06.990 [2024-07-14 10:44:51.649406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.990 [2024-07-14 10:44:51.649437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.990 qpair failed and we were unable to recover it. 00:36:06.990 [2024-07-14 10:44:51.649560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.990 [2024-07-14 10:44:51.649590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.990 qpair failed and we were unable to recover it. 00:36:06.990 [2024-07-14 10:44:51.649722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.991 [2024-07-14 10:44:51.649752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.991 qpair failed and we were unable to recover it. 
00:36:06.991 [2024-07-14 10:44:51.650018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.991 [2024-07-14 10:44:51.650049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.991 qpair failed and we were unable to recover it. 00:36:06.991 [2024-07-14 10:44:51.650237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.991 [2024-07-14 10:44:51.650269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.991 qpair failed and we were unable to recover it. 00:36:06.991 [2024-07-14 10:44:51.650449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.991 [2024-07-14 10:44:51.650484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.991 qpair failed and we were unable to recover it. 00:36:06.991 [2024-07-14 10:44:51.650663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.991 [2024-07-14 10:44:51.650693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.991 qpair failed and we were unable to recover it. 00:36:06.991 [2024-07-14 10:44:51.650871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.991 [2024-07-14 10:44:51.650901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.991 qpair failed and we were unable to recover it. 00:36:06.991 [2024-07-14 10:44:51.651042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.991 [2024-07-14 10:44:51.651074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.991 qpair failed and we were unable to recover it. 00:36:06.991 [2024-07-14 10:44:51.651321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.991 [2024-07-14 10:44:51.651353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.991 qpair failed and we were unable to recover it. 00:36:06.991 [2024-07-14 10:44:51.651532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.991 [2024-07-14 10:44:51.651563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.991 qpair failed and we were unable to recover it. 00:36:06.991 [2024-07-14 10:44:51.651693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.991 [2024-07-14 10:44:51.651724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.991 qpair failed and we were unable to recover it. 00:36:06.991 [2024-07-14 10:44:51.651993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.991 [2024-07-14 10:44:51.652024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.991 qpair failed and we were unable to recover it. 
00:36:06.991 [2024-07-14 10:44:51.652140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.991 [2024-07-14 10:44:51.652171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.991 qpair failed and we were unable to recover it. 00:36:06.991 [2024-07-14 10:44:51.652319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.991 [2024-07-14 10:44:51.652351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.991 qpair failed and we were unable to recover it. 00:36:06.991 [2024-07-14 10:44:51.652482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.991 [2024-07-14 10:44:51.652512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.991 qpair failed and we were unable to recover it. 00:36:06.991 [2024-07-14 10:44:51.652714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.991 [2024-07-14 10:44:51.652744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.991 qpair failed and we were unable to recover it. 00:36:06.991 [2024-07-14 10:44:51.652886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.991 [2024-07-14 10:44:51.652916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.991 qpair failed and we were unable to recover it. 00:36:06.991 [2024-07-14 10:44:51.653089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.991 [2024-07-14 10:44:51.653120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.991 qpair failed and we were unable to recover it. 00:36:06.991 [2024-07-14 10:44:51.653317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.991 [2024-07-14 10:44:51.653349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.991 qpair failed and we were unable to recover it. 00:36:06.991 [2024-07-14 10:44:51.653593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.991 [2024-07-14 10:44:51.653624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.991 qpair failed and we were unable to recover it. 00:36:06.991 [2024-07-14 10:44:51.653811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.991 [2024-07-14 10:44:51.653841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.991 qpair failed and we were unable to recover it. 00:36:06.991 [2024-07-14 10:44:51.653977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.991 [2024-07-14 10:44:51.654008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.991 qpair failed and we were unable to recover it. 
00:36:06.991 [2024-07-14 10:44:51.654181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.991 [2024-07-14 10:44:51.654211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.991 qpair failed and we were unable to recover it. 00:36:06.991 [2024-07-14 10:44:51.654341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.991 [2024-07-14 10:44:51.654371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.991 qpair failed and we were unable to recover it. 00:36:06.991 [2024-07-14 10:44:51.654616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.991 [2024-07-14 10:44:51.654646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.991 qpair failed and we were unable to recover it. 00:36:06.991 [2024-07-14 10:44:51.654846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.991 [2024-07-14 10:44:51.654877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.991 qpair failed and we were unable to recover it. 00:36:06.991 [2024-07-14 10:44:51.655011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.991 [2024-07-14 10:44:51.655042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.991 qpair failed and we were unable to recover it. 00:36:06.991 [2024-07-14 10:44:51.655256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.991 [2024-07-14 10:44:51.655288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.991 qpair failed and we were unable to recover it. 00:36:06.991 [2024-07-14 10:44:51.655424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.991 [2024-07-14 10:44:51.655454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.991 qpair failed and we were unable to recover it. 00:36:06.991 [2024-07-14 10:44:51.655577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.991 [2024-07-14 10:44:51.655608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.991 qpair failed and we were unable to recover it. 00:36:06.991 [2024-07-14 10:44:51.655849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.991 [2024-07-14 10:44:51.655879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.991 qpair failed and we were unable to recover it. 00:36:06.991 [2024-07-14 10:44:51.656085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.991 [2024-07-14 10:44:51.656116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.991 qpair failed and we were unable to recover it. 
00:36:06.991 [2024-07-14 10:44:51.656313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.991 [2024-07-14 10:44:51.656345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.991 qpair failed and we were unable to recover it. 00:36:06.991 [2024-07-14 10:44:51.656521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.991 [2024-07-14 10:44:51.656551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.991 qpair failed and we were unable to recover it. 00:36:06.991 [2024-07-14 10:44:51.656675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.991 [2024-07-14 10:44:51.656706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.991 qpair failed and we were unable to recover it. 00:36:06.991 [2024-07-14 10:44:51.656917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.991 [2024-07-14 10:44:51.656948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.991 qpair failed and we were unable to recover it. 00:36:06.991 [2024-07-14 10:44:51.657191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.991 [2024-07-14 10:44:51.657221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.991 qpair failed and we were unable to recover it. 00:36:06.991 [2024-07-14 10:44:51.657355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.991 [2024-07-14 10:44:51.657386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.991 qpair failed and we were unable to recover it. 00:36:06.991 [2024-07-14 10:44:51.657666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.991 [2024-07-14 10:44:51.657696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.991 qpair failed and we were unable to recover it. 00:36:06.991 [2024-07-14 10:44:51.657938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.991 [2024-07-14 10:44:51.657969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.991 qpair failed and we were unable to recover it. 00:36:06.991 [2024-07-14 10:44:51.658094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.991 [2024-07-14 10:44:51.658125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.991 qpair failed and we were unable to recover it. 00:36:06.991 [2024-07-14 10:44:51.658316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.991 [2024-07-14 10:44:51.658348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.991 qpair failed and we were unable to recover it. 
00:36:06.991 [2024-07-14 10:44:51.658485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.991 [2024-07-14 10:44:51.658516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.991 qpair failed and we were unable to recover it. 00:36:06.991 [2024-07-14 10:44:51.658693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.991 [2024-07-14 10:44:51.658723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.991 qpair failed and we were unable to recover it. 00:36:06.991 [2024-07-14 10:44:51.658856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.991 [2024-07-14 10:44:51.658892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.992 qpair failed and we were unable to recover it. 00:36:06.992 [2024-07-14 10:44:51.659094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.992 [2024-07-14 10:44:51.659125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.992 qpair failed and we were unable to recover it. 00:36:06.992 [2024-07-14 10:44:51.659249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.992 [2024-07-14 10:44:51.659281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.992 qpair failed and we were unable to recover it. 00:36:06.992 [2024-07-14 10:44:51.659487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.992 [2024-07-14 10:44:51.659517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.992 qpair failed and we were unable to recover it. 00:36:06.992 [2024-07-14 10:44:51.659640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.992 [2024-07-14 10:44:51.659670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.992 qpair failed and we were unable to recover it. 00:36:06.992 [2024-07-14 10:44:51.659866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.992 [2024-07-14 10:44:51.659896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.992 qpair failed and we were unable to recover it. 00:36:06.992 [2024-07-14 10:44:51.660069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.992 [2024-07-14 10:44:51.660100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.992 qpair failed and we were unable to recover it. 00:36:06.992 [2024-07-14 10:44:51.660277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.992 [2024-07-14 10:44:51.660309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.992 qpair failed and we were unable to recover it. 
00:36:06.992 [2024-07-14 10:44:51.660482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.992 [2024-07-14 10:44:51.660512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.992 qpair failed and we were unable to recover it. 00:36:06.992 [2024-07-14 10:44:51.660707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.992 [2024-07-14 10:44:51.660738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.992 qpair failed and we were unable to recover it. 00:36:06.992 [2024-07-14 10:44:51.660982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.992 [2024-07-14 10:44:51.661014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.992 qpair failed and we were unable to recover it. 00:36:06.992 [2024-07-14 10:44:51.661200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.992 [2024-07-14 10:44:51.661238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.992 qpair failed and we were unable to recover it. 00:36:06.992 [2024-07-14 10:44:51.661417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.992 [2024-07-14 10:44:51.661448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.992 qpair failed and we were unable to recover it. 00:36:06.992 [2024-07-14 10:44:51.661637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.992 [2024-07-14 10:44:51.661668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.992 qpair failed and we were unable to recover it. 00:36:06.992 [2024-07-14 10:44:51.661944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.992 [2024-07-14 10:44:51.661974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.992 qpair failed and we were unable to recover it. 00:36:06.992 [2024-07-14 10:44:51.662109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.992 [2024-07-14 10:44:51.662140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.992 qpair failed and we were unable to recover it. 00:36:06.992 [2024-07-14 10:44:51.662263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.992 [2024-07-14 10:44:51.662296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.992 qpair failed and we were unable to recover it. 00:36:06.992 [2024-07-14 10:44:51.662499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.992 [2024-07-14 10:44:51.662530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.992 qpair failed and we were unable to recover it. 
00:36:06.992 [2024-07-14 10:44:51.662731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.992 [2024-07-14 10:44:51.662762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.992 qpair failed and we were unable to recover it. 00:36:06.992 [2024-07-14 10:44:51.662953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.992 [2024-07-14 10:44:51.662984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.992 qpair failed and we were unable to recover it. 00:36:06.992 [2024-07-14 10:44:51.663096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.992 [2024-07-14 10:44:51.663126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.992 qpair failed and we were unable to recover it. 00:36:06.992 [2024-07-14 10:44:51.663316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.992 [2024-07-14 10:44:51.663347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.992 qpair failed and we were unable to recover it. 00:36:06.992 [2024-07-14 10:44:51.663639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.992 [2024-07-14 10:44:51.663670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.992 qpair failed and we were unable to recover it. 00:36:06.992 [2024-07-14 10:44:51.663799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.992 [2024-07-14 10:44:51.663831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.992 qpair failed and we were unable to recover it. 00:36:06.992 [2024-07-14 10:44:51.664030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.992 [2024-07-14 10:44:51.664061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.992 qpair failed and we were unable to recover it. 00:36:06.992 [2024-07-14 10:44:51.664242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.992 [2024-07-14 10:44:51.664273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.992 qpair failed and we were unable to recover it. 00:36:06.992 [2024-07-14 10:44:51.664402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.992 [2024-07-14 10:44:51.664433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.992 qpair failed and we were unable to recover it. 00:36:06.992 [2024-07-14 10:44:51.664620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.992 [2024-07-14 10:44:51.664652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.992 qpair failed and we were unable to recover it. 
00:36:06.992 [2024-07-14 10:44:51.664914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.992 [2024-07-14 10:44:51.664945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.992 qpair failed and we were unable to recover it. 00:36:06.992 [2024-07-14 10:44:51.665141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.992 [2024-07-14 10:44:51.665172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.992 qpair failed and we were unable to recover it. 00:36:06.992 [2024-07-14 10:44:51.665289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.992 [2024-07-14 10:44:51.665320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.992 qpair failed and we were unable to recover it. 00:36:06.992 [2024-07-14 10:44:51.665565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.992 [2024-07-14 10:44:51.665595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.992 qpair failed and we were unable to recover it. 00:36:06.992 [2024-07-14 10:44:51.665720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.992 [2024-07-14 10:44:51.665751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.992 qpair failed and we were unable to recover it. 00:36:06.992 [2024-07-14 10:44:51.665993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.992 [2024-07-14 10:44:51.666023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.992 qpair failed and we were unable to recover it. 00:36:06.992 [2024-07-14 10:44:51.666153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.992 [2024-07-14 10:44:51.666182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.992 qpair failed and we were unable to recover it. 00:36:06.992 [2024-07-14 10:44:51.666367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.992 [2024-07-14 10:44:51.666399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.992 qpair failed and we were unable to recover it. 00:36:06.992 [2024-07-14 10:44:51.666644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.992 [2024-07-14 10:44:51.666675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.992 qpair failed and we were unable to recover it. 00:36:06.992 [2024-07-14 10:44:51.666816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.992 [2024-07-14 10:44:51.666847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.992 qpair failed and we were unable to recover it. 
00:36:06.993 [2024-07-14 10:44:51.666973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.993 [2024-07-14 10:44:51.667004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.993 qpair failed and we were unable to recover it. 00:36:06.993 [2024-07-14 10:44:51.667271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.993 [2024-07-14 10:44:51.667302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.993 qpair failed and we were unable to recover it. 00:36:06.993 [2024-07-14 10:44:51.667494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.993 [2024-07-14 10:44:51.667530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.993 qpair failed and we were unable to recover it. 00:36:06.993 [2024-07-14 10:44:51.667672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.993 [2024-07-14 10:44:51.667703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.993 qpair failed and we were unable to recover it. 00:36:06.993 [2024-07-14 10:44:51.667825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.993 [2024-07-14 10:44:51.667856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.993 qpair failed and we were unable to recover it. 00:36:06.993 [2024-07-14 10:44:51.667992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.993 [2024-07-14 10:44:51.668022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.993 qpair failed and we were unable to recover it. 00:36:06.993 [2024-07-14 10:44:51.668213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.993 [2024-07-14 10:44:51.668253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.993 qpair failed and we were unable to recover it. 00:36:06.993 [2024-07-14 10:44:51.668361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.993 [2024-07-14 10:44:51.668392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.993 qpair failed and we were unable to recover it. 00:36:06.993 [2024-07-14 10:44:51.668564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.993 [2024-07-14 10:44:51.668595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.993 qpair failed and we were unable to recover it. 00:36:06.993 [2024-07-14 10:44:51.668786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.993 [2024-07-14 10:44:51.668816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.993 qpair failed and we were unable to recover it. 
00:36:06.993 [2024-07-14 10:44:51.669055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.993 [2024-07-14 10:44:51.669085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.993 qpair failed and we were unable to recover it. 00:36:06.993 [2024-07-14 10:44:51.669262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.993 [2024-07-14 10:44:51.669293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.993 qpair failed and we were unable to recover it. 00:36:06.993 [2024-07-14 10:44:51.669568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.993 [2024-07-14 10:44:51.669599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.993 qpair failed and we were unable to recover it. 00:36:06.993 [2024-07-14 10:44:51.669789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.993 [2024-07-14 10:44:51.669819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.993 qpair failed and we were unable to recover it. 00:36:06.993 [2024-07-14 10:44:51.670098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.993 [2024-07-14 10:44:51.670129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.993 qpair failed and we were unable to recover it. 00:36:06.993 [2024-07-14 10:44:51.670262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.993 [2024-07-14 10:44:51.670294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.993 qpair failed and we were unable to recover it. 00:36:06.993 [2024-07-14 10:44:51.670429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.993 [2024-07-14 10:44:51.670460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.993 qpair failed and we were unable to recover it. 00:36:06.993 [2024-07-14 10:44:51.670597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.993 [2024-07-14 10:44:51.670627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.993 qpair failed and we were unable to recover it. 00:36:06.993 [2024-07-14 10:44:51.670800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.993 [2024-07-14 10:44:51.670831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.993 qpair failed and we were unable to recover it. 00:36:06.993 [2024-07-14 10:44:51.671023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.993 [2024-07-14 10:44:51.671054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.993 qpair failed and we were unable to recover it. 
00:36:06.993 [2024-07-14 10:44:51.671247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.993 [2024-07-14 10:44:51.671279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.993 qpair failed and we were unable to recover it. 00:36:06.993 [2024-07-14 10:44:51.671396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.993 [2024-07-14 10:44:51.671427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.993 qpair failed and we were unable to recover it. 00:36:06.993 [2024-07-14 10:44:51.671598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.993 [2024-07-14 10:44:51.671628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.993 qpair failed and we were unable to recover it. 00:36:06.993 [2024-07-14 10:44:51.671746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.993 [2024-07-14 10:44:51.671777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.993 qpair failed and we were unable to recover it. 00:36:06.993 [2024-07-14 10:44:51.671972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.993 [2024-07-14 10:44:51.672002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.993 qpair failed and we were unable to recover it. 00:36:06.993 [2024-07-14 10:44:51.672176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.993 [2024-07-14 10:44:51.672205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.993 qpair failed and we were unable to recover it. 00:36:06.993 [2024-07-14 10:44:51.672409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.993 [2024-07-14 10:44:51.672440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.993 qpair failed and we were unable to recover it. 00:36:06.993 [2024-07-14 10:44:51.672618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.993 [2024-07-14 10:44:51.672649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.993 qpair failed and we were unable to recover it. 00:36:06.993 [2024-07-14 10:44:51.672833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.993 [2024-07-14 10:44:51.672863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.993 qpair failed and we were unable to recover it. 00:36:06.993 [2024-07-14 10:44:51.673108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.993 [2024-07-14 10:44:51.673138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.993 qpair failed and we were unable to recover it. 
00:36:06.993 [2024-07-14 10:44:51.673413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.993 [2024-07-14 10:44:51.673444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.993 qpair failed and we were unable to recover it. 00:36:06.993 [2024-07-14 10:44:51.673577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.993 [2024-07-14 10:44:51.673607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.993 qpair failed and we were unable to recover it. 00:36:06.993 [2024-07-14 10:44:51.673850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.993 [2024-07-14 10:44:51.673879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.993 qpair failed and we were unable to recover it. 00:36:06.993 [2024-07-14 10:44:51.674080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.993 [2024-07-14 10:44:51.674111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.993 qpair failed and we were unable to recover it. 00:36:06.993 [2024-07-14 10:44:51.674357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.993 [2024-07-14 10:44:51.674389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.993 qpair failed and we were unable to recover it. 00:36:06.993 [2024-07-14 10:44:51.674576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.993 [2024-07-14 10:44:51.674606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.993 qpair failed and we were unable to recover it. 00:36:06.993 [2024-07-14 10:44:51.674820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.993 [2024-07-14 10:44:51.674850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.993 qpair failed and we were unable to recover it. 00:36:06.993 [2024-07-14 10:44:51.675029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.993 [2024-07-14 10:44:51.675061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.993 qpair failed and we were unable to recover it. 00:36:06.993 [2024-07-14 10:44:51.675188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.993 [2024-07-14 10:44:51.675218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.993 qpair failed and we were unable to recover it. 00:36:06.993 [2024-07-14 10:44:51.675485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.993 [2024-07-14 10:44:51.675516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.993 qpair failed and we were unable to recover it. 
00:36:06.993 [2024-07-14 10:44:51.675702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.993 [2024-07-14 10:44:51.675732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.993 qpair failed and we were unable to recover it. 00:36:06.993 [2024-07-14 10:44:51.675917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.993 [2024-07-14 10:44:51.675948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.993 qpair failed and we were unable to recover it. 00:36:06.993 [2024-07-14 10:44:51.676190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.993 [2024-07-14 10:44:51.676236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.993 qpair failed and we were unable to recover it. 00:36:06.994 [2024-07-14 10:44:51.676488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.994 [2024-07-14 10:44:51.676519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.994 qpair failed and we were unable to recover it. 00:36:06.994 [2024-07-14 10:44:51.676631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.994 [2024-07-14 10:44:51.676661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.994 qpair failed and we were unable to recover it. 00:36:06.994 [2024-07-14 10:44:51.676838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.994 [2024-07-14 10:44:51.676868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.994 qpair failed and we were unable to recover it. 00:36:06.994 [2024-07-14 10:44:51.676987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.994 [2024-07-14 10:44:51.677017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.994 qpair failed and we were unable to recover it. 00:36:06.994 [2024-07-14 10:44:51.677235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.994 [2024-07-14 10:44:51.677267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.994 qpair failed and we were unable to recover it. 00:36:06.994 [2024-07-14 10:44:51.677444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.994 [2024-07-14 10:44:51.677475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.994 qpair failed and we were unable to recover it. 00:36:06.994 [2024-07-14 10:44:51.677589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.994 [2024-07-14 10:44:51.677619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.994 qpair failed and we were unable to recover it. 
00:36:06.994 [2024-07-14 10:44:51.677766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.994 [2024-07-14 10:44:51.677797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.994 qpair failed and we were unable to recover it. 00:36:06.994 [2024-07-14 10:44:51.678041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.994 [2024-07-14 10:44:51.678070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.994 qpair failed and we were unable to recover it. 00:36:06.994 [2024-07-14 10:44:51.678199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.994 [2024-07-14 10:44:51.678249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.994 qpair failed and we were unable to recover it. 00:36:06.994 [2024-07-14 10:44:51.678384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.994 [2024-07-14 10:44:51.678415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.994 qpair failed and we were unable to recover it. 00:36:06.994 [2024-07-14 10:44:51.678591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.994 [2024-07-14 10:44:51.678622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.994 qpair failed and we were unable to recover it. 00:36:06.994 [2024-07-14 10:44:51.678814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.994 [2024-07-14 10:44:51.678844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.994 qpair failed and we were unable to recover it. 00:36:06.994 [2024-07-14 10:44:51.678955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.994 [2024-07-14 10:44:51.678986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.994 qpair failed and we were unable to recover it. 00:36:06.994 [2024-07-14 10:44:51.679177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.994 [2024-07-14 10:44:51.679208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.994 qpair failed and we were unable to recover it. 00:36:06.994 [2024-07-14 10:44:51.679486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.994 [2024-07-14 10:44:51.679518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.994 qpair failed and we were unable to recover it. 00:36:06.994 [2024-07-14 10:44:51.679691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.994 [2024-07-14 10:44:51.679722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.994 qpair failed and we were unable to recover it. 
00:36:06.994 [2024-07-14 10:44:51.679965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.994 [2024-07-14 10:44:51.679995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.994 qpair failed and we were unable to recover it. 00:36:06.994 [2024-07-14 10:44:51.680170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.994 [2024-07-14 10:44:51.680201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.994 qpair failed and we were unable to recover it. 00:36:06.994 [2024-07-14 10:44:51.680478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.994 [2024-07-14 10:44:51.680511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.994 qpair failed and we were unable to recover it. 00:36:06.994 [2024-07-14 10:44:51.680706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.994 [2024-07-14 10:44:51.680737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.994 qpair failed and we were unable to recover it. 00:36:06.994 [2024-07-14 10:44:51.680927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.994 [2024-07-14 10:44:51.680958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.994 qpair failed and we were unable to recover it. 00:36:06.994 [2024-07-14 10:44:51.681175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.994 [2024-07-14 10:44:51.681205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.994 qpair failed and we were unable to recover it. 00:36:06.994 [2024-07-14 10:44:51.681423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.994 [2024-07-14 10:44:51.681455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.994 qpair failed and we were unable to recover it. 00:36:06.994 [2024-07-14 10:44:51.681734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.994 [2024-07-14 10:44:51.681765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.994 qpair failed and we were unable to recover it. 00:36:06.994 [2024-07-14 10:44:51.682007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.994 [2024-07-14 10:44:51.682037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.994 qpair failed and we were unable to recover it. 00:36:06.994 [2024-07-14 10:44:51.682306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.994 [2024-07-14 10:44:51.682338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.994 qpair failed and we were unable to recover it. 
00:36:06.994 [2024-07-14 10:44:51.682523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.994 [2024-07-14 10:44:51.682554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.994 qpair failed and we were unable to recover it. 00:36:06.994 [2024-07-14 10:44:51.682691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.994 [2024-07-14 10:44:51.682722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.994 qpair failed and we were unable to recover it. 00:36:06.994 [2024-07-14 10:44:51.682982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.994 [2024-07-14 10:44:51.683012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.994 qpair failed and we were unable to recover it. 00:36:06.994 [2024-07-14 10:44:51.683149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.994 [2024-07-14 10:44:51.683180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.994 qpair failed and we were unable to recover it. 00:36:06.994 [2024-07-14 10:44:51.683299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.994 [2024-07-14 10:44:51.683331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.994 qpair failed and we were unable to recover it. 00:36:06.994 [2024-07-14 10:44:51.683517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.994 [2024-07-14 10:44:51.683547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.994 qpair failed and we were unable to recover it. 00:36:06.994 [2024-07-14 10:44:51.683749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.994 [2024-07-14 10:44:51.683779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.994 qpair failed and we were unable to recover it. 00:36:06.994 [2024-07-14 10:44:51.683986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.994 [2024-07-14 10:44:51.684017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.994 qpair failed and we were unable to recover it. 00:36:06.994 [2024-07-14 10:44:51.684138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.994 [2024-07-14 10:44:51.684168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.994 qpair failed and we were unable to recover it. 00:36:06.994 [2024-07-14 10:44:51.684362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.994 [2024-07-14 10:44:51.684394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.994 qpair failed and we were unable to recover it. 
00:36:06.994 [2024-07-14 10:44:51.684571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.994 [2024-07-14 10:44:51.684601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.994 qpair failed and we were unable to recover it. 00:36:06.994 [2024-07-14 10:44:51.684777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.994 [2024-07-14 10:44:51.684808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.994 qpair failed and we were unable to recover it. 00:36:06.994 [2024-07-14 10:44:51.684926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.994 [2024-07-14 10:44:51.684963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.994 qpair failed and we were unable to recover it. 00:36:06.994 [2024-07-14 10:44:51.685165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.994 [2024-07-14 10:44:51.685196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.994 qpair failed and we were unable to recover it. 00:36:06.994 [2024-07-14 10:44:51.685345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.994 [2024-07-14 10:44:51.685377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.994 qpair failed and we were unable to recover it. 00:36:06.994 [2024-07-14 10:44:51.685492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.994 [2024-07-14 10:44:51.685522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.994 qpair failed and we were unable to recover it. 00:36:06.994 [2024-07-14 10:44:51.685651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.994 [2024-07-14 10:44:51.685681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.994 qpair failed and we were unable to recover it. 00:36:06.995 [2024-07-14 10:44:51.685867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.995 [2024-07-14 10:44:51.685898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.995 qpair failed and we were unable to recover it. 00:36:06.995 [2024-07-14 10:44:51.686031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.995 [2024-07-14 10:44:51.686062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.995 qpair failed and we were unable to recover it. 00:36:06.995 [2024-07-14 10:44:51.686268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.995 [2024-07-14 10:44:51.686301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.995 qpair failed and we were unable to recover it. 
00:36:06.995 [2024-07-14 10:44:51.686427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.995 [2024-07-14 10:44:51.686458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.995 qpair failed and we were unable to recover it. 00:36:06.995 [2024-07-14 10:44:51.686698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.995 [2024-07-14 10:44:51.686728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.995 qpair failed and we were unable to recover it. 00:36:06.995 [2024-07-14 10:44:51.686850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.995 [2024-07-14 10:44:51.686880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.995 qpair failed and we were unable to recover it. 00:36:06.995 [2024-07-14 10:44:51.687051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.995 [2024-07-14 10:44:51.687081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.995 qpair failed and we were unable to recover it. 00:36:06.995 [2024-07-14 10:44:51.687221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.995 [2024-07-14 10:44:51.687259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.995 qpair failed and we were unable to recover it. 00:36:06.995 [2024-07-14 10:44:51.687458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.995 [2024-07-14 10:44:51.687489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.995 qpair failed and we were unable to recover it. 00:36:06.995 [2024-07-14 10:44:51.687671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.995 [2024-07-14 10:44:51.687702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.995 qpair failed and we were unable to recover it. 00:36:06.995 [2024-07-14 10:44:51.687964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.995 [2024-07-14 10:44:51.687995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.995 qpair failed and we were unable to recover it. 00:36:06.995 [2024-07-14 10:44:51.688102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.995 [2024-07-14 10:44:51.688132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.995 qpair failed and we were unable to recover it. 00:36:06.995 [2024-07-14 10:44:51.688326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.995 [2024-07-14 10:44:51.688358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.995 qpair failed and we were unable to recover it. 
00:36:06.995 [2024-07-14 10:44:51.688551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.995 [2024-07-14 10:44:51.688581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.995 qpair failed and we were unable to recover it. 00:36:06.995 [2024-07-14 10:44:51.688691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.995 [2024-07-14 10:44:51.688721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.995 qpair failed and we were unable to recover it. 00:36:06.995 [2024-07-14 10:44:51.688842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.995 [2024-07-14 10:44:51.688872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.995 qpair failed and we were unable to recover it. 00:36:06.995 [2024-07-14 10:44:51.689083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.995 [2024-07-14 10:44:51.689113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.995 qpair failed and we were unable to recover it. 00:36:06.995 [2024-07-14 10:44:51.689327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.995 [2024-07-14 10:44:51.689359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.995 qpair failed and we were unable to recover it. 00:36:06.995 [2024-07-14 10:44:51.689550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.995 [2024-07-14 10:44:51.689581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.995 qpair failed and we were unable to recover it. 00:36:06.995 [2024-07-14 10:44:51.689796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.995 [2024-07-14 10:44:51.689827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.995 qpair failed and we were unable to recover it. 00:36:06.995 [2024-07-14 10:44:51.689962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.995 [2024-07-14 10:44:51.689993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.995 qpair failed and we were unable to recover it. 00:36:06.995 [2024-07-14 10:44:51.690315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.995 [2024-07-14 10:44:51.690347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:06.995 qpair failed and we were unable to recover it. 00:36:06.995 [2024-07-14 10:44:51.690506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.995 [2024-07-14 10:44:51.690576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.995 qpair failed and we were unable to recover it. 
00:36:06.995 [2024-07-14 10:44:51.690725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.995 [2024-07-14 10:44:51.690759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.995 qpair failed and we were unable to recover it. 00:36:06.995 [2024-07-14 10:44:51.690935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.995 [2024-07-14 10:44:51.690967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.995 qpair failed and we were unable to recover it. 00:36:06.995 [2024-07-14 10:44:51.691142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.995 [2024-07-14 10:44:51.691176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.995 qpair failed and we were unable to recover it. 00:36:06.995 [2024-07-14 10:44:51.691456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.995 [2024-07-14 10:44:51.691488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.995 qpair failed and we were unable to recover it. 00:36:06.995 [2024-07-14 10:44:51.691609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.995 [2024-07-14 10:44:51.691638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.995 qpair failed and we were unable to recover it. 00:36:06.995 [2024-07-14 10:44:51.691833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.995 [2024-07-14 10:44:51.691864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.995 qpair failed and we were unable to recover it. 00:36:06.995 [2024-07-14 10:44:51.692054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.995 [2024-07-14 10:44:51.692084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.995 qpair failed and we were unable to recover it. 00:36:06.995 [2024-07-14 10:44:51.692218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.995 [2024-07-14 10:44:51.692257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.995 qpair failed and we were unable to recover it. 00:36:06.995 [2024-07-14 10:44:51.692391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.995 [2024-07-14 10:44:51.692422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.995 qpair failed and we were unable to recover it. 00:36:06.995 [2024-07-14 10:44:51.692614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.995 [2024-07-14 10:44:51.692645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.995 qpair failed and we were unable to recover it. 
00:36:06.995 [2024-07-14 10:44:51.692816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.995 [2024-07-14 10:44:51.692847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.995 qpair failed and we were unable to recover it. 00:36:06.995 [2024-07-14 10:44:51.693114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.995 [2024-07-14 10:44:51.693145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.995 qpair failed and we were unable to recover it. 00:36:06.995 [2024-07-14 10:44:51.693339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.995 [2024-07-14 10:44:51.693379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.995 qpair failed and we were unable to recover it. 00:36:06.995 [2024-07-14 10:44:51.693646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.995 [2024-07-14 10:44:51.693677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.995 qpair failed and we were unable to recover it. 00:36:06.995 [2024-07-14 10:44:51.693807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.995 [2024-07-14 10:44:51.693837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.995 qpair failed and we were unable to recover it. 00:36:06.995 [2024-07-14 10:44:51.693981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.995 [2024-07-14 10:44:51.694012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.995 qpair failed and we were unable to recover it. 00:36:06.995 [2024-07-14 10:44:51.694187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.995 [2024-07-14 10:44:51.694217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.995 qpair failed and we were unable to recover it. 00:36:06.995 [2024-07-14 10:44:51.694498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.995 [2024-07-14 10:44:51.694529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.995 qpair failed and we were unable to recover it. 00:36:06.995 [2024-07-14 10:44:51.694655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.995 [2024-07-14 10:44:51.694685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.995 qpair failed and we were unable to recover it. 00:36:06.995 [2024-07-14 10:44:51.694815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.995 [2024-07-14 10:44:51.694847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.995 qpair failed and we were unable to recover it. 
00:36:06.995 [2024-07-14 10:44:51.695037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.995 [2024-07-14 10:44:51.695067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.995 qpair failed and we were unable to recover it. 00:36:06.996 [2024-07-14 10:44:51.695262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.996 [2024-07-14 10:44:51.695294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.996 qpair failed and we were unable to recover it. 00:36:06.996 [2024-07-14 10:44:51.695475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.996 [2024-07-14 10:44:51.695505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.996 qpair failed and we were unable to recover it. 00:36:06.996 [2024-07-14 10:44:51.695709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.996 [2024-07-14 10:44:51.695741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.996 qpair failed and we were unable to recover it. 00:36:06.996 [2024-07-14 10:44:51.695872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.996 [2024-07-14 10:44:51.695903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.996 qpair failed and we were unable to recover it. 00:36:06.996 [2024-07-14 10:44:51.696143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.996 [2024-07-14 10:44:51.696174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.996 qpair failed and we were unable to recover it. 00:36:06.996 [2024-07-14 10:44:51.696312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.996 [2024-07-14 10:44:51.696344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.996 qpair failed and we were unable to recover it. 00:36:06.996 [2024-07-14 10:44:51.696523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.996 [2024-07-14 10:44:51.696553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.996 qpair failed and we were unable to recover it. 00:36:06.996 [2024-07-14 10:44:51.696728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.996 [2024-07-14 10:44:51.696758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.996 qpair failed and we were unable to recover it. 00:36:06.996 [2024-07-14 10:44:51.696970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.996 [2024-07-14 10:44:51.697000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.996 qpair failed and we were unable to recover it. 
00:36:06.996 [2024-07-14 10:44:51.697144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.996 [2024-07-14 10:44:51.697175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.996 qpair failed and we were unable to recover it. 00:36:06.996 [2024-07-14 10:44:51.697366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.996 [2024-07-14 10:44:51.697399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.996 qpair failed and we were unable to recover it. 00:36:06.996 [2024-07-14 10:44:51.697609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.996 [2024-07-14 10:44:51.697640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.996 qpair failed and we were unable to recover it. 00:36:06.996 [2024-07-14 10:44:51.697884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.996 [2024-07-14 10:44:51.697915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.996 qpair failed and we were unable to recover it. 00:36:06.996 [2024-07-14 10:44:51.698105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.996 [2024-07-14 10:44:51.698136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.996 qpair failed and we were unable to recover it. 00:36:06.996 [2024-07-14 10:44:51.698335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.996 [2024-07-14 10:44:51.698367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.996 qpair failed and we were unable to recover it. 00:36:06.996 [2024-07-14 10:44:51.698561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.996 [2024-07-14 10:44:51.698593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.996 qpair failed and we were unable to recover it. 00:36:06.996 [2024-07-14 10:44:51.698789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.996 [2024-07-14 10:44:51.698820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.996 qpair failed and we were unable to recover it. 00:36:06.996 [2024-07-14 10:44:51.699017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.996 [2024-07-14 10:44:51.699048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.996 qpair failed and we were unable to recover it. 00:36:06.996 [2024-07-14 10:44:51.699297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.996 [2024-07-14 10:44:51.699329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.996 qpair failed and we were unable to recover it. 
00:36:06.996 [2024-07-14 10:44:51.699453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.996 [2024-07-14 10:44:51.699484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.996 qpair failed and we were unable to recover it. 00:36:06.996 [2024-07-14 10:44:51.699608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.996 [2024-07-14 10:44:51.699639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.996 qpair failed and we were unable to recover it. 00:36:06.996 [2024-07-14 10:44:51.699834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.996 [2024-07-14 10:44:51.699866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.996 qpair failed and we were unable to recover it. 00:36:06.996 [2024-07-14 10:44:51.700040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.996 [2024-07-14 10:44:51.700071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.996 qpair failed and we were unable to recover it. 00:36:06.996 [2024-07-14 10:44:51.700191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.996 [2024-07-14 10:44:51.700222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.996 qpair failed and we were unable to recover it. 00:36:06.996 [2024-07-14 10:44:51.700369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.996 [2024-07-14 10:44:51.700399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.996 qpair failed and we were unable to recover it. 00:36:06.996 [2024-07-14 10:44:51.700514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.996 [2024-07-14 10:44:51.700545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.996 qpair failed and we were unable to recover it. 00:36:06.996 [2024-07-14 10:44:51.700764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.996 [2024-07-14 10:44:51.700795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.996 qpair failed and we were unable to recover it. 00:36:06.996 [2024-07-14 10:44:51.700970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.996 [2024-07-14 10:44:51.701001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.996 qpair failed and we were unable to recover it. 00:36:06.996 [2024-07-14 10:44:51.701263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.996 [2024-07-14 10:44:51.701294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.996 qpair failed and we were unable to recover it. 
00:36:06.996 [2024-07-14 10:44:51.701463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.996 [2024-07-14 10:44:51.701494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.996 qpair failed and we were unable to recover it. 00:36:06.996 [2024-07-14 10:44:51.701776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.996 [2024-07-14 10:44:51.701807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.996 qpair failed and we were unable to recover it. 00:36:06.996 [2024-07-14 10:44:51.701932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.996 [2024-07-14 10:44:51.701968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.996 qpair failed and we were unable to recover it. 00:36:06.996 [2024-07-14 10:44:51.702171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.996 [2024-07-14 10:44:51.702202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.996 qpair failed and we were unable to recover it. 00:36:06.996 [2024-07-14 10:44:51.702411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.996 [2024-07-14 10:44:51.702443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.996 qpair failed and we were unable to recover it. 00:36:06.996 [2024-07-14 10:44:51.702586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.996 [2024-07-14 10:44:51.702617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.996 qpair failed and we were unable to recover it. 00:36:06.996 [2024-07-14 10:44:51.702739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.996 [2024-07-14 10:44:51.702770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.996 qpair failed and we were unable to recover it. 00:36:06.996 [2024-07-14 10:44:51.702958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.996 [2024-07-14 10:44:51.702989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.996 qpair failed and we were unable to recover it. 00:36:06.996 [2024-07-14 10:44:51.703165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.996 [2024-07-14 10:44:51.703195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.996 qpair failed and we were unable to recover it. 00:36:06.996 [2024-07-14 10:44:51.703474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.996 [2024-07-14 10:44:51.703508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.996 qpair failed and we were unable to recover it. 
00:36:06.997 [2024-07-14 10:44:51.703698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.997 [2024-07-14 10:44:51.703729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.997 qpair failed and we were unable to recover it. 00:36:06.997 [2024-07-14 10:44:51.703850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.997 [2024-07-14 10:44:51.703881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.997 qpair failed and we were unable to recover it. 00:36:06.997 [2024-07-14 10:44:51.704077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.997 [2024-07-14 10:44:51.704108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.997 qpair failed and we were unable to recover it. 00:36:06.997 [2024-07-14 10:44:51.704241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.997 [2024-07-14 10:44:51.704273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.997 qpair failed and we were unable to recover it. 00:36:06.997 [2024-07-14 10:44:51.704408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.997 [2024-07-14 10:44:51.704440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.997 qpair failed and we were unable to recover it. 00:36:06.997 [2024-07-14 10:44:51.704649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.997 [2024-07-14 10:44:51.704680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.997 qpair failed and we were unable to recover it. 00:36:06.997 [2024-07-14 10:44:51.704825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.997 [2024-07-14 10:44:51.704857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.997 qpair failed and we were unable to recover it. 00:36:06.997 [2024-07-14 10:44:51.704973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.997 [2024-07-14 10:44:51.705003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.997 qpair failed and we were unable to recover it. 00:36:06.997 [2024-07-14 10:44:51.705145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.997 [2024-07-14 10:44:51.705176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.997 qpair failed and we were unable to recover it. 00:36:06.997 [2024-07-14 10:44:51.705385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.997 [2024-07-14 10:44:51.705417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.997 qpair failed and we were unable to recover it. 
00:36:06.997 [2024-07-14 10:44:51.705690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.997 [2024-07-14 10:44:51.705721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.997 qpair failed and we were unable to recover it. 00:36:06.997 [2024-07-14 10:44:51.705924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.997 [2024-07-14 10:44:51.705955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.997 qpair failed and we were unable to recover it. 00:36:06.997 [2024-07-14 10:44:51.706096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.997 [2024-07-14 10:44:51.706128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.997 qpair failed and we were unable to recover it. 00:36:06.997 [2024-07-14 10:44:51.706318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.997 [2024-07-14 10:44:51.706351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.997 qpair failed and we were unable to recover it. 00:36:06.997 [2024-07-14 10:44:51.706474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.997 [2024-07-14 10:44:51.706505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.997 qpair failed and we were unable to recover it. 00:36:06.997 [2024-07-14 10:44:51.706680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.997 [2024-07-14 10:44:51.706711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.997 qpair failed and we were unable to recover it. 00:36:06.997 [2024-07-14 10:44:51.706897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.997 [2024-07-14 10:44:51.706927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.997 qpair failed and we were unable to recover it. 00:36:06.997 [2024-07-14 10:44:51.707109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.997 [2024-07-14 10:44:51.707141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.997 qpair failed and we were unable to recover it. 00:36:06.997 [2024-07-14 10:44:51.707366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.997 [2024-07-14 10:44:51.707398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:06.997 qpair failed and we were unable to recover it. 00:36:06.997 [2024-07-14 10:44:51.707576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.997 [2024-07-14 10:44:51.707645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.997 qpair failed and we were unable to recover it. 
00:36:06.997 [2024-07-14 10:44:51.707868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.997 [2024-07-14 10:44:51.707903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.997 qpair failed and we were unable to recover it. 00:36:06.997 [2024-07-14 10:44:51.708118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.997 [2024-07-14 10:44:51.708150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.997 qpair failed and we were unable to recover it. 00:36:06.997 [2024-07-14 10:44:51.708293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.997 [2024-07-14 10:44:51.708327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.997 qpair failed and we were unable to recover it. 00:36:06.997 [2024-07-14 10:44:51.708524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.997 [2024-07-14 10:44:51.708558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.997 qpair failed and we were unable to recover it. 00:36:06.997 [2024-07-14 10:44:51.708838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.997 [2024-07-14 10:44:51.708872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.997 qpair failed and we were unable to recover it. 00:36:06.997 [2024-07-14 10:44:51.709064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.997 [2024-07-14 10:44:51.709095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.997 qpair failed and we were unable to recover it. 00:36:06.997 [2024-07-14 10:44:51.709283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.997 [2024-07-14 10:44:51.709319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.997 qpair failed and we were unable to recover it. 00:36:06.997 [2024-07-14 10:44:51.709518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.997 [2024-07-14 10:44:51.709551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.997 qpair failed and we were unable to recover it. 00:36:06.997 [2024-07-14 10:44:51.709764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.997 [2024-07-14 10:44:51.709794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.997 qpair failed and we were unable to recover it. 00:36:06.997 [2024-07-14 10:44:51.709939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.997 [2024-07-14 10:44:51.709970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.997 qpair failed and we were unable to recover it. 
00:36:06.997 [2024-07-14 10:44:51.710215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.997 [2024-07-14 10:44:51.710259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.997 qpair failed and we were unable to recover it. 00:36:06.997 [2024-07-14 10:44:51.710396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.997 [2024-07-14 10:44:51.710428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.997 qpair failed and we were unable to recover it. 00:36:06.997 [2024-07-14 10:44:51.710555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.997 [2024-07-14 10:44:51.710595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.997 qpair failed and we were unable to recover it. 00:36:06.997 [2024-07-14 10:44:51.710772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.997 [2024-07-14 10:44:51.710802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.997 qpair failed and we were unable to recover it. 00:36:06.997 [2024-07-14 10:44:51.711049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.997 [2024-07-14 10:44:51.711079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.997 qpair failed and we were unable to recover it. 00:36:06.997 [2024-07-14 10:44:51.711267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.997 [2024-07-14 10:44:51.711300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.997 qpair failed and we were unable to recover it. 00:36:06.997 [2024-07-14 10:44:51.711494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.997 [2024-07-14 10:44:51.711525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.997 qpair failed and we were unable to recover it. 00:36:06.997 [2024-07-14 10:44:51.711660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.997 [2024-07-14 10:44:51.711690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.997 qpair failed and we were unable to recover it. 00:36:06.997 [2024-07-14 10:44:51.711924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.997 [2024-07-14 10:44:51.711955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.997 qpair failed and we were unable to recover it. 00:36:06.997 [2024-07-14 10:44:51.712073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.997 [2024-07-14 10:44:51.712103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.997 qpair failed and we were unable to recover it. 
00:36:06.997 [2024-07-14 10:44:51.712300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.997 [2024-07-14 10:44:51.712331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.997 qpair failed and we were unable to recover it. 00:36:06.997 [2024-07-14 10:44:51.712465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.997 [2024-07-14 10:44:51.712496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.997 qpair failed and we were unable to recover it. 00:36:06.997 [2024-07-14 10:44:51.712675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.997 [2024-07-14 10:44:51.712706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.997 qpair failed and we were unable to recover it. 00:36:06.997 [2024-07-14 10:44:51.712849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.997 [2024-07-14 10:44:51.712880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.997 qpair failed and we were unable to recover it. 00:36:06.997 [2024-07-14 10:44:51.713162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.998 [2024-07-14 10:44:51.713192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.998 qpair failed and we were unable to recover it. 00:36:06.998 [2024-07-14 10:44:51.713470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.998 [2024-07-14 10:44:51.713502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.998 qpair failed and we were unable to recover it. 00:36:06.998 [2024-07-14 10:44:51.713643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.998 [2024-07-14 10:44:51.713674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.998 qpair failed and we were unable to recover it. 00:36:06.998 [2024-07-14 10:44:51.713935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.998 [2024-07-14 10:44:51.713966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.998 qpair failed and we were unable to recover it. 00:36:06.998 [2024-07-14 10:44:51.714085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.998 [2024-07-14 10:44:51.714116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.998 qpair failed and we were unable to recover it. 00:36:06.998 [2024-07-14 10:44:51.714303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.998 [2024-07-14 10:44:51.714335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.998 qpair failed and we were unable to recover it. 
00:36:06.998 [2024-07-14 10:44:51.714553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.998 [2024-07-14 10:44:51.714584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.998 qpair failed and we were unable to recover it. 00:36:06.998 [2024-07-14 10:44:51.714851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.998 [2024-07-14 10:44:51.714882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.998 qpair failed and we were unable to recover it. 00:36:06.998 [2024-07-14 10:44:51.715029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.998 [2024-07-14 10:44:51.715059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.998 qpair failed and we were unable to recover it. 00:36:06.998 [2024-07-14 10:44:51.715251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.998 [2024-07-14 10:44:51.715282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.998 qpair failed and we were unable to recover it. 00:36:06.998 [2024-07-14 10:44:51.715418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.998 [2024-07-14 10:44:51.715448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.998 qpair failed and we were unable to recover it. 00:36:06.998 [2024-07-14 10:44:51.715629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.998 [2024-07-14 10:44:51.715661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.998 qpair failed and we were unable to recover it. 00:36:06.998 [2024-07-14 10:44:51.715903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.998 [2024-07-14 10:44:51.715934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.998 qpair failed and we were unable to recover it. 00:36:06.998 [2024-07-14 10:44:51.716117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.998 [2024-07-14 10:44:51.716147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.998 qpair failed and we were unable to recover it. 00:36:06.998 [2024-07-14 10:44:51.716288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.998 [2024-07-14 10:44:51.716319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.998 qpair failed and we were unable to recover it. 00:36:06.998 [2024-07-14 10:44:51.716503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.998 [2024-07-14 10:44:51.716534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.998 qpair failed and we were unable to recover it. 
00:36:06.998 [2024-07-14 10:44:51.716792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.998 [2024-07-14 10:44:51.716823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.998 qpair failed and we were unable to recover it. 00:36:06.998 [2024-07-14 10:44:51.717092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.998 [2024-07-14 10:44:51.717124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.998 qpair failed and we were unable to recover it. 00:36:06.998 [2024-07-14 10:44:51.717256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.998 [2024-07-14 10:44:51.717288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.998 qpair failed and we were unable to recover it. 00:36:06.998 [2024-07-14 10:44:51.717467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.998 [2024-07-14 10:44:51.717498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.998 qpair failed and we were unable to recover it. 00:36:06.998 [2024-07-14 10:44:51.717621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.998 [2024-07-14 10:44:51.717651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.998 qpair failed and we were unable to recover it. 00:36:06.998 [2024-07-14 10:44:51.717828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.998 [2024-07-14 10:44:51.717858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.998 qpair failed and we were unable to recover it. 00:36:06.998 [2024-07-14 10:44:51.718075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.998 [2024-07-14 10:44:51.718106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.998 qpair failed and we were unable to recover it. 00:36:06.998 [2024-07-14 10:44:51.718360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.998 [2024-07-14 10:44:51.718392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.998 qpair failed and we were unable to recover it. 00:36:06.998 [2024-07-14 10:44:51.718639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.998 [2024-07-14 10:44:51.718670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.998 qpair failed and we were unable to recover it. 00:36:06.998 [2024-07-14 10:44:51.718856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.998 [2024-07-14 10:44:51.718886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.998 qpair failed and we were unable to recover it. 
00:36:06.998 [2024-07-14 10:44:51.719076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.998 [2024-07-14 10:44:51.719107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.998 qpair failed and we were unable to recover it. 00:36:06.998 [2024-07-14 10:44:51.719234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.998 [2024-07-14 10:44:51.719267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.998 qpair failed and we were unable to recover it. 00:36:06.998 [2024-07-14 10:44:51.719515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.998 [2024-07-14 10:44:51.719551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.998 qpair failed and we were unable to recover it. 00:36:06.998 [2024-07-14 10:44:51.719745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.998 [2024-07-14 10:44:51.719776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.998 qpair failed and we were unable to recover it. 00:36:06.998 [2024-07-14 10:44:51.719902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.998 [2024-07-14 10:44:51.719932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.998 qpair failed and we were unable to recover it. 00:36:06.998 [2024-07-14 10:44:51.720197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.998 [2024-07-14 10:44:51.720235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.998 qpair failed and we were unable to recover it. 00:36:06.998 [2024-07-14 10:44:51.720456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.998 [2024-07-14 10:44:51.720486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.998 qpair failed and we were unable to recover it. 00:36:06.998 [2024-07-14 10:44:51.720605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.998 [2024-07-14 10:44:51.720636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.998 qpair failed and we were unable to recover it. 00:36:06.998 [2024-07-14 10:44:51.720822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.998 [2024-07-14 10:44:51.720853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.998 qpair failed and we were unable to recover it. 00:36:06.998 [2024-07-14 10:44:51.720988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.998 [2024-07-14 10:44:51.721020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.998 qpair failed and we were unable to recover it. 
00:36:06.998 [2024-07-14 10:44:51.721133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.998 [2024-07-14 10:44:51.721164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.998 qpair failed and we were unable to recover it. 00:36:06.998 [2024-07-14 10:44:51.721359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.998 [2024-07-14 10:44:51.721391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.998 qpair failed and we were unable to recover it. 00:36:06.998 [2024-07-14 10:44:51.721569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.998 [2024-07-14 10:44:51.721600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.998 qpair failed and we were unable to recover it. 00:36:06.998 [2024-07-14 10:44:51.721728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.998 [2024-07-14 10:44:51.721758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.998 qpair failed and we were unable to recover it. 00:36:06.998 [2024-07-14 10:44:51.721935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.998 [2024-07-14 10:44:51.721966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.998 qpair failed and we were unable to recover it. 00:36:06.998 [2024-07-14 10:44:51.722105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.998 [2024-07-14 10:44:51.722134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.998 qpair failed and we were unable to recover it. 00:36:06.998 [2024-07-14 10:44:51.722334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.998 [2024-07-14 10:44:51.722366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.998 qpair failed and we were unable to recover it. 00:36:06.998 [2024-07-14 10:44:51.722501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.998 [2024-07-14 10:44:51.722532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.998 qpair failed and we were unable to recover it. 00:36:06.998 [2024-07-14 10:44:51.722710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.999 [2024-07-14 10:44:51.722741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.999 qpair failed and we were unable to recover it. 00:36:06.999 [2024-07-14 10:44:51.722858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.999 [2024-07-14 10:44:51.722888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.999 qpair failed and we were unable to recover it. 
00:36:06.999 [2024-07-14 10:44:51.723066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.999 [2024-07-14 10:44:51.723095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.999 qpair failed and we were unable to recover it. 00:36:06.999 [2024-07-14 10:44:51.723240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.999 [2024-07-14 10:44:51.723273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.999 qpair failed and we were unable to recover it. 00:36:06.999 [2024-07-14 10:44:51.723490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.999 [2024-07-14 10:44:51.723521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.999 qpair failed and we were unable to recover it. 00:36:06.999 [2024-07-14 10:44:51.723643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.999 [2024-07-14 10:44:51.723671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.999 qpair failed and we were unable to recover it. 00:36:06.999 [2024-07-14 10:44:51.723879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.999 [2024-07-14 10:44:51.723910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.999 qpair failed and we were unable to recover it. 00:36:06.999 [2024-07-14 10:44:51.724026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.999 [2024-07-14 10:44:51.724058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.999 qpair failed and we were unable to recover it. 00:36:06.999 [2024-07-14 10:44:51.724323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.999 [2024-07-14 10:44:51.724355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.999 qpair failed and we were unable to recover it. 00:36:06.999 [2024-07-14 10:44:51.724627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.999 [2024-07-14 10:44:51.724658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.999 qpair failed and we were unable to recover it. 00:36:06.999 [2024-07-14 10:44:51.724777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.999 [2024-07-14 10:44:51.724808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.999 qpair failed and we were unable to recover it. 00:36:06.999 [2024-07-14 10:44:51.724999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.999 [2024-07-14 10:44:51.725029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.999 qpair failed and we were unable to recover it. 
00:36:06.999 [2024-07-14 10:44:51.725161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.999 [2024-07-14 10:44:51.725192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.999 qpair failed and we were unable to recover it. 00:36:06.999 [2024-07-14 10:44:51.725374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.999 [2024-07-14 10:44:51.725406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.999 qpair failed and we were unable to recover it. 00:36:06.999 [2024-07-14 10:44:51.725633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.999 [2024-07-14 10:44:51.725664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.999 qpair failed and we were unable to recover it. 00:36:06.999 [2024-07-14 10:44:51.725913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.999 [2024-07-14 10:44:51.725943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.999 qpair failed and we were unable to recover it. 00:36:06.999 [2024-07-14 10:44:51.726118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.999 [2024-07-14 10:44:51.726148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.999 qpair failed and we were unable to recover it. 00:36:06.999 [2024-07-14 10:44:51.726340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.999 [2024-07-14 10:44:51.726371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.999 qpair failed and we were unable to recover it. 00:36:06.999 [2024-07-14 10:44:51.726557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.999 [2024-07-14 10:44:51.726588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.999 qpair failed and we were unable to recover it. 00:36:06.999 [2024-07-14 10:44:51.726714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.999 [2024-07-14 10:44:51.726745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.999 qpair failed and we were unable to recover it. 00:36:06.999 [2024-07-14 10:44:51.726880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.999 [2024-07-14 10:44:51.726910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.999 qpair failed and we were unable to recover it. 00:36:06.999 [2024-07-14 10:44:51.727098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.999 [2024-07-14 10:44:51.727129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.999 qpair failed and we were unable to recover it. 
00:36:06.999 [2024-07-14 10:44:51.727312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.999 [2024-07-14 10:44:51.727344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.999 qpair failed and we were unable to recover it. 00:36:06.999 [2024-07-14 10:44:51.727489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.999 [2024-07-14 10:44:51.727520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.999 qpair failed and we were unable to recover it. 00:36:06.999 [2024-07-14 10:44:51.727697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.999 [2024-07-14 10:44:51.727729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.999 qpair failed and we were unable to recover it. 00:36:06.999 [2024-07-14 10:44:51.727868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.999 [2024-07-14 10:44:51.727898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.999 qpair failed and we were unable to recover it. 00:36:06.999 [2024-07-14 10:44:51.728041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.999 [2024-07-14 10:44:51.728072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.999 qpair failed and we were unable to recover it. 00:36:06.999 [2024-07-14 10:44:51.728256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.999 [2024-07-14 10:44:51.728288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.999 qpair failed and we were unable to recover it. 00:36:06.999 [2024-07-14 10:44:51.728501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.999 [2024-07-14 10:44:51.728531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.999 qpair failed and we were unable to recover it. 00:36:06.999 [2024-07-14 10:44:51.728713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.999 [2024-07-14 10:44:51.728743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.999 qpair failed and we were unable to recover it. 00:36:06.999 [2024-07-14 10:44:51.728881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.999 [2024-07-14 10:44:51.728912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.999 qpair failed and we were unable to recover it. 00:36:06.999 [2024-07-14 10:44:51.729105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.999 [2024-07-14 10:44:51.729137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.999 qpair failed and we were unable to recover it. 
00:36:06.999 [2024-07-14 10:44:51.729439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.999 [2024-07-14 10:44:51.729471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.999 qpair failed and we were unable to recover it. 00:36:06.999 [2024-07-14 10:44:51.729677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.999 [2024-07-14 10:44:51.729709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.999 qpair failed and we were unable to recover it. 00:36:06.999 [2024-07-14 10:44:51.729888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.999 [2024-07-14 10:44:51.729919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.999 qpair failed and we were unable to recover it. 00:36:06.999 [2024-07-14 10:44:51.730120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.999 [2024-07-14 10:44:51.730150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.999 qpair failed and we were unable to recover it. 00:36:06.999 [2024-07-14 10:44:51.730296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.999 [2024-07-14 10:44:51.730327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.999 qpair failed and we were unable to recover it. 00:36:06.999 [2024-07-14 10:44:51.730460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.999 [2024-07-14 10:44:51.730491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.999 qpair failed and we were unable to recover it. 00:36:06.999 [2024-07-14 10:44:51.730681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.999 [2024-07-14 10:44:51.730712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.999 qpair failed and we were unable to recover it. 00:36:06.999 [2024-07-14 10:44:51.730834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.999 [2024-07-14 10:44:51.730864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.999 qpair failed and we were unable to recover it. 00:36:06.999 [2024-07-14 10:44:51.731042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.999 [2024-07-14 10:44:51.731072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:06.999 qpair failed and we were unable to recover it. 00:36:06.999 [2024-07-14 10:44:51.731288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.000 [2024-07-14 10:44:51.731321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.000 qpair failed and we were unable to recover it. 
00:36:07.000 [2024-07-14 10:44:51.731509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.000 [2024-07-14 10:44:51.731540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.000 qpair failed and we were unable to recover it. 00:36:07.000 [2024-07-14 10:44:51.731729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.000 [2024-07-14 10:44:51.731760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.000 qpair failed and we were unable to recover it. 00:36:07.000 [2024-07-14 10:44:51.731873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.000 [2024-07-14 10:44:51.731904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.000 qpair failed and we were unable to recover it. 00:36:07.000 [2024-07-14 10:44:51.732128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.000 [2024-07-14 10:44:51.732158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.000 qpair failed and we were unable to recover it. 00:36:07.000 [2024-07-14 10:44:51.732279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.000 [2024-07-14 10:44:51.732309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.000 qpair failed and we were unable to recover it. 00:36:07.000 [2024-07-14 10:44:51.732431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.000 [2024-07-14 10:44:51.732461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.000 qpair failed and we were unable to recover it. 00:36:07.000 [2024-07-14 10:44:51.732715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.000 [2024-07-14 10:44:51.732747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.000 qpair failed and we were unable to recover it. 00:36:07.000 [2024-07-14 10:44:51.733025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.000 [2024-07-14 10:44:51.733056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.000 qpair failed and we were unable to recover it. 00:36:07.000 [2024-07-14 10:44:51.733322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.000 [2024-07-14 10:44:51.733353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.000 qpair failed and we were unable to recover it. 00:36:07.000 [2024-07-14 10:44:51.733460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.000 [2024-07-14 10:44:51.733496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.000 qpair failed and we were unable to recover it. 
00:36:07.000 [2024-07-14 10:44:51.733698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.000 [2024-07-14 10:44:51.733728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.000 qpair failed and we were unable to recover it. 00:36:07.000 [2024-07-14 10:44:51.733919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.000 [2024-07-14 10:44:51.733950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.000 qpair failed and we were unable to recover it. 00:36:07.000 [2024-07-14 10:44:51.734219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.000 [2024-07-14 10:44:51.734261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.000 qpair failed and we were unable to recover it. 00:36:07.000 [2024-07-14 10:44:51.734456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.000 [2024-07-14 10:44:51.734487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.000 qpair failed and we were unable to recover it. 00:36:07.000 [2024-07-14 10:44:51.734634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.000 [2024-07-14 10:44:51.734663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.000 qpair failed and we were unable to recover it. 00:36:07.000 [2024-07-14 10:44:51.734844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.000 [2024-07-14 10:44:51.734873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.000 qpair failed and we were unable to recover it. 00:36:07.000 [2024-07-14 10:44:51.735115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.000 [2024-07-14 10:44:51.735146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.000 qpair failed and we were unable to recover it. 00:36:07.000 [2024-07-14 10:44:51.735270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.000 [2024-07-14 10:44:51.735303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.000 qpair failed and we were unable to recover it. 00:36:07.000 [2024-07-14 10:44:51.735526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.000 [2024-07-14 10:44:51.735558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.000 qpair failed and we were unable to recover it. 00:36:07.000 [2024-07-14 10:44:51.735684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.000 [2024-07-14 10:44:51.735714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.000 qpair failed and we were unable to recover it. 
00:36:07.000 [2024-07-14 10:44:51.735838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.000 [2024-07-14 10:44:51.735868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.000 qpair failed and we were unable to recover it. 00:36:07.000 [2024-07-14 10:44:51.736047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.000 [2024-07-14 10:44:51.736079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.000 qpair failed and we were unable to recover it. 00:36:07.000 [2024-07-14 10:44:51.736286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.000 [2024-07-14 10:44:51.736317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.000 qpair failed and we were unable to recover it. 00:36:07.000 [2024-07-14 10:44:51.736517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.000 [2024-07-14 10:44:51.736547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.000 qpair failed and we were unable to recover it. 00:36:07.000 [2024-07-14 10:44:51.736737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.000 [2024-07-14 10:44:51.736767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.000 qpair failed and we were unable to recover it. 00:36:07.000 [2024-07-14 10:44:51.736895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.000 [2024-07-14 10:44:51.736926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.000 qpair failed and we were unable to recover it. 00:36:07.000 [2024-07-14 10:44:51.737112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.000 [2024-07-14 10:44:51.737144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.000 qpair failed and we were unable to recover it. 00:36:07.000 [2024-07-14 10:44:51.737336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.000 [2024-07-14 10:44:51.737368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.000 qpair failed and we were unable to recover it. 00:36:07.000 [2024-07-14 10:44:51.737476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.000 [2024-07-14 10:44:51.737506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.000 qpair failed and we were unable to recover it. 00:36:07.000 [2024-07-14 10:44:51.737626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.000 [2024-07-14 10:44:51.737657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.000 qpair failed and we were unable to recover it. 
00:36:07.000 [2024-07-14 10:44:51.737855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.000 [2024-07-14 10:44:51.737886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.000 qpair failed and we were unable to recover it. 00:36:07.000 [2024-07-14 10:44:51.738104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.000 [2024-07-14 10:44:51.738134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.000 qpair failed and we were unable to recover it. 00:36:07.000 [2024-07-14 10:44:51.738245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.000 [2024-07-14 10:44:51.738276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.000 qpair failed and we were unable to recover it. 00:36:07.000 [2024-07-14 10:44:51.738528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.000 [2024-07-14 10:44:51.738560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.000 qpair failed and we were unable to recover it. 00:36:07.000 [2024-07-14 10:44:51.738676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.000 [2024-07-14 10:44:51.738707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.000 qpair failed and we were unable to recover it. 00:36:07.000 [2024-07-14 10:44:51.738826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.000 [2024-07-14 10:44:51.738856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.000 qpair failed and we were unable to recover it. 00:36:07.000 [2024-07-14 10:44:51.738978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.000 [2024-07-14 10:44:51.739010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.000 qpair failed and we were unable to recover it. 00:36:07.000 [2024-07-14 10:44:51.739200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.001 [2024-07-14 10:44:51.739241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.001 qpair failed and we were unable to recover it. 00:36:07.001 [2024-07-14 10:44:51.739425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.001 [2024-07-14 10:44:51.739456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.001 qpair failed and we were unable to recover it. 00:36:07.001 [2024-07-14 10:44:51.739574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.001 [2024-07-14 10:44:51.739604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.001 qpair failed and we were unable to recover it. 
00:36:07.001 [2024-07-14 10:44:51.739800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.001 [2024-07-14 10:44:51.739831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.001 qpair failed and we were unable to recover it. 00:36:07.001 [2024-07-14 10:44:51.739974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.001 [2024-07-14 10:44:51.740005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.001 qpair failed and we were unable to recover it. 00:36:07.001 [2024-07-14 10:44:51.740132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.001 [2024-07-14 10:44:51.740162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.001 qpair failed and we were unable to recover it. 00:36:07.001 [2024-07-14 10:44:51.740292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.001 [2024-07-14 10:44:51.740325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.001 qpair failed and we were unable to recover it. 00:36:07.001 [2024-07-14 10:44:51.740593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.001 [2024-07-14 10:44:51.740623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.001 qpair failed and we were unable to recover it. 00:36:07.001 [2024-07-14 10:44:51.740808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.001 [2024-07-14 10:44:51.740839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.001 qpair failed and we were unable to recover it. 00:36:07.001 [2024-07-14 10:44:51.740976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.001 [2024-07-14 10:44:51.741007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.001 qpair failed and we were unable to recover it. 00:36:07.001 [2024-07-14 10:44:51.741275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.001 [2024-07-14 10:44:51.741307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.001 qpair failed and we were unable to recover it. 00:36:07.001 [2024-07-14 10:44:51.741425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.001 [2024-07-14 10:44:51.741455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.001 qpair failed and we were unable to recover it. 00:36:07.001 [2024-07-14 10:44:51.741565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.001 [2024-07-14 10:44:51.741600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.001 qpair failed and we were unable to recover it. 
00:36:07.001 [2024-07-14 10:44:51.741733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.001 [2024-07-14 10:44:51.741763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.001 qpair failed and we were unable to recover it. 00:36:07.001 [2024-07-14 10:44:51.741934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.001 [2024-07-14 10:44:51.741965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.001 qpair failed and we were unable to recover it. 00:36:07.001 [2024-07-14 10:44:51.742104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.001 [2024-07-14 10:44:51.742136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.001 qpair failed and we were unable to recover it. 00:36:07.001 [2024-07-14 10:44:51.742313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.001 [2024-07-14 10:44:51.742345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.001 qpair failed and we were unable to recover it. 00:36:07.001 [2024-07-14 10:44:51.742479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.001 [2024-07-14 10:44:51.742509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.001 qpair failed and we were unable to recover it. 00:36:07.001 [2024-07-14 10:44:51.742639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.001 [2024-07-14 10:44:51.742670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.001 qpair failed and we were unable to recover it. 00:36:07.001 [2024-07-14 10:44:51.742855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.001 [2024-07-14 10:44:51.742886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.001 qpair failed and we were unable to recover it. 00:36:07.001 [2024-07-14 10:44:51.743012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.001 [2024-07-14 10:44:51.743042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.001 qpair failed and we were unable to recover it. 00:36:07.001 [2024-07-14 10:44:51.743305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.001 [2024-07-14 10:44:51.743336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.001 qpair failed and we were unable to recover it. 00:36:07.001 [2024-07-14 10:44:51.743573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.001 [2024-07-14 10:44:51.743604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.001 qpair failed and we were unable to recover it. 
00:36:07.001 [2024-07-14 10:44:51.743814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.001 [2024-07-14 10:44:51.743844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.001 qpair failed and we were unable to recover it. 00:36:07.001 [2024-07-14 10:44:51.744110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.001 [2024-07-14 10:44:51.744142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.001 qpair failed and we were unable to recover it. 00:36:07.001 [2024-07-14 10:44:51.744262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.001 [2024-07-14 10:44:51.744294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.001 qpair failed and we were unable to recover it. 00:36:07.001 [2024-07-14 10:44:51.744424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.001 [2024-07-14 10:44:51.744456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.001 qpair failed and we were unable to recover it. 00:36:07.001 [2024-07-14 10:44:51.744703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.001 [2024-07-14 10:44:51.744734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.001 qpair failed and we were unable to recover it. 00:36:07.001 [2024-07-14 10:44:51.745000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.001 [2024-07-14 10:44:51.745031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.001 qpair failed and we were unable to recover it. 00:36:07.001 [2024-07-14 10:44:51.745164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.001 [2024-07-14 10:44:51.745195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.001 qpair failed and we were unable to recover it. 00:36:07.001 [2024-07-14 10:44:51.745334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.001 [2024-07-14 10:44:51.745365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.001 qpair failed and we were unable to recover it. 00:36:07.001 [2024-07-14 10:44:51.745566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.001 [2024-07-14 10:44:51.745595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.001 qpair failed and we were unable to recover it. 00:36:07.001 [2024-07-14 10:44:51.745876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.001 [2024-07-14 10:44:51.745907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.001 qpair failed and we were unable to recover it. 
00:36:07.001 [2024-07-14 10:44:51.746020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.001 [2024-07-14 10:44:51.746051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.001 qpair failed and we were unable to recover it. 00:36:07.001 [2024-07-14 10:44:51.746242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.001 [2024-07-14 10:44:51.746275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.001 qpair failed and we were unable to recover it. 00:36:07.001 [2024-07-14 10:44:51.746387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.001 [2024-07-14 10:44:51.746417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.001 qpair failed and we were unable to recover it. 00:36:07.001 [2024-07-14 10:44:51.746615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.001 [2024-07-14 10:44:51.746647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.001 qpair failed and we were unable to recover it. 00:36:07.001 [2024-07-14 10:44:51.746781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.002 [2024-07-14 10:44:51.746812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.002 qpair failed and we were unable to recover it. 00:36:07.002 [2024-07-14 10:44:51.747007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.002 [2024-07-14 10:44:51.747038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.002 qpair failed and we were unable to recover it. 00:36:07.002 [2024-07-14 10:44:51.747171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.002 [2024-07-14 10:44:51.747202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.002 qpair failed and we were unable to recover it. 00:36:07.002 [2024-07-14 10:44:51.747413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.002 [2024-07-14 10:44:51.747445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.002 qpair failed and we were unable to recover it. 00:36:07.002 [2024-07-14 10:44:51.747653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.002 [2024-07-14 10:44:51.747683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.002 qpair failed and we were unable to recover it. 00:36:07.002 [2024-07-14 10:44:51.747822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.002 [2024-07-14 10:44:51.747853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.002 qpair failed and we were unable to recover it. 
00:36:07.002 [2024-07-14 10:44:51.748113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.002 [2024-07-14 10:44:51.748144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.002 qpair failed and we were unable to recover it. 00:36:07.002 [2024-07-14 10:44:51.748386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.002 [2024-07-14 10:44:51.748419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.002 qpair failed and we were unable to recover it. 00:36:07.002 [2024-07-14 10:44:51.748602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.002 [2024-07-14 10:44:51.748633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.002 qpair failed and we were unable to recover it. 00:36:07.002 [2024-07-14 10:44:51.748808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.002 [2024-07-14 10:44:51.748839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.002 qpair failed and we were unable to recover it. 00:36:07.002 [2024-07-14 10:44:51.748974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.002 [2024-07-14 10:44:51.749005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.002 qpair failed and we were unable to recover it. 00:36:07.002 [2024-07-14 10:44:51.749245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.002 [2024-07-14 10:44:51.749277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.002 qpair failed and we were unable to recover it. 00:36:07.002 [2024-07-14 10:44:51.749494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.002 [2024-07-14 10:44:51.749525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.002 qpair failed and we were unable to recover it. 00:36:07.002 [2024-07-14 10:44:51.749705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.002 [2024-07-14 10:44:51.749737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.002 qpair failed and we were unable to recover it. 00:36:07.002 [2024-07-14 10:44:51.749942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.002 [2024-07-14 10:44:51.749972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.002 qpair failed and we were unable to recover it. 00:36:07.002 [2024-07-14 10:44:51.750176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.002 [2024-07-14 10:44:51.750212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.002 qpair failed and we were unable to recover it. 
00:36:07.002 [2024-07-14 10:44:51.750348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.002 [2024-07-14 10:44:51.750378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.002 qpair failed and we were unable to recover it. 00:36:07.002 [2024-07-14 10:44:51.750503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.002 [2024-07-14 10:44:51.750534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.002 qpair failed and we were unable to recover it. 00:36:07.002 [2024-07-14 10:44:51.750724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.002 [2024-07-14 10:44:51.750755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.002 qpair failed and we were unable to recover it. 00:36:07.002 [2024-07-14 10:44:51.750876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.002 [2024-07-14 10:44:51.750907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.002 qpair failed and we were unable to recover it. 00:36:07.002 [2024-07-14 10:44:51.751025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.002 [2024-07-14 10:44:51.751054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.002 qpair failed and we were unable to recover it. 00:36:07.002 [2024-07-14 10:44:51.751202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.002 [2024-07-14 10:44:51.751240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.002 qpair failed and we were unable to recover it. 00:36:07.002 [2024-07-14 10:44:51.751508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.002 [2024-07-14 10:44:51.751539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.002 qpair failed and we were unable to recover it. 00:36:07.002 [2024-07-14 10:44:51.751667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.002 [2024-07-14 10:44:51.751698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.002 qpair failed and we were unable to recover it. 00:36:07.002 [2024-07-14 10:44:51.751909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.002 [2024-07-14 10:44:51.751939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.002 qpair failed and we were unable to recover it. 00:36:07.002 [2024-07-14 10:44:51.752267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.002 [2024-07-14 10:44:51.752300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.002 qpair failed and we were unable to recover it. 
00:36:07.002 [2024-07-14 10:44:51.752411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.002 [2024-07-14 10:44:51.752441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.002 qpair failed and we were unable to recover it. 00:36:07.002 [2024-07-14 10:44:51.752662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.002 [2024-07-14 10:44:51.752692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.002 qpair failed and we were unable to recover it. 00:36:07.002 [2024-07-14 10:44:51.752827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.002 [2024-07-14 10:44:51.752857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.002 qpair failed and we were unable to recover it. 00:36:07.002 [2024-07-14 10:44:51.753073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.002 [2024-07-14 10:44:51.753105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.002 qpair failed and we were unable to recover it. 00:36:07.002 [2024-07-14 10:44:51.753232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.002 [2024-07-14 10:44:51.753264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.002 qpair failed and we were unable to recover it. 00:36:07.002 [2024-07-14 10:44:51.753456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.002 [2024-07-14 10:44:51.753487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.002 qpair failed and we were unable to recover it. 00:36:07.002 [2024-07-14 10:44:51.753616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.002 [2024-07-14 10:44:51.753646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.002 qpair failed and we were unable to recover it. 00:36:07.002 [2024-07-14 10:44:51.753840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.002 [2024-07-14 10:44:51.753872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.002 qpair failed and we were unable to recover it. 00:36:07.002 [2024-07-14 10:44:51.754119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.002 [2024-07-14 10:44:51.754149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.002 qpair failed and we were unable to recover it. 00:36:07.002 [2024-07-14 10:44:51.754352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.002 [2024-07-14 10:44:51.754384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.002 qpair failed and we were unable to recover it. 
00:36:07.002 [2024-07-14 10:44:51.754592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.002 [2024-07-14 10:44:51.754622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.002 qpair failed and we were unable to recover it. 00:36:07.002 [2024-07-14 10:44:51.754801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.002 [2024-07-14 10:44:51.754832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.002 qpair failed and we were unable to recover it. 00:36:07.002 [2024-07-14 10:44:51.755014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.002 [2024-07-14 10:44:51.755045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.002 qpair failed and we were unable to recover it. 00:36:07.002 [2024-07-14 10:44:51.755242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.002 [2024-07-14 10:44:51.755273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.002 qpair failed and we were unable to recover it. 00:36:07.002 [2024-07-14 10:44:51.755483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.002 [2024-07-14 10:44:51.755514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.002 qpair failed and we were unable to recover it. 00:36:07.002 [2024-07-14 10:44:51.755710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.003 [2024-07-14 10:44:51.755740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.003 qpair failed and we were unable to recover it. 00:36:07.003 [2024-07-14 10:44:51.755859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.003 [2024-07-14 10:44:51.755889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.003 qpair failed and we were unable to recover it. 00:36:07.003 [2024-07-14 10:44:51.756037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.003 [2024-07-14 10:44:51.756068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.003 qpair failed and we were unable to recover it. 00:36:07.003 [2024-07-14 10:44:51.756269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.003 [2024-07-14 10:44:51.756301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.003 qpair failed and we were unable to recover it. 00:36:07.003 [2024-07-14 10:44:51.756487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.003 [2024-07-14 10:44:51.756517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.003 qpair failed and we were unable to recover it. 
00:36:07.003 [2024-07-14 10:44:51.756797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.003 [2024-07-14 10:44:51.756827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.003 qpair failed and we were unable to recover it. 00:36:07.003 [2024-07-14 10:44:51.756948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.003 [2024-07-14 10:44:51.756976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.003 qpair failed and we were unable to recover it. 00:36:07.003 [2024-07-14 10:44:51.757098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.003 [2024-07-14 10:44:51.757129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.003 qpair failed and we were unable to recover it. 00:36:07.003 [2024-07-14 10:44:51.757255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.003 [2024-07-14 10:44:51.757286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.003 qpair failed and we were unable to recover it. 00:36:07.003 [2024-07-14 10:44:51.757395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.003 [2024-07-14 10:44:51.757426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.003 qpair failed and we were unable to recover it. 00:36:07.003 [2024-07-14 10:44:51.757719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.003 [2024-07-14 10:44:51.757749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.003 qpair failed and we were unable to recover it. 00:36:07.003 [2024-07-14 10:44:51.757967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.003 [2024-07-14 10:44:51.757998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.003 qpair failed and we were unable to recover it. 00:36:07.003 [2024-07-14 10:44:51.758140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.003 [2024-07-14 10:44:51.758171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.003 qpair failed and we were unable to recover it. 00:36:07.003 [2024-07-14 10:44:51.758376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.003 [2024-07-14 10:44:51.758408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.003 qpair failed and we were unable to recover it. 00:36:07.003 [2024-07-14 10:44:51.758540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.003 [2024-07-14 10:44:51.758575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.003 qpair failed and we were unable to recover it. 
00:36:07.003 [2024-07-14 10:44:51.758764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.003 [2024-07-14 10:44:51.758793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.003 qpair failed and we were unable to recover it. 00:36:07.003 [2024-07-14 10:44:51.758932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.003 [2024-07-14 10:44:51.758963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.003 qpair failed and we were unable to recover it. 00:36:07.003 [2024-07-14 10:44:51.759081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.003 [2024-07-14 10:44:51.759109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.003 qpair failed and we were unable to recover it. 00:36:07.003 [2024-07-14 10:44:51.759303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.003 [2024-07-14 10:44:51.759333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.003 qpair failed and we were unable to recover it. 00:36:07.003 [2024-07-14 10:44:51.759478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.003 [2024-07-14 10:44:51.759510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.003 qpair failed and we were unable to recover it. 00:36:07.003 [2024-07-14 10:44:51.759691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.003 [2024-07-14 10:44:51.759721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.003 qpair failed and we were unable to recover it. 00:36:07.003 [2024-07-14 10:44:51.759934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.003 [2024-07-14 10:44:51.759965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.003 qpair failed and we were unable to recover it. 00:36:07.003 [2024-07-14 10:44:51.760084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.003 [2024-07-14 10:44:51.760114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.003 qpair failed and we were unable to recover it. 00:36:07.003 [2024-07-14 10:44:51.760308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.003 [2024-07-14 10:44:51.760340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.003 qpair failed and we were unable to recover it. 00:36:07.003 [2024-07-14 10:44:51.760546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.003 [2024-07-14 10:44:51.760577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.003 qpair failed and we were unable to recover it. 
00:36:07.003 [2024-07-14 10:44:51.760755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.003 [2024-07-14 10:44:51.760786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.003 qpair failed and we were unable to recover it. 00:36:07.003 [2024-07-14 10:44:51.760922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.003 [2024-07-14 10:44:51.760952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.003 qpair failed and we were unable to recover it. 00:36:07.003 [2024-07-14 10:44:51.761155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.003 [2024-07-14 10:44:51.761186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.003 qpair failed and we were unable to recover it. 00:36:07.003 [2024-07-14 10:44:51.761392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.003 [2024-07-14 10:44:51.761423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.003 qpair failed and we were unable to recover it. 00:36:07.003 [2024-07-14 10:44:51.761561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.003 [2024-07-14 10:44:51.761590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.003 qpair failed and we were unable to recover it. 00:36:07.003 [2024-07-14 10:44:51.761722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.003 [2024-07-14 10:44:51.761752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.003 qpair failed and we were unable to recover it. 00:36:07.003 [2024-07-14 10:44:51.761876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.003 [2024-07-14 10:44:51.761906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.003 qpair failed and we were unable to recover it. 00:36:07.003 [2024-07-14 10:44:51.762024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.003 [2024-07-14 10:44:51.762056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.003 qpair failed and we were unable to recover it. 00:36:07.003 [2024-07-14 10:44:51.762174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.003 [2024-07-14 10:44:51.762204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.003 qpair failed and we were unable to recover it. 00:36:07.003 [2024-07-14 10:44:51.762350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.003 [2024-07-14 10:44:51.762380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.003 qpair failed and we were unable to recover it. 
00:36:07.003 [2024-07-14 10:44:51.762501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.003 [2024-07-14 10:44:51.762531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.003 qpair failed and we were unable to recover it. 00:36:07.003 [2024-07-14 10:44:51.762726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.003 [2024-07-14 10:44:51.762755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.003 qpair failed and we were unable to recover it. 00:36:07.003 [2024-07-14 10:44:51.762932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.003 [2024-07-14 10:44:51.762962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.003 qpair failed and we were unable to recover it. 00:36:07.003 [2024-07-14 10:44:51.763083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.003 [2024-07-14 10:44:51.763114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.003 qpair failed and we were unable to recover it. 00:36:07.003 [2024-07-14 10:44:51.763359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.003 [2024-07-14 10:44:51.763390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.003 qpair failed and we were unable to recover it. 00:36:07.003 [2024-07-14 10:44:51.763586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.003 [2024-07-14 10:44:51.763616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.003 qpair failed and we were unable to recover it. 00:36:07.004 [2024-07-14 10:44:51.763893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.004 [2024-07-14 10:44:51.763924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.004 qpair failed and we were unable to recover it. 00:36:07.004 [2024-07-14 10:44:51.764052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.004 [2024-07-14 10:44:51.764083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.004 qpair failed and we were unable to recover it. 00:36:07.004 [2024-07-14 10:44:51.764201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.004 [2024-07-14 10:44:51.764247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.004 qpair failed and we were unable to recover it. 00:36:07.004 [2024-07-14 10:44:51.764491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.004 [2024-07-14 10:44:51.764522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.004 qpair failed and we were unable to recover it. 
00:36:07.004 [2024-07-14 10:44:51.764814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.004 [2024-07-14 10:44:51.764845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.004 qpair failed and we were unable to recover it. 00:36:07.004 [2024-07-14 10:44:51.764956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.004 [2024-07-14 10:44:51.764993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.004 qpair failed and we were unable to recover it. 00:36:07.004 [2024-07-14 10:44:51.765246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.004 [2024-07-14 10:44:51.765277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.004 qpair failed and we were unable to recover it. 00:36:07.004 [2024-07-14 10:44:51.765393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.004 [2024-07-14 10:44:51.765422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.004 qpair failed and we were unable to recover it. 00:36:07.004 [2024-07-14 10:44:51.765605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.004 [2024-07-14 10:44:51.765635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.004 qpair failed and we were unable to recover it. 00:36:07.004 [2024-07-14 10:44:51.765807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.004 [2024-07-14 10:44:51.765837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.004 qpair failed and we were unable to recover it. 00:36:07.004 [2024-07-14 10:44:51.766026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.004 [2024-07-14 10:44:51.766056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.004 qpair failed and we were unable to recover it. 00:36:07.004 [2024-07-14 10:44:51.766181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.004 [2024-07-14 10:44:51.766211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.004 qpair failed and we were unable to recover it. 00:36:07.004 [2024-07-14 10:44:51.766487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.004 [2024-07-14 10:44:51.766518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.004 qpair failed and we were unable to recover it. 00:36:07.004 [2024-07-14 10:44:51.766691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.004 [2024-07-14 10:44:51.766728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.004 qpair failed and we were unable to recover it. 
00:36:07.004 [2024-07-14 10:44:51.766972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.004 [2024-07-14 10:44:51.767003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.004 qpair failed and we were unable to recover it. 00:36:07.004 [2024-07-14 10:44:51.767195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.004 [2024-07-14 10:44:51.767235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.004 qpair failed and we were unable to recover it. 00:36:07.004 [2024-07-14 10:44:51.767367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.004 [2024-07-14 10:44:51.767398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.004 qpair failed and we were unable to recover it. 00:36:07.004 [2024-07-14 10:44:51.767534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.004 [2024-07-14 10:44:51.767564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.004 qpair failed and we were unable to recover it. 00:36:07.004 [2024-07-14 10:44:51.767746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.004 [2024-07-14 10:44:51.767777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.004 qpair failed and we were unable to recover it. 00:36:07.004 [2024-07-14 10:44:51.767957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.004 [2024-07-14 10:44:51.767988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.004 qpair failed and we were unable to recover it. 00:36:07.004 [2024-07-14 10:44:51.768169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.004 [2024-07-14 10:44:51.768199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.004 qpair failed and we were unable to recover it. 00:36:07.004 [2024-07-14 10:44:51.768350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.004 [2024-07-14 10:44:51.768381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.004 qpair failed and we were unable to recover it. 00:36:07.004 [2024-07-14 10:44:51.768641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.004 [2024-07-14 10:44:51.768670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.004 qpair failed and we were unable to recover it. 00:36:07.004 [2024-07-14 10:44:51.768929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.004 [2024-07-14 10:44:51.768959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.004 qpair failed and we were unable to recover it. 
00:36:07.004 [2024-07-14 10:44:51.769092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.004 [2024-07-14 10:44:51.769122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.004 qpair failed and we were unable to recover it. 00:36:07.004 [2024-07-14 10:44:51.769303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.004 [2024-07-14 10:44:51.769335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.004 qpair failed and we were unable to recover it. 00:36:07.004 [2024-07-14 10:44:51.769531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.004 [2024-07-14 10:44:51.769562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.004 qpair failed and we were unable to recover it. 00:36:07.004 [2024-07-14 10:44:51.769748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.004 [2024-07-14 10:44:51.769778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.004 qpair failed and we were unable to recover it. 00:36:07.004 [2024-07-14 10:44:51.770069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.004 [2024-07-14 10:44:51.770100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.004 qpair failed and we were unable to recover it. 00:36:07.004 [2024-07-14 10:44:51.770299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.004 [2024-07-14 10:44:51.770330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.004 qpair failed and we were unable to recover it. 00:36:07.004 [2024-07-14 10:44:51.770465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.004 [2024-07-14 10:44:51.770495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.004 qpair failed and we were unable to recover it. 00:36:07.004 [2024-07-14 10:44:51.770685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.004 [2024-07-14 10:44:51.770715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.004 qpair failed and we were unable to recover it. 00:36:07.004 [2024-07-14 10:44:51.770972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.004 [2024-07-14 10:44:51.771002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.004 qpair failed and we were unable to recover it. 00:36:07.004 [2024-07-14 10:44:51.771199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.004 [2024-07-14 10:44:51.771237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.004 qpair failed and we were unable to recover it. 
00:36:07.004 [2024-07-14 10:44:51.771444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.004 [2024-07-14 10:44:51.771474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.004 qpair failed and we were unable to recover it. 00:36:07.004 [2024-07-14 10:44:51.771720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.004 [2024-07-14 10:44:51.771750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.004 qpair failed and we were unable to recover it. 00:36:07.004 [2024-07-14 10:44:51.771945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.004 [2024-07-14 10:44:51.771974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.004 qpair failed and we were unable to recover it. 00:36:07.004 [2024-07-14 10:44:51.772094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.004 [2024-07-14 10:44:51.772124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.004 qpair failed and we were unable to recover it. 00:36:07.004 [2024-07-14 10:44:51.772249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.004 [2024-07-14 10:44:51.772280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.004 qpair failed and we were unable to recover it. 00:36:07.004 [2024-07-14 10:44:51.772412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.004 [2024-07-14 10:44:51.772442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.004 qpair failed and we were unable to recover it. 00:36:07.004 [2024-07-14 10:44:51.772578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.004 [2024-07-14 10:44:51.772610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.004 qpair failed and we were unable to recover it. 00:36:07.005 [2024-07-14 10:44:51.772786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.005 [2024-07-14 10:44:51.772817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.005 qpair failed and we were unable to recover it. 00:36:07.005 [2024-07-14 10:44:51.773083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.005 [2024-07-14 10:44:51.773113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.005 qpair failed and we were unable to recover it. 00:36:07.005 [2024-07-14 10:44:51.773306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.005 [2024-07-14 10:44:51.773339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.005 qpair failed and we were unable to recover it. 
00:36:07.005 [2024-07-14 10:44:51.773558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.005 [2024-07-14 10:44:51.773588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.005 qpair failed and we were unable to recover it. 00:36:07.005 [2024-07-14 10:44:51.773775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.005 [2024-07-14 10:44:51.773806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.005 qpair failed and we were unable to recover it. 00:36:07.005 [2024-07-14 10:44:51.774101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.005 [2024-07-14 10:44:51.774131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.005 qpair failed and we were unable to recover it. 00:36:07.005 [2024-07-14 10:44:51.774257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.005 [2024-07-14 10:44:51.774288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.005 qpair failed and we were unable to recover it. 00:36:07.005 [2024-07-14 10:44:51.774470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.005 [2024-07-14 10:44:51.774501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.005 qpair failed and we were unable to recover it. 00:36:07.005 [2024-07-14 10:44:51.774634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.005 [2024-07-14 10:44:51.774665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.005 qpair failed and we were unable to recover it. 00:36:07.005 [2024-07-14 10:44:51.774799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.005 [2024-07-14 10:44:51.774830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.005 qpair failed and we were unable to recover it. 00:36:07.005 [2024-07-14 10:44:51.775111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.005 [2024-07-14 10:44:51.775143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.005 qpair failed and we were unable to recover it. 00:36:07.005 [2024-07-14 10:44:51.775276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.005 [2024-07-14 10:44:51.775307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.005 qpair failed and we were unable to recover it. 00:36:07.005 [2024-07-14 10:44:51.775437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.005 [2024-07-14 10:44:51.775473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.005 qpair failed and we were unable to recover it. 
00:36:07.005 [2024-07-14 10:44:51.775601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.005 [2024-07-14 10:44:51.775631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.005 qpair failed and we were unable to recover it. 00:36:07.005 [2024-07-14 10:44:51.775819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.005 [2024-07-14 10:44:51.775849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.005 qpair failed and we were unable to recover it. 00:36:07.005 [2024-07-14 10:44:51.775961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.005 [2024-07-14 10:44:51.775991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.005 qpair failed and we were unable to recover it. 00:36:07.005 [2024-07-14 10:44:51.776190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.005 [2024-07-14 10:44:51.776221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.005 qpair failed and we were unable to recover it. 00:36:07.005 [2024-07-14 10:44:51.776534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.005 [2024-07-14 10:44:51.776566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.005 qpair failed and we were unable to recover it. 00:36:07.005 [2024-07-14 10:44:51.776760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.005 [2024-07-14 10:44:51.776790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.005 qpair failed and we were unable to recover it. 00:36:07.005 [2024-07-14 10:44:51.777003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.005 [2024-07-14 10:44:51.777034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.005 qpair failed and we were unable to recover it. 00:36:07.005 [2024-07-14 10:44:51.777245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.005 [2024-07-14 10:44:51.777277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.005 qpair failed and we were unable to recover it. 00:36:07.005 [2024-07-14 10:44:51.777538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.005 [2024-07-14 10:44:51.777570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.005 qpair failed and we were unable to recover it. 00:36:07.005 [2024-07-14 10:44:51.777760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.005 [2024-07-14 10:44:51.777791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.005 qpair failed and we were unable to recover it. 
00:36:07.005 [2024-07-14 10:44:51.777988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.005 [2024-07-14 10:44:51.778019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420
00:36:07.005 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() failed, errno = 111 / sock connection error / "qpair failed and we were unable to recover it.") repeats with only the timestamps changing, roughly 200 more times between 10:44:51.778212 and 10:44:51.821507 -- against tqpair=0x7fbe74000b90 for attempts starting up to 10:44:51.805599, and against tqpair=0x1b1fb60 from 10:44:51.805858 onward, always for addr=10.0.0.2, port=4420 ...]
00:36:07.010 [2024-07-14 10:44:51.821765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.010 [2024-07-14 10:44:51.821796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420
00:36:07.010 qpair failed and we were unable to recover it.
00:36:07.010 [2024-07-14 10:44:51.822035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.010 [2024-07-14 10:44:51.822066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.010 qpair failed and we were unable to recover it. 00:36:07.010 [2024-07-14 10:44:51.822264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.010 [2024-07-14 10:44:51.822297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.011 qpair failed and we were unable to recover it. 00:36:07.011 [2024-07-14 10:44:51.822434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.011 [2024-07-14 10:44:51.822466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.011 qpair failed and we were unable to recover it. 00:36:07.011 [2024-07-14 10:44:51.822604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.011 [2024-07-14 10:44:51.822635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.011 qpair failed and we were unable to recover it. 00:36:07.011 [2024-07-14 10:44:51.822810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.011 [2024-07-14 10:44:51.822841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.011 qpair failed and we were unable to recover it. 00:36:07.011 [2024-07-14 10:44:51.822970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.011 [2024-07-14 10:44:51.823001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.011 qpair failed and we were unable to recover it. 00:36:07.011 [2024-07-14 10:44:51.823189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.011 [2024-07-14 10:44:51.823220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.011 qpair failed and we were unable to recover it. 00:36:07.011 [2024-07-14 10:44:51.823352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.011 [2024-07-14 10:44:51.823389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.011 qpair failed and we were unable to recover it. 00:36:07.011 [2024-07-14 10:44:51.823632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.011 [2024-07-14 10:44:51.823663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.011 qpair failed and we were unable to recover it. 00:36:07.011 [2024-07-14 10:44:51.823872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.011 [2024-07-14 10:44:51.823903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.011 qpair failed and we were unable to recover it. 
00:36:07.011 [2024-07-14 10:44:51.824031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.011 [2024-07-14 10:44:51.824062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.011 qpair failed and we were unable to recover it. 00:36:07.011 [2024-07-14 10:44:51.824246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.011 [2024-07-14 10:44:51.824277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.011 qpair failed and we were unable to recover it. 00:36:07.011 [2024-07-14 10:44:51.824556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.011 [2024-07-14 10:44:51.824587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.011 qpair failed and we were unable to recover it. 00:36:07.011 [2024-07-14 10:44:51.824766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.011 [2024-07-14 10:44:51.824798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.011 qpair failed and we were unable to recover it. 00:36:07.011 [2024-07-14 10:44:51.825060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.011 [2024-07-14 10:44:51.825091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.011 qpair failed and we were unable to recover it. 00:36:07.011 [2024-07-14 10:44:51.825202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.011 [2024-07-14 10:44:51.825245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.011 qpair failed and we were unable to recover it. 00:36:07.011 [2024-07-14 10:44:51.825435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.011 [2024-07-14 10:44:51.825467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.011 qpair failed and we were unable to recover it. 00:36:07.011 [2024-07-14 10:44:51.825595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.011 [2024-07-14 10:44:51.825626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.011 qpair failed and we were unable to recover it. 00:36:07.011 [2024-07-14 10:44:51.825826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.011 [2024-07-14 10:44:51.825858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.011 qpair failed and we were unable to recover it. 00:36:07.011 [2024-07-14 10:44:51.825989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.011 [2024-07-14 10:44:51.826021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.011 qpair failed and we were unable to recover it. 
00:36:07.011 [2024-07-14 10:44:51.826147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.011 [2024-07-14 10:44:51.826178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.011 qpair failed and we were unable to recover it. 00:36:07.011 [2024-07-14 10:44:51.826373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.011 [2024-07-14 10:44:51.826406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.011 qpair failed and we were unable to recover it. 00:36:07.011 [2024-07-14 10:44:51.826632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.011 [2024-07-14 10:44:51.826663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.011 qpair failed and we were unable to recover it. 00:36:07.011 [2024-07-14 10:44:51.826796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.011 [2024-07-14 10:44:51.826827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.011 qpair failed and we were unable to recover it. 00:36:07.011 [2024-07-14 10:44:51.827076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.011 [2024-07-14 10:44:51.827107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.011 qpair failed and we were unable to recover it. 00:36:07.011 [2024-07-14 10:44:51.827289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.011 [2024-07-14 10:44:51.827321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.011 qpair failed and we were unable to recover it. 00:36:07.011 [2024-07-14 10:44:51.827515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.011 [2024-07-14 10:44:51.827546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.011 qpair failed and we were unable to recover it. 00:36:07.011 [2024-07-14 10:44:51.827834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.011 [2024-07-14 10:44:51.827866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.011 qpair failed and we were unable to recover it. 00:36:07.011 [2024-07-14 10:44:51.828052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.011 [2024-07-14 10:44:51.828083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.011 qpair failed and we were unable to recover it. 00:36:07.011 [2024-07-14 10:44:51.828326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.011 [2024-07-14 10:44:51.828359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.011 qpair failed and we were unable to recover it. 
00:36:07.011 [2024-07-14 10:44:51.828551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.011 [2024-07-14 10:44:51.828583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.011 qpair failed and we were unable to recover it. 00:36:07.011 [2024-07-14 10:44:51.828783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.011 [2024-07-14 10:44:51.828814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.011 qpair failed and we were unable to recover it. 00:36:07.011 [2024-07-14 10:44:51.828953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.011 [2024-07-14 10:44:51.828984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.011 qpair failed and we were unable to recover it. 00:36:07.011 [2024-07-14 10:44:51.829122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.011 [2024-07-14 10:44:51.829154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.011 qpair failed and we were unable to recover it. 00:36:07.011 [2024-07-14 10:44:51.829337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.011 [2024-07-14 10:44:51.829371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.011 qpair failed and we were unable to recover it. 00:36:07.011 [2024-07-14 10:44:51.829501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.011 [2024-07-14 10:44:51.829532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.011 qpair failed and we were unable to recover it. 00:36:07.011 [2024-07-14 10:44:51.829799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.011 [2024-07-14 10:44:51.829830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.011 qpair failed and we were unable to recover it. 00:36:07.011 [2024-07-14 10:44:51.830029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.011 [2024-07-14 10:44:51.830061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.011 qpair failed and we were unable to recover it. 00:36:07.011 [2024-07-14 10:44:51.830270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.011 [2024-07-14 10:44:51.830303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.011 qpair failed and we were unable to recover it. 00:36:07.011 [2024-07-14 10:44:51.830576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.011 [2024-07-14 10:44:51.830607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.011 qpair failed and we were unable to recover it. 
00:36:07.011 [2024-07-14 10:44:51.830736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.011 [2024-07-14 10:44:51.830767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.011 qpair failed and we were unable to recover it. 00:36:07.011 [2024-07-14 10:44:51.830888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.011 [2024-07-14 10:44:51.830919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.011 qpair failed and we were unable to recover it. 00:36:07.011 [2024-07-14 10:44:51.831029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.011 [2024-07-14 10:44:51.831060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.011 qpair failed and we were unable to recover it. 00:36:07.012 [2024-07-14 10:44:51.831245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.012 [2024-07-14 10:44:51.831277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.012 qpair failed and we were unable to recover it. 00:36:07.012 [2024-07-14 10:44:51.831412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.012 [2024-07-14 10:44:51.831443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.012 qpair failed and we were unable to recover it. 00:36:07.012 [2024-07-14 10:44:51.831562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.012 [2024-07-14 10:44:51.831593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.012 qpair failed and we were unable to recover it. 00:36:07.012 [2024-07-14 10:44:51.831793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.012 [2024-07-14 10:44:51.831823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.012 qpair failed and we were unable to recover it. 00:36:07.012 [2024-07-14 10:44:51.832027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.012 [2024-07-14 10:44:51.832058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.012 qpair failed and we were unable to recover it. 00:36:07.012 [2024-07-14 10:44:51.832199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.012 [2024-07-14 10:44:51.832241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.012 qpair failed and we were unable to recover it. 00:36:07.012 [2024-07-14 10:44:51.832433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.012 [2024-07-14 10:44:51.832463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.012 qpair failed and we were unable to recover it. 
00:36:07.012 [2024-07-14 10:44:51.832646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.012 [2024-07-14 10:44:51.832676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.012 qpair failed and we were unable to recover it. 00:36:07.012 [2024-07-14 10:44:51.832853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.012 [2024-07-14 10:44:51.832884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.012 qpair failed and we were unable to recover it. 00:36:07.012 [2024-07-14 10:44:51.833059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.012 [2024-07-14 10:44:51.833090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.012 qpair failed and we were unable to recover it. 00:36:07.012 [2024-07-14 10:44:51.833338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.012 [2024-07-14 10:44:51.833373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.012 qpair failed and we were unable to recover it. 00:36:07.012 [2024-07-14 10:44:51.833583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.012 [2024-07-14 10:44:51.833614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.012 qpair failed and we were unable to recover it. 00:36:07.012 [2024-07-14 10:44:51.833807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.012 [2024-07-14 10:44:51.833838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.012 qpair failed and we were unable to recover it. 00:36:07.012 [2024-07-14 10:44:51.833946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.012 [2024-07-14 10:44:51.833978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.012 qpair failed and we were unable to recover it. 00:36:07.012 [2024-07-14 10:44:51.834264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.012 [2024-07-14 10:44:51.834296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.012 qpair failed and we were unable to recover it. 00:36:07.012 [2024-07-14 10:44:51.834495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.012 [2024-07-14 10:44:51.834526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.012 qpair failed and we were unable to recover it. 00:36:07.012 [2024-07-14 10:44:51.834658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.012 [2024-07-14 10:44:51.834690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.012 qpair failed and we were unable to recover it. 
00:36:07.012 [2024-07-14 10:44:51.834835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.012 [2024-07-14 10:44:51.834866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.012 qpair failed and we were unable to recover it. 00:36:07.012 [2024-07-14 10:44:51.835108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.012 [2024-07-14 10:44:51.835139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.012 qpair failed and we were unable to recover it. 00:36:07.012 [2024-07-14 10:44:51.835388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.012 [2024-07-14 10:44:51.835423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.012 qpair failed and we were unable to recover it. 00:36:07.012 [2024-07-14 10:44:51.835539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.012 [2024-07-14 10:44:51.835570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.012 qpair failed and we were unable to recover it. 00:36:07.012 [2024-07-14 10:44:51.835776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.012 [2024-07-14 10:44:51.835808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.012 qpair failed and we were unable to recover it. 00:36:07.012 [2024-07-14 10:44:51.836007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.012 [2024-07-14 10:44:51.836037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.012 qpair failed and we were unable to recover it. 00:36:07.012 [2024-07-14 10:44:51.836194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.012 [2024-07-14 10:44:51.836244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.012 qpair failed and we were unable to recover it. 00:36:07.012 [2024-07-14 10:44:51.836420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.012 [2024-07-14 10:44:51.836451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.012 qpair failed and we were unable to recover it. 00:36:07.012 [2024-07-14 10:44:51.836582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.012 [2024-07-14 10:44:51.836613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.012 qpair failed and we were unable to recover it. 00:36:07.012 [2024-07-14 10:44:51.836801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.012 [2024-07-14 10:44:51.836833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.012 qpair failed and we were unable to recover it. 
00:36:07.012 [2024-07-14 10:44:51.837020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.012 [2024-07-14 10:44:51.837050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.012 qpair failed and we were unable to recover it. 00:36:07.012 [2024-07-14 10:44:51.837175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.012 [2024-07-14 10:44:51.837205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.012 qpair failed and we were unable to recover it. 00:36:07.012 [2024-07-14 10:44:51.837327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.012 [2024-07-14 10:44:51.837359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.012 qpair failed and we were unable to recover it. 00:36:07.012 [2024-07-14 10:44:51.837488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.012 [2024-07-14 10:44:51.837520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.012 qpair failed and we were unable to recover it. 00:36:07.012 [2024-07-14 10:44:51.837651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.012 [2024-07-14 10:44:51.837683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.012 qpair failed and we were unable to recover it. 00:36:07.012 [2024-07-14 10:44:51.837950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.012 [2024-07-14 10:44:51.837986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.012 qpair failed and we were unable to recover it. 00:36:07.012 [2024-07-14 10:44:51.838258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.012 [2024-07-14 10:44:51.838290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.012 qpair failed and we were unable to recover it. 00:36:07.012 [2024-07-14 10:44:51.838563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.012 [2024-07-14 10:44:51.838594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.012 qpair failed and we were unable to recover it. 00:36:07.012 [2024-07-14 10:44:51.838703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.012 [2024-07-14 10:44:51.838734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.012 qpair failed and we were unable to recover it. 00:36:07.012 [2024-07-14 10:44:51.838947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.012 [2024-07-14 10:44:51.838978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.012 qpair failed and we were unable to recover it. 
00:36:07.012 [2024-07-14 10:44:51.839166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.012 [2024-07-14 10:44:51.839197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.012 qpair failed and we were unable to recover it. 00:36:07.012 [2024-07-14 10:44:51.839337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.012 [2024-07-14 10:44:51.839372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.012 qpair failed and we were unable to recover it. 00:36:07.012 [2024-07-14 10:44:51.839493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.012 [2024-07-14 10:44:51.839524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.012 qpair failed and we were unable to recover it. 00:36:07.012 [2024-07-14 10:44:51.839661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.012 [2024-07-14 10:44:51.839692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.012 qpair failed and we were unable to recover it. 00:36:07.012 [2024-07-14 10:44:51.839871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.013 [2024-07-14 10:44:51.839902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.013 qpair failed and we were unable to recover it. 00:36:07.013 [2024-07-14 10:44:51.840038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.013 [2024-07-14 10:44:51.840068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.013 qpair failed and we were unable to recover it. 00:36:07.013 [2024-07-14 10:44:51.840262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.013 [2024-07-14 10:44:51.840295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.013 qpair failed and we were unable to recover it. 00:36:07.013 [2024-07-14 10:44:51.840416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.013 [2024-07-14 10:44:51.840448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.013 qpair failed and we were unable to recover it. 00:36:07.013 [2024-07-14 10:44:51.840629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.013 [2024-07-14 10:44:51.840660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.013 qpair failed and we were unable to recover it. 00:36:07.013 [2024-07-14 10:44:51.840807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.013 [2024-07-14 10:44:51.840838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.013 qpair failed and we were unable to recover it. 
00:36:07.013 [2024-07-14 10:44:51.841033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.013 [2024-07-14 10:44:51.841064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.013 qpair failed and we were unable to recover it. 00:36:07.013 [2024-07-14 10:44:51.841247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.013 [2024-07-14 10:44:51.841279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.013 qpair failed and we were unable to recover it. 00:36:07.013 [2024-07-14 10:44:51.841390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.013 [2024-07-14 10:44:51.841432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.013 qpair failed and we were unable to recover it. 00:36:07.013 [2024-07-14 10:44:51.841699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.013 [2024-07-14 10:44:51.841730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.013 qpair failed and we were unable to recover it. 00:36:07.013 [2024-07-14 10:44:51.841907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.013 [2024-07-14 10:44:51.841938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.013 qpair failed and we were unable to recover it. 00:36:07.013 [2024-07-14 10:44:51.842047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.013 [2024-07-14 10:44:51.842078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.013 qpair failed and we were unable to recover it. 00:36:07.013 [2024-07-14 10:44:51.842204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.013 [2024-07-14 10:44:51.842246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.013 qpair failed and we were unable to recover it. 00:36:07.013 [2024-07-14 10:44:51.842497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.013 [2024-07-14 10:44:51.842527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.013 qpair failed and we were unable to recover it. 00:36:07.013 [2024-07-14 10:44:51.842729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.013 [2024-07-14 10:44:51.842760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.013 qpair failed and we were unable to recover it. 00:36:07.013 [2024-07-14 10:44:51.842936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.013 [2024-07-14 10:44:51.842968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.013 qpair failed and we were unable to recover it. 
00:36:07.013 [2024-07-14 10:44:51.843099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.013 [2024-07-14 10:44:51.843130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.013 qpair failed and we were unable to recover it. 00:36:07.013 [2024-07-14 10:44:51.843325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.013 [2024-07-14 10:44:51.843360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.013 qpair failed and we were unable to recover it. 00:36:07.013 [2024-07-14 10:44:51.843488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.013 [2024-07-14 10:44:51.843526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.013 qpair failed and we were unable to recover it. 00:36:07.013 [2024-07-14 10:44:51.843641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.013 [2024-07-14 10:44:51.843672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.013 qpair failed and we were unable to recover it. 00:36:07.013 [2024-07-14 10:44:51.843804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.013 [2024-07-14 10:44:51.843835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.013 qpair failed and we were unable to recover it. 00:36:07.013 [2024-07-14 10:44:51.844022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.013 [2024-07-14 10:44:51.844054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.013 qpair failed and we were unable to recover it. 00:36:07.013 [2024-07-14 10:44:51.844275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.013 [2024-07-14 10:44:51.844307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.013 qpair failed and we were unable to recover it. 00:36:07.013 [2024-07-14 10:44:51.844497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.013 [2024-07-14 10:44:51.844528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.013 qpair failed and we were unable to recover it. 00:36:07.013 [2024-07-14 10:44:51.844669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.013 [2024-07-14 10:44:51.844700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.013 qpair failed and we were unable to recover it. 00:36:07.013 [2024-07-14 10:44:51.844821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.013 [2024-07-14 10:44:51.844851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.013 qpair failed and we were unable to recover it. 
00:36:07.013 [2024-07-14 10:44:51.844962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.013 [2024-07-14 10:44:51.844993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.013 qpair failed and we were unable to recover it. 00:36:07.013 [2024-07-14 10:44:51.845123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.013 [2024-07-14 10:44:51.845154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.013 qpair failed and we were unable to recover it. 00:36:07.013 [2024-07-14 10:44:51.845262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.013 [2024-07-14 10:44:51.845293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.013 qpair failed and we were unable to recover it. 00:36:07.013 [2024-07-14 10:44:51.845438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.013 [2024-07-14 10:44:51.845470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.013 qpair failed and we were unable to recover it. 00:36:07.013 [2024-07-14 10:44:51.845585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.013 [2024-07-14 10:44:51.845616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.013 qpair failed and we were unable to recover it. 00:36:07.013 [2024-07-14 10:44:51.845757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.013 [2024-07-14 10:44:51.845789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.013 qpair failed and we were unable to recover it. 00:36:07.013 [2024-07-14 10:44:51.845972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.013 [2024-07-14 10:44:51.846004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.013 qpair failed and we were unable to recover it. 00:36:07.013 [2024-07-14 10:44:51.846250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.013 [2024-07-14 10:44:51.846282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.013 qpair failed and we were unable to recover it. 00:36:07.013 [2024-07-14 10:44:51.846397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.013 [2024-07-14 10:44:51.846428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.013 qpair failed and we were unable to recover it. 00:36:07.013 [2024-07-14 10:44:51.846605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.013 [2024-07-14 10:44:51.846635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.013 qpair failed and we were unable to recover it. 
00:36:07.013 [2024-07-14 10:44:51.846750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.014 [2024-07-14 10:44:51.846781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.014 qpair failed and we were unable to recover it. 00:36:07.014 [2024-07-14 10:44:51.846914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.014 [2024-07-14 10:44:51.846945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.014 qpair failed and we were unable to recover it. 00:36:07.014 [2024-07-14 10:44:51.847119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.014 [2024-07-14 10:44:51.847149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.014 qpair failed and we were unable to recover it. 00:36:07.014 [2024-07-14 10:44:51.847353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.014 [2024-07-14 10:44:51.847388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.014 qpair failed and we were unable to recover it. 00:36:07.014 [2024-07-14 10:44:51.847533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.014 [2024-07-14 10:44:51.847564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.014 qpair failed and we were unable to recover it. 00:36:07.014 [2024-07-14 10:44:51.847744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.014 [2024-07-14 10:44:51.847774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.014 qpair failed and we were unable to recover it. 00:36:07.014 [2024-07-14 10:44:51.847960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.014 [2024-07-14 10:44:51.847991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.014 qpair failed and we were unable to recover it. 00:36:07.014 [2024-07-14 10:44:51.848118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.014 [2024-07-14 10:44:51.848149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.014 qpair failed and we were unable to recover it. 00:36:07.014 [2024-07-14 10:44:51.848327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.014 [2024-07-14 10:44:51.848359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.014 qpair failed and we were unable to recover it. 00:36:07.014 [2024-07-14 10:44:51.848486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.014 [2024-07-14 10:44:51.848517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.014 qpair failed and we were unable to recover it. 
00:36:07.014 [2024-07-14 10:44:51.848703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.014 [2024-07-14 10:44:51.848734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.014 qpair failed and we were unable to recover it. 
00:36:07.014-00:36:07.019 [2024-07-14 10:44:51.848703 .. 10:44:51.895570] (the same three-message sequence -- posix.c:1038:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it -- repeats for roughly 210 consecutive connection attempts in this interval, every attempt failing identically) 
00:36:07.019 [2024-07-14 10:44:51.895787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.019 [2024-07-14 10:44:51.895817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.019 qpair failed and we were unable to recover it. 00:36:07.019 [2024-07-14 10:44:51.896010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.019 [2024-07-14 10:44:51.896041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.019 qpair failed and we were unable to recover it. 00:36:07.019 [2024-07-14 10:44:51.896245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.019 [2024-07-14 10:44:51.896278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.019 qpair failed and we were unable to recover it. 00:36:07.019 [2024-07-14 10:44:51.896402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.019 [2024-07-14 10:44:51.896433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.019 qpair failed and we were unable to recover it. 00:36:07.019 [2024-07-14 10:44:51.896614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.019 [2024-07-14 10:44:51.896645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.019 qpair failed and we were unable to recover it. 00:36:07.019 [2024-07-14 10:44:51.896768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.019 [2024-07-14 10:44:51.896799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.019 qpair failed and we were unable to recover it. 00:36:07.019 [2024-07-14 10:44:51.897012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.019 [2024-07-14 10:44:51.897043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.019 qpair failed and we were unable to recover it. 00:36:07.019 [2024-07-14 10:44:51.897173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.019 [2024-07-14 10:44:51.897204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.019 qpair failed and we were unable to recover it. 00:36:07.019 [2024-07-14 10:44:51.897430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.019 [2024-07-14 10:44:51.897462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.019 qpair failed and we were unable to recover it. 00:36:07.019 [2024-07-14 10:44:51.897665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.019 [2024-07-14 10:44:51.897696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.019 qpair failed and we were unable to recover it. 
00:36:07.019 [2024-07-14 10:44:51.897821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.019 [2024-07-14 10:44:51.897852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.019 qpair failed and we were unable to recover it. 00:36:07.019 [2024-07-14 10:44:51.898045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.019 [2024-07-14 10:44:51.898076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.019 qpair failed and we were unable to recover it. 00:36:07.019 [2024-07-14 10:44:51.898274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.019 [2024-07-14 10:44:51.898305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.019 qpair failed and we were unable to recover it. 00:36:07.019 [2024-07-14 10:44:51.898555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.019 [2024-07-14 10:44:51.898587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.019 qpair failed and we were unable to recover it. 00:36:07.019 [2024-07-14 10:44:51.898762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.019 [2024-07-14 10:44:51.898794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.019 qpair failed and we were unable to recover it. 00:36:07.019 [2024-07-14 10:44:51.899008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.019 [2024-07-14 10:44:51.899039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.019 qpair failed and we were unable to recover it. 00:36:07.019 [2024-07-14 10:44:51.899160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.019 [2024-07-14 10:44:51.899192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.019 qpair failed and we were unable to recover it. 00:36:07.019 [2024-07-14 10:44:51.899385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.019 [2024-07-14 10:44:51.899419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.019 qpair failed and we were unable to recover it. 00:36:07.019 [2024-07-14 10:44:51.899661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.019 [2024-07-14 10:44:51.899692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.019 qpair failed and we were unable to recover it. 00:36:07.019 [2024-07-14 10:44:51.899963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.019 [2024-07-14 10:44:51.899994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.019 qpair failed and we were unable to recover it. 
00:36:07.019 [2024-07-14 10:44:51.900190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.019 [2024-07-14 10:44:51.900222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.019 qpair failed and we were unable to recover it. 00:36:07.019 [2024-07-14 10:44:51.900350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.019 [2024-07-14 10:44:51.900381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.019 qpair failed and we were unable to recover it. 00:36:07.019 [2024-07-14 10:44:51.900598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.019 [2024-07-14 10:44:51.900629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.019 qpair failed and we were unable to recover it. 00:36:07.020 [2024-07-14 10:44:51.900897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.020 [2024-07-14 10:44:51.900928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.020 qpair failed and we were unable to recover it. 00:36:07.020 [2024-07-14 10:44:51.901174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.020 [2024-07-14 10:44:51.901205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.020 qpair failed and we were unable to recover it. 00:36:07.020 [2024-07-14 10:44:51.901338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.020 [2024-07-14 10:44:51.901370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.020 qpair failed and we were unable to recover it. 00:36:07.020 [2024-07-14 10:44:51.901562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.020 [2024-07-14 10:44:51.901592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.020 qpair failed and we were unable to recover it. 00:36:07.020 [2024-07-14 10:44:51.901774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.020 [2024-07-14 10:44:51.901804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.020 qpair failed and we were unable to recover it. 00:36:07.020 [2024-07-14 10:44:51.901923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.020 [2024-07-14 10:44:51.901953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.020 qpair failed and we were unable to recover it. 00:36:07.020 [2024-07-14 10:44:51.902116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.020 [2024-07-14 10:44:51.902147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.020 qpair failed and we were unable to recover it. 
00:36:07.020 [2024-07-14 10:44:51.902345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.020 [2024-07-14 10:44:51.902377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.020 qpair failed and we were unable to recover it. 00:36:07.020 [2024-07-14 10:44:51.902505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.020 [2024-07-14 10:44:51.902536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.020 qpair failed and we were unable to recover it. 00:36:07.020 [2024-07-14 10:44:51.902798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.020 [2024-07-14 10:44:51.902829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.020 qpair failed and we were unable to recover it. 00:36:07.020 [2024-07-14 10:44:51.903019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.020 [2024-07-14 10:44:51.903051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.020 qpair failed and we were unable to recover it. 00:36:07.020 [2024-07-14 10:44:51.903246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.020 [2024-07-14 10:44:51.903285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.020 qpair failed and we were unable to recover it. 00:36:07.020 [2024-07-14 10:44:51.903490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.020 [2024-07-14 10:44:51.903521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.020 qpair failed and we were unable to recover it. 00:36:07.020 [2024-07-14 10:44:51.903785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.020 [2024-07-14 10:44:51.903817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.020 qpair failed and we were unable to recover it. 00:36:07.020 [2024-07-14 10:44:51.904072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.020 [2024-07-14 10:44:51.904103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.020 qpair failed and we were unable to recover it. 00:36:07.020 [2024-07-14 10:44:51.904220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.020 [2024-07-14 10:44:51.904280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.020 qpair failed and we were unable to recover it. 00:36:07.020 [2024-07-14 10:44:51.904406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.020 [2024-07-14 10:44:51.904437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.020 qpair failed and we were unable to recover it. 
00:36:07.020 [2024-07-14 10:44:51.904619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.020 [2024-07-14 10:44:51.904650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.020 qpair failed and we were unable to recover it. 00:36:07.020 [2024-07-14 10:44:51.904917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.020 [2024-07-14 10:44:51.904948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.020 qpair failed and we were unable to recover it. 00:36:07.020 [2024-07-14 10:44:51.905127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.020 [2024-07-14 10:44:51.905158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.020 qpair failed and we were unable to recover it. 00:36:07.020 [2024-07-14 10:44:51.905352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.020 [2024-07-14 10:44:51.905384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.020 qpair failed and we were unable to recover it. 00:36:07.020 [2024-07-14 10:44:51.905528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.020 [2024-07-14 10:44:51.905559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.020 qpair failed and we were unable to recover it. 00:36:07.020 [2024-07-14 10:44:51.905754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.020 [2024-07-14 10:44:51.905786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.020 qpair failed and we were unable to recover it. 00:36:07.020 [2024-07-14 10:44:51.906028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.020 [2024-07-14 10:44:51.906059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.020 qpair failed and we were unable to recover it. 00:36:07.020 [2024-07-14 10:44:51.906352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.020 [2024-07-14 10:44:51.906390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.020 qpair failed and we were unable to recover it. 00:36:07.020 [2024-07-14 10:44:51.906568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.020 [2024-07-14 10:44:51.906600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.020 qpair failed and we were unable to recover it. 00:36:07.020 [2024-07-14 10:44:51.906778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.020 [2024-07-14 10:44:51.906810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.020 qpair failed and we were unable to recover it. 
00:36:07.020 [2024-07-14 10:44:51.907025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.020 [2024-07-14 10:44:51.907056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.020 qpair failed and we were unable to recover it. 00:36:07.020 [2024-07-14 10:44:51.907300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.020 [2024-07-14 10:44:51.907335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.020 qpair failed and we were unable to recover it. 00:36:07.020 [2024-07-14 10:44:51.907597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.020 [2024-07-14 10:44:51.907629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.020 qpair failed and we were unable to recover it. 00:36:07.020 [2024-07-14 10:44:51.907874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.020 [2024-07-14 10:44:51.907905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.020 qpair failed and we were unable to recover it. 00:36:07.020 [2024-07-14 10:44:51.908095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.020 [2024-07-14 10:44:51.908125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.020 qpair failed and we were unable to recover it. 00:36:07.020 [2024-07-14 10:44:51.908309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.020 [2024-07-14 10:44:51.908341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.020 qpair failed and we were unable to recover it. 00:36:07.020 [2024-07-14 10:44:51.908469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.020 [2024-07-14 10:44:51.908501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.020 qpair failed and we were unable to recover it. 00:36:07.020 [2024-07-14 10:44:51.908732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.020 [2024-07-14 10:44:51.908763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.020 qpair failed and we were unable to recover it. 00:36:07.020 [2024-07-14 10:44:51.908886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.020 [2024-07-14 10:44:51.908916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.020 qpair failed and we were unable to recover it. 00:36:07.020 [2024-07-14 10:44:51.909117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.020 [2024-07-14 10:44:51.909148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.020 qpair failed and we were unable to recover it. 
00:36:07.020 [2024-07-14 10:44:51.909392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.020 [2024-07-14 10:44:51.909424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.020 qpair failed and we were unable to recover it. 00:36:07.020 [2024-07-14 10:44:51.909639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.020 [2024-07-14 10:44:51.909670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.020 qpair failed and we were unable to recover it. 00:36:07.020 [2024-07-14 10:44:51.909859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.020 [2024-07-14 10:44:51.909890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.020 qpair failed and we were unable to recover it. 00:36:07.020 [2024-07-14 10:44:51.910067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.020 [2024-07-14 10:44:51.910098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.020 qpair failed and we were unable to recover it. 00:36:07.020 [2024-07-14 10:44:51.910338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.021 [2024-07-14 10:44:51.910370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.021 qpair failed and we were unable to recover it. 00:36:07.021 [2024-07-14 10:44:51.910557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.021 [2024-07-14 10:44:51.910588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.021 qpair failed and we were unable to recover it. 00:36:07.021 [2024-07-14 10:44:51.910694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.021 [2024-07-14 10:44:51.910723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.021 qpair failed and we were unable to recover it. 00:36:07.021 [2024-07-14 10:44:51.910967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.021 [2024-07-14 10:44:51.910998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.021 qpair failed and we were unable to recover it. 00:36:07.021 [2024-07-14 10:44:51.911128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.021 [2024-07-14 10:44:51.911158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.021 qpair failed and we were unable to recover it. 00:36:07.021 [2024-07-14 10:44:51.911343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.021 [2024-07-14 10:44:51.911378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.021 qpair failed and we were unable to recover it. 
00:36:07.021 [2024-07-14 10:44:51.911555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.021 [2024-07-14 10:44:51.911586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.021 qpair failed and we were unable to recover it. 00:36:07.021 [2024-07-14 10:44:51.911846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.021 [2024-07-14 10:44:51.911878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.021 qpair failed and we were unable to recover it. 00:36:07.021 [2024-07-14 10:44:51.912002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.021 [2024-07-14 10:44:51.912033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.021 qpair failed and we were unable to recover it. 00:36:07.021 [2024-07-14 10:44:51.912251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.021 [2024-07-14 10:44:51.912284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.021 qpair failed and we were unable to recover it. 00:36:07.021 [2024-07-14 10:44:51.912529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.021 [2024-07-14 10:44:51.912565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.021 qpair failed and we were unable to recover it. 00:36:07.021 [2024-07-14 10:44:51.912777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.021 [2024-07-14 10:44:51.912808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.021 qpair failed and we were unable to recover it. 00:36:07.021 [2024-07-14 10:44:51.912937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.021 [2024-07-14 10:44:51.912968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.021 qpair failed and we were unable to recover it. 00:36:07.021 [2024-07-14 10:44:51.913152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.021 [2024-07-14 10:44:51.913183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.021 qpair failed and we were unable to recover it. 00:36:07.021 [2024-07-14 10:44:51.913333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.021 [2024-07-14 10:44:51.913364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.021 qpair failed and we were unable to recover it. 00:36:07.021 [2024-07-14 10:44:51.913519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.021 [2024-07-14 10:44:51.913550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.021 qpair failed and we were unable to recover it. 
00:36:07.021 [2024-07-14 10:44:51.913674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.021 [2024-07-14 10:44:51.913705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.021 qpair failed and we were unable to recover it. 00:36:07.021 [2024-07-14 10:44:51.913947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.021 [2024-07-14 10:44:51.913978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.021 qpair failed and we were unable to recover it. 00:36:07.021 [2024-07-14 10:44:51.914126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.021 [2024-07-14 10:44:51.914157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.021 qpair failed and we were unable to recover it. 00:36:07.021 [2024-07-14 10:44:51.914344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.021 [2024-07-14 10:44:51.914377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.021 qpair failed and we were unable to recover it. 00:36:07.021 [2024-07-14 10:44:51.914501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.021 [2024-07-14 10:44:51.914532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.021 qpair failed and we were unable to recover it. 00:36:07.021 [2024-07-14 10:44:51.914705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.021 [2024-07-14 10:44:51.914736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.021 qpair failed and we were unable to recover it. 00:36:07.021 [2024-07-14 10:44:51.915002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.021 [2024-07-14 10:44:51.915033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.021 qpair failed and we were unable to recover it. 00:36:07.021 [2024-07-14 10:44:51.915282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.021 [2024-07-14 10:44:51.915318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.021 qpair failed and we were unable to recover it. 00:36:07.021 [2024-07-14 10:44:51.915454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.021 [2024-07-14 10:44:51.915485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.021 qpair failed and we were unable to recover it. 00:36:07.021 [2024-07-14 10:44:51.915741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.021 [2024-07-14 10:44:51.915771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.021 qpair failed and we were unable to recover it. 
00:36:07.021 [2024-07-14 10:44:51.915968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.021 [2024-07-14 10:44:51.915999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.021 qpair failed and we were unable to recover it. 00:36:07.021 [2024-07-14 10:44:51.916120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.021 [2024-07-14 10:44:51.916152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.021 qpair failed and we were unable to recover it. 00:36:07.021 [2024-07-14 10:44:51.916339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.021 [2024-07-14 10:44:51.916371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.021 qpair failed and we were unable to recover it. 00:36:07.021 [2024-07-14 10:44:51.916496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.021 [2024-07-14 10:44:51.916527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.021 qpair failed and we were unable to recover it. 00:36:07.021 [2024-07-14 10:44:51.916722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.021 [2024-07-14 10:44:51.916753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.021 qpair failed and we were unable to recover it. 00:36:07.021 [2024-07-14 10:44:51.916928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.021 [2024-07-14 10:44:51.916959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.021 qpair failed and we were unable to recover it. 00:36:07.021 [2024-07-14 10:44:51.917140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.021 [2024-07-14 10:44:51.917171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.021 qpair failed and we were unable to recover it. 00:36:07.021 [2024-07-14 10:44:51.917369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.021 [2024-07-14 10:44:51.917401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.021 qpair failed and we were unable to recover it. 00:36:07.021 [2024-07-14 10:44:51.917526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.021 [2024-07-14 10:44:51.917557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.021 qpair failed and we were unable to recover it. 00:36:07.021 [2024-07-14 10:44:51.917665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.021 [2024-07-14 10:44:51.917695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.021 qpair failed and we were unable to recover it. 
00:36:07.021 [2024-07-14 10:44:51.917942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.021 [2024-07-14 10:44:51.917974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.021 qpair failed and we were unable to recover it. 00:36:07.021 [2024-07-14 10:44:51.918174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.021 [2024-07-14 10:44:51.918210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.021 qpair failed and we were unable to recover it. 00:36:07.021 [2024-07-14 10:44:51.918391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.021 [2024-07-14 10:44:51.918422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.021 qpair failed and we were unable to recover it. 00:36:07.021 [2024-07-14 10:44:51.918523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.021 [2024-07-14 10:44:51.918554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.021 qpair failed and we were unable to recover it. 00:36:07.021 [2024-07-14 10:44:51.918796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.021 [2024-07-14 10:44:51.918826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.021 qpair failed and we were unable to recover it. 00:36:07.021 [2024-07-14 10:44:51.919025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.021 [2024-07-14 10:44:51.919057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.022 qpair failed and we were unable to recover it. 00:36:07.022 [2024-07-14 10:44:51.919251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.022 [2024-07-14 10:44:51.919289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.022 qpair failed and we were unable to recover it. 00:36:07.022 [2024-07-14 10:44:51.919495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.022 [2024-07-14 10:44:51.919526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.022 qpair failed and we were unable to recover it. 00:36:07.022 [2024-07-14 10:44:51.919713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.022 [2024-07-14 10:44:51.919744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.022 qpair failed and we were unable to recover it. 00:36:07.022 [2024-07-14 10:44:51.919920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.022 [2024-07-14 10:44:51.919951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.022 qpair failed and we were unable to recover it. 
00:36:07.022 [2024-07-14 10:44:51.920074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.022 [2024-07-14 10:44:51.920105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.022 qpair failed and we were unable to recover it. 00:36:07.022 [2024-07-14 10:44:51.920281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.022 [2024-07-14 10:44:51.920313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.022 qpair failed and we were unable to recover it. 00:36:07.022 [2024-07-14 10:44:51.920595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.022 [2024-07-14 10:44:51.920625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.022 qpair failed and we were unable to recover it. 00:36:07.022 [2024-07-14 10:44:51.920812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.022 [2024-07-14 10:44:51.920843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.022 qpair failed and we were unable to recover it. 00:36:07.022 [2024-07-14 10:44:51.921133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.022 [2024-07-14 10:44:51.921165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.022 qpair failed and we were unable to recover it. 00:36:07.022 [2024-07-14 10:44:51.921371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.022 [2024-07-14 10:44:51.921404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.022 qpair failed and we were unable to recover it. 00:36:07.022 [2024-07-14 10:44:51.921587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.022 [2024-07-14 10:44:51.921617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.022 qpair failed and we were unable to recover it. 00:36:07.022 [2024-07-14 10:44:51.921757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.022 [2024-07-14 10:44:51.921787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.022 qpair failed and we were unable to recover it. 00:36:07.022 [2024-07-14 10:44:51.922049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.022 [2024-07-14 10:44:51.922080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.022 qpair failed and we were unable to recover it. 00:36:07.022 [2024-07-14 10:44:51.922347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.022 [2024-07-14 10:44:51.922379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.022 qpair failed and we were unable to recover it. 
00:36:07.022 [2024-07-14 10:44:51.922573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.022 [2024-07-14 10:44:51.922603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.022 qpair failed and we were unable to recover it. 00:36:07.022 [2024-07-14 10:44:51.922717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.022 [2024-07-14 10:44:51.922748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.022 qpair failed and we were unable to recover it. 00:36:07.022 [2024-07-14 10:44:51.923012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.022 [2024-07-14 10:44:51.923044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.022 qpair failed and we were unable to recover it. 00:36:07.022 [2024-07-14 10:44:51.923262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.022 [2024-07-14 10:44:51.923299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.022 qpair failed and we were unable to recover it. 00:36:07.022 [2024-07-14 10:44:51.923562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.022 [2024-07-14 10:44:51.923592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.022 qpair failed and we were unable to recover it. 00:36:07.022 [2024-07-14 10:44:51.923780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.022 [2024-07-14 10:44:51.923812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.022 qpair failed and we were unable to recover it. 00:36:07.022 [2024-07-14 10:44:51.923984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.022 [2024-07-14 10:44:51.924014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.022 qpair failed and we were unable to recover it. 00:36:07.022 [2024-07-14 10:44:51.924135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.022 [2024-07-14 10:44:51.924167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.022 qpair failed and we were unable to recover it. 00:36:07.022 [2024-07-14 10:44:51.924378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.022 [2024-07-14 10:44:51.924409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.022 qpair failed and we were unable to recover it. 00:36:07.022 [2024-07-14 10:44:51.924551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.022 [2024-07-14 10:44:51.924582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.022 qpair failed and we were unable to recover it. 
00:36:07.022 [2024-07-14 10:44:51.924704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.022 [2024-07-14 10:44:51.924736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.022 qpair failed and we were unable to recover it. 00:36:07.022 [2024-07-14 10:44:51.924945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.022 [2024-07-14 10:44:51.924975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.022 qpair failed and we were unable to recover it. 00:36:07.022 [2024-07-14 10:44:51.925083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.022 [2024-07-14 10:44:51.925114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.022 qpair failed and we were unable to recover it. 00:36:07.022 [2024-07-14 10:44:51.925382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.022 [2024-07-14 10:44:51.925414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.022 qpair failed and we were unable to recover it. 00:36:07.022 [2024-07-14 10:44:51.925680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.022 [2024-07-14 10:44:51.925711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.022 qpair failed and we were unable to recover it. 00:36:07.022 [2024-07-14 10:44:51.925833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.022 [2024-07-14 10:44:51.925864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.022 qpair failed and we were unable to recover it. 00:36:07.022 [2024-07-14 10:44:51.925998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.022 [2024-07-14 10:44:51.926029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.022 qpair failed and we were unable to recover it. 00:36:07.022 [2024-07-14 10:44:51.926223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.022 [2024-07-14 10:44:51.926262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.022 qpair failed and we were unable to recover it. 00:36:07.022 [2024-07-14 10:44:51.926458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.022 [2024-07-14 10:44:51.926488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.022 qpair failed and we were unable to recover it. 00:36:07.022 [2024-07-14 10:44:51.926752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.022 [2024-07-14 10:44:51.926784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.022 qpair failed and we were unable to recover it. 
00:36:07.022 [2024-07-14 10:44:51.926975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.022 [2024-07-14 10:44:51.927006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.022 qpair failed and we were unable to recover it. 00:36:07.022 [2024-07-14 10:44:51.927204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.023 [2024-07-14 10:44:51.927251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.023 qpair failed and we were unable to recover it. 00:36:07.023 [2024-07-14 10:44:51.927461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.023 [2024-07-14 10:44:51.927494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.023 qpair failed and we were unable to recover it. 00:36:07.023 [2024-07-14 10:44:51.927734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.023 [2024-07-14 10:44:51.927765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.023 qpair failed and we were unable to recover it. 00:36:07.023 [2024-07-14 10:44:51.928048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.023 [2024-07-14 10:44:51.928079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.023 qpair failed and we were unable to recover it. 00:36:07.023 [2024-07-14 10:44:51.928271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.023 [2024-07-14 10:44:51.928304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.023 qpair failed and we were unable to recover it. 00:36:07.023 [2024-07-14 10:44:51.928420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.023 [2024-07-14 10:44:51.928452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.023 qpair failed and we were unable to recover it. 00:36:07.023 [2024-07-14 10:44:51.928636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.023 [2024-07-14 10:44:51.928668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.023 qpair failed and we were unable to recover it. 00:36:07.023 [2024-07-14 10:44:51.928854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.023 [2024-07-14 10:44:51.928884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.023 qpair failed and we were unable to recover it. 00:36:07.023 [2024-07-14 10:44:51.929061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.023 [2024-07-14 10:44:51.929092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.023 qpair failed and we were unable to recover it. 
00:36:07.023 [2024-07-14 10:44:51.929295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.023 [2024-07-14 10:44:51.929326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.023 qpair failed and we were unable to recover it. 00:36:07.023 [2024-07-14 10:44:51.929502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.023 [2024-07-14 10:44:51.929533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.023 qpair failed and we were unable to recover it. 00:36:07.023 [2024-07-14 10:44:51.929737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.023 [2024-07-14 10:44:51.929768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.023 qpair failed and we were unable to recover it. 00:36:07.023 [2024-07-14 10:44:51.929949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.023 [2024-07-14 10:44:51.929979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.023 qpair failed and we were unable to recover it. 00:36:07.023 [2024-07-14 10:44:51.930161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.023 [2024-07-14 10:44:51.930192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.023 qpair failed and we were unable to recover it. 00:36:07.023 [2024-07-14 10:44:51.930313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.023 [2024-07-14 10:44:51.930345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.023 qpair failed and we were unable to recover it. 00:36:07.023 [2024-07-14 10:44:51.930593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.023 [2024-07-14 10:44:51.930624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.023 qpair failed and we were unable to recover it. 00:36:07.023 [2024-07-14 10:44:51.930734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.023 [2024-07-14 10:44:51.930765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.023 qpair failed and we were unable to recover it. 00:36:07.023 [2024-07-14 10:44:51.930869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.023 [2024-07-14 10:44:51.930900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.023 qpair failed and we were unable to recover it. 00:36:07.023 [2024-07-14 10:44:51.931092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.023 [2024-07-14 10:44:51.931122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.023 qpair failed and we were unable to recover it. 
00:36:07.023 [2024-07-14 10:44:51.931258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.023 [2024-07-14 10:44:51.931297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.023 qpair failed and we were unable to recover it. 00:36:07.023 [2024-07-14 10:44:51.931419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.023 [2024-07-14 10:44:51.931449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.023 qpair failed and we were unable to recover it. 00:36:07.023 [2024-07-14 10:44:51.931627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.023 [2024-07-14 10:44:51.931658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.023 qpair failed and we were unable to recover it. 00:36:07.023 [2024-07-14 10:44:51.931875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.023 [2024-07-14 10:44:51.931906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.023 qpair failed and we were unable to recover it. 00:36:07.023 [2024-07-14 10:44:51.932110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.023 [2024-07-14 10:44:51.932141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.023 qpair failed and we were unable to recover it. 00:36:07.023 [2024-07-14 10:44:51.932408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.023 [2024-07-14 10:44:51.932440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.023 qpair failed and we were unable to recover it. 00:36:07.023 [2024-07-14 10:44:51.932560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.023 [2024-07-14 10:44:51.932592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.023 qpair failed and we were unable to recover it. 00:36:07.023 [2024-07-14 10:44:51.932714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.023 [2024-07-14 10:44:51.932744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.023 qpair failed and we were unable to recover it. 00:36:07.023 [2024-07-14 10:44:51.932861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.023 [2024-07-14 10:44:51.932892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.023 qpair failed and we were unable to recover it. 00:36:07.023 [2024-07-14 10:44:51.933024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.023 [2024-07-14 10:44:51.933060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.023 qpair failed and we were unable to recover it. 
00:36:07.023 [2024-07-14 10:44:51.933245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.023 [2024-07-14 10:44:51.933276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.023 qpair failed and we were unable to recover it. 00:36:07.023 [2024-07-14 10:44:51.933413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.023 [2024-07-14 10:44:51.933443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.023 qpair failed and we were unable to recover it. 00:36:07.023 [2024-07-14 10:44:51.933620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.023 [2024-07-14 10:44:51.933651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.023 qpair failed and we were unable to recover it. 00:36:07.023 [2024-07-14 10:44:51.933827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.023 [2024-07-14 10:44:51.933858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.023 qpair failed and we were unable to recover it. 00:36:07.023 [2024-07-14 10:44:51.933977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.023 [2024-07-14 10:44:51.934008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.023 qpair failed and we were unable to recover it. 00:36:07.023 [2024-07-14 10:44:51.934176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.023 [2024-07-14 10:44:51.934207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.023 qpair failed and we were unable to recover it. 00:36:07.023 [2024-07-14 10:44:51.934407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.023 [2024-07-14 10:44:51.934439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.023 qpair failed and we were unable to recover it. 00:36:07.023 [2024-07-14 10:44:51.934637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.023 [2024-07-14 10:44:51.934668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.023 qpair failed and we were unable to recover it. 00:36:07.023 [2024-07-14 10:44:51.934861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.023 [2024-07-14 10:44:51.934892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.023 qpair failed and we were unable to recover it. 00:36:07.023 [2024-07-14 10:44:51.935136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.023 [2024-07-14 10:44:51.935167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.023 qpair failed and we were unable to recover it. 
00:36:07.023 [2024-07-14 10:44:51.935281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.023 [2024-07-14 10:44:51.935317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.023 qpair failed and we were unable to recover it. 00:36:07.023 [2024-07-14 10:44:51.935559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.023 [2024-07-14 10:44:51.935589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.023 qpair failed and we were unable to recover it. 00:36:07.023 [2024-07-14 10:44:51.935833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.023 [2024-07-14 10:44:51.935864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.023 qpair failed and we were unable to recover it. 00:36:07.024 [2024-07-14 10:44:51.936121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.024 [2024-07-14 10:44:51.936153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.024 qpair failed and we were unable to recover it. 00:36:07.024 [2024-07-14 10:44:51.936317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.024 [2024-07-14 10:44:51.936350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.024 qpair failed and we were unable to recover it. 00:36:07.024 [2024-07-14 10:44:51.936618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.024 [2024-07-14 10:44:51.936649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.024 qpair failed and we were unable to recover it. 00:36:07.024 [2024-07-14 10:44:51.936846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.024 [2024-07-14 10:44:51.936877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.024 qpair failed and we were unable to recover it. 00:36:07.024 [2024-07-14 10:44:51.937067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.024 [2024-07-14 10:44:51.937099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.024 qpair failed and we were unable to recover it. 00:36:07.024 [2024-07-14 10:44:51.937286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.024 [2024-07-14 10:44:51.937319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.024 qpair failed and we were unable to recover it. 00:36:07.024 [2024-07-14 10:44:51.937522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.024 [2024-07-14 10:44:51.937553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.024 qpair failed and we were unable to recover it. 
00:36:07.024 [2024-07-14 10:44:51.937807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.024 [2024-07-14 10:44:51.937837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.024 qpair failed and we were unable to recover it. 00:36:07.024 [2024-07-14 10:44:51.938114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.024 [2024-07-14 10:44:51.938145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.024 qpair failed and we were unable to recover it. 00:36:07.024 [2024-07-14 10:44:51.938286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.024 [2024-07-14 10:44:51.938319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.024 qpair failed and we were unable to recover it. 00:36:07.024 [2024-07-14 10:44:51.938592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.024 [2024-07-14 10:44:51.938623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.024 qpair failed and we were unable to recover it. 00:36:07.024 [2024-07-14 10:44:51.938832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.024 [2024-07-14 10:44:51.938862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.024 qpair failed and we were unable to recover it. 00:36:07.024 [2024-07-14 10:44:51.939001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.024 [2024-07-14 10:44:51.939032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.024 qpair failed and we were unable to recover it. 00:36:07.024 [2024-07-14 10:44:51.939276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.024 [2024-07-14 10:44:51.939316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.024 qpair failed and we were unable to recover it. 00:36:07.024 [2024-07-14 10:44:51.939513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.024 [2024-07-14 10:44:51.939544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.024 qpair failed and we were unable to recover it. 00:36:07.024 [2024-07-14 10:44:51.939679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.024 [2024-07-14 10:44:51.939711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.024 qpair failed and we were unable to recover it. 00:36:07.024 [2024-07-14 10:44:51.939894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.024 [2024-07-14 10:44:51.939925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.024 qpair failed and we were unable to recover it. 
00:36:07.024 [2024-07-14 10:44:51.940107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.024 [2024-07-14 10:44:51.940137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.024 qpair failed and we were unable to recover it. 00:36:07.024 [2024-07-14 10:44:51.940262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.024 [2024-07-14 10:44:51.940295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.024 qpair failed and we were unable to recover it. 00:36:07.024 [2024-07-14 10:44:51.940539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.024 [2024-07-14 10:44:51.940571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.024 qpair failed and we were unable to recover it. 00:36:07.024 [2024-07-14 10:44:51.940779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.024 [2024-07-14 10:44:51.940811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.024 qpair failed and we were unable to recover it. 00:36:07.024 [2024-07-14 10:44:51.940986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.024 [2024-07-14 10:44:51.941017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.024 qpair failed and we were unable to recover it. 00:36:07.024 [2024-07-14 10:44:51.941263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.024 [2024-07-14 10:44:51.941295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.024 qpair failed and we were unable to recover it. 00:36:07.024 [2024-07-14 10:44:51.941541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.024 [2024-07-14 10:44:51.941575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.024 qpair failed and we were unable to recover it. 00:36:07.024 [2024-07-14 10:44:51.941700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.024 [2024-07-14 10:44:51.941732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.024 qpair failed and we were unable to recover it. 00:36:07.024 [2024-07-14 10:44:51.941973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.024 [2024-07-14 10:44:51.942004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.024 qpair failed and we were unable to recover it. 00:36:07.024 [2024-07-14 10:44:51.942258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.024 [2024-07-14 10:44:51.942290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.024 qpair failed and we were unable to recover it. 
00:36:07.024 [2024-07-14 10:44:51.942506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.024 [2024-07-14 10:44:51.942537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.024 qpair failed and we were unable to recover it. 00:36:07.024 [2024-07-14 10:44:51.942785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.024 [2024-07-14 10:44:51.942816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.024 qpair failed and we were unable to recover it. 00:36:07.024 [2024-07-14 10:44:51.942934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.024 [2024-07-14 10:44:51.942966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.024 qpair failed and we were unable to recover it. 00:36:07.024 [2024-07-14 10:44:51.943170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.024 [2024-07-14 10:44:51.943202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.024 qpair failed and we were unable to recover it. 00:36:07.024 [2024-07-14 10:44:51.943418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.024 [2024-07-14 10:44:51.943452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.024 qpair failed and we were unable to recover it. 00:36:07.024 [2024-07-14 10:44:51.943641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.024 [2024-07-14 10:44:51.943672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.024 qpair failed and we were unable to recover it. 00:36:07.024 [2024-07-14 10:44:51.943883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.024 [2024-07-14 10:44:51.943914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.024 qpair failed and we were unable to recover it. 00:36:07.024 [2024-07-14 10:44:51.944095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.024 [2024-07-14 10:44:51.944126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.024 qpair failed and we were unable to recover it. 00:36:07.024 [2024-07-14 10:44:51.944267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.024 [2024-07-14 10:44:51.944300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.024 qpair failed and we were unable to recover it. 00:36:07.024 [2024-07-14 10:44:51.944496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.024 [2024-07-14 10:44:51.944527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.024 qpair failed and we were unable to recover it. 
00:36:07.024 [2024-07-14 10:44:51.944660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.024 [2024-07-14 10:44:51.944690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.024 qpair failed and we were unable to recover it. 00:36:07.024 [2024-07-14 10:44:51.944938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.024 [2024-07-14 10:44:51.944968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.024 qpair failed and we were unable to recover it. 00:36:07.024 [2024-07-14 10:44:51.945145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.024 [2024-07-14 10:44:51.945176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.024 qpair failed and we were unable to recover it. 00:36:07.024 [2024-07-14 10:44:51.945310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.024 [2024-07-14 10:44:51.945342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.024 qpair failed and we were unable to recover it. 00:36:07.024 [2024-07-14 10:44:51.945555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.025 [2024-07-14 10:44:51.945586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.025 qpair failed and we were unable to recover it. 00:36:07.025 [2024-07-14 10:44:51.945703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.025 [2024-07-14 10:44:51.945734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.025 qpair failed and we were unable to recover it. 00:36:07.025 [2024-07-14 10:44:51.945938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.025 [2024-07-14 10:44:51.945968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.025 qpair failed and we were unable to recover it. 00:36:07.025 [2024-07-14 10:44:51.946094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.025 [2024-07-14 10:44:51.946124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.025 qpair failed and we were unable to recover it. 00:36:07.025 [2024-07-14 10:44:51.946365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.025 [2024-07-14 10:44:51.946397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.025 qpair failed and we were unable to recover it. 00:36:07.025 [2024-07-14 10:44:51.946525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.025 [2024-07-14 10:44:51.946556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.025 qpair failed and we were unable to recover it. 
00:36:07.025 [2024-07-14 10:44:51.946758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.025 [2024-07-14 10:44:51.946789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.025 qpair failed and we were unable to recover it. 00:36:07.025 [2024-07-14 10:44:51.946976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.025 [2024-07-14 10:44:51.947007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.025 qpair failed and we were unable to recover it. 00:36:07.025 [2024-07-14 10:44:51.947242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.025 [2024-07-14 10:44:51.947282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.025 qpair failed and we were unable to recover it. 00:36:07.025 [2024-07-14 10:44:51.947557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.025 [2024-07-14 10:44:51.947587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.025 qpair failed and we were unable to recover it. 00:36:07.025 [2024-07-14 10:44:51.947792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.301 [2024-07-14 10:44:51.947823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.301 qpair failed and we were unable to recover it. 00:36:07.301 [2024-07-14 10:44:51.948051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.301 [2024-07-14 10:44:51.948084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.301 qpair failed and we were unable to recover it. 00:36:07.301 [2024-07-14 10:44:51.948283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.301 [2024-07-14 10:44:51.948315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.301 qpair failed and we were unable to recover it. 00:36:07.301 [2024-07-14 10:44:51.948542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.301 [2024-07-14 10:44:51.948573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.301 qpair failed and we were unable to recover it. 00:36:07.301 [2024-07-14 10:44:51.948783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.301 [2024-07-14 10:44:51.948813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.301 qpair failed and we were unable to recover it. 00:36:07.301 [2024-07-14 10:44:51.949001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.301 [2024-07-14 10:44:51.949032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.301 qpair failed and we were unable to recover it. 
00:36:07.301 [2024-07-14 10:44:51.949241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.301 [2024-07-14 10:44:51.949272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.301 qpair failed and we were unable to recover it. 00:36:07.301 [2024-07-14 10:44:51.949535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.301 [2024-07-14 10:44:51.949566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.301 qpair failed and we were unable to recover it. 00:36:07.301 [2024-07-14 10:44:51.949754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.301 [2024-07-14 10:44:51.949784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.301 qpair failed and we were unable to recover it. 00:36:07.301 [2024-07-14 10:44:51.949958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.301 [2024-07-14 10:44:51.949989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.301 qpair failed and we were unable to recover it. 00:36:07.301 [2024-07-14 10:44:51.950249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.301 [2024-07-14 10:44:51.950280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.301 qpair failed and we were unable to recover it. 00:36:07.301 [2024-07-14 10:44:51.950482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.301 [2024-07-14 10:44:51.950512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.302 qpair failed and we were unable to recover it. 00:36:07.302 [2024-07-14 10:44:51.950730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.302 [2024-07-14 10:44:51.950762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.302 qpair failed and we were unable to recover it. 00:36:07.302 [2024-07-14 10:44:51.950935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.302 [2024-07-14 10:44:51.950966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.302 qpair failed and we were unable to recover it. 00:36:07.302 [2024-07-14 10:44:51.951152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.302 [2024-07-14 10:44:51.951183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.302 qpair failed and we were unable to recover it. 00:36:07.302 [2024-07-14 10:44:51.951401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.302 [2024-07-14 10:44:51.951436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.302 qpair failed and we were unable to recover it. 
00:36:07.302 [2024-07-14 10:44:51.951586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.302 [2024-07-14 10:44:51.951615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.302 qpair failed and we were unable to recover it. 00:36:07.302 [2024-07-14 10:44:51.951862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.302 [2024-07-14 10:44:51.951893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.302 qpair failed and we were unable to recover it. 00:36:07.302 [2024-07-14 10:44:51.952028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.302 [2024-07-14 10:44:51.952058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.302 qpair failed and we were unable to recover it. 00:36:07.302 [2024-07-14 10:44:51.952267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.302 [2024-07-14 10:44:51.952300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.302 qpair failed and we were unable to recover it. 00:36:07.302 [2024-07-14 10:44:51.952487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.302 [2024-07-14 10:44:51.952518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.302 qpair failed and we were unable to recover it. 00:36:07.302 [2024-07-14 10:44:51.952710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.302 [2024-07-14 10:44:51.952740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.302 qpair failed and we were unable to recover it. 00:36:07.302 [2024-07-14 10:44:51.952915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.302 [2024-07-14 10:44:51.952945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.302 qpair failed and we were unable to recover it. 00:36:07.302 [2024-07-14 10:44:51.953139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.302 [2024-07-14 10:44:51.953168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.302 qpair failed and we were unable to recover it. 00:36:07.302 [2024-07-14 10:44:51.953318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.302 [2024-07-14 10:44:51.953348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.302 qpair failed and we were unable to recover it. 00:36:07.302 [2024-07-14 10:44:51.953486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.302 [2024-07-14 10:44:51.953514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.302 qpair failed and we were unable to recover it. 
00:36:07.302 [2024-07-14 10:44:51.953758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.302 [2024-07-14 10:44:51.953789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.302 qpair failed and we were unable to recover it. 00:36:07.302 [2024-07-14 10:44:51.954005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.302 [2024-07-14 10:44:51.954036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.302 qpair failed and we were unable to recover it. 00:36:07.302 [2024-07-14 10:44:51.954168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.302 [2024-07-14 10:44:51.954198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.302 qpair failed and we were unable to recover it. 00:36:07.302 [2024-07-14 10:44:51.954327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.302 [2024-07-14 10:44:51.954358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.302 qpair failed and we were unable to recover it. 00:36:07.302 [2024-07-14 10:44:51.954469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.302 [2024-07-14 10:44:51.954506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.302 qpair failed and we were unable to recover it. 00:36:07.302 [2024-07-14 10:44:51.954714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.302 [2024-07-14 10:44:51.954745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.302 qpair failed and we were unable to recover it. 00:36:07.302 [2024-07-14 10:44:51.954891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.302 [2024-07-14 10:44:51.954921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.302 qpair failed and we were unable to recover it. 00:36:07.302 [2024-07-14 10:44:51.955073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.302 [2024-07-14 10:44:51.955105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.302 qpair failed and we were unable to recover it. 00:36:07.302 [2024-07-14 10:44:51.955283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.302 [2024-07-14 10:44:51.955318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.302 qpair failed and we were unable to recover it. 00:36:07.302 [2024-07-14 10:44:51.955503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.302 [2024-07-14 10:44:51.955533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.302 qpair failed and we were unable to recover it. 
00:36:07.302 [2024-07-14 10:44:51.955731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.302 [2024-07-14 10:44:51.955762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.302 qpair failed and we were unable to recover it. 00:36:07.302 [2024-07-14 10:44:51.956028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.302 [2024-07-14 10:44:51.956059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.302 qpair failed and we were unable to recover it. 00:36:07.302 [2024-07-14 10:44:51.956194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.302 [2024-07-14 10:44:51.956237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.302 qpair failed and we were unable to recover it. 00:36:07.302 [2024-07-14 10:44:51.956443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.302 [2024-07-14 10:44:51.956473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.302 qpair failed and we were unable to recover it. 00:36:07.302 [2024-07-14 10:44:51.956725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.302 [2024-07-14 10:44:51.956756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.302 qpair failed and we were unable to recover it. 00:36:07.302 [2024-07-14 10:44:51.956876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.302 [2024-07-14 10:44:51.956907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.302 qpair failed and we were unable to recover it. 00:36:07.302 [2024-07-14 10:44:51.957087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.302 [2024-07-14 10:44:51.957119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.302 qpair failed and we were unable to recover it. 00:36:07.302 [2024-07-14 10:44:51.957319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.302 [2024-07-14 10:44:51.957351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.302 qpair failed and we were unable to recover it. 00:36:07.302 [2024-07-14 10:44:51.957573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.302 [2024-07-14 10:44:51.957604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.302 qpair failed and we were unable to recover it. 00:36:07.302 [2024-07-14 10:44:51.957821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.302 [2024-07-14 10:44:51.957852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.302 qpair failed and we were unable to recover it. 
00:36:07.302 [2024-07-14 10:44:51.958036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.302 [2024-07-14 10:44:51.958067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.302 qpair failed and we were unable to recover it. 00:36:07.302 [2024-07-14 10:44:51.958253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.302 [2024-07-14 10:44:51.958285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.302 qpair failed and we were unable to recover it. 00:36:07.302 [2024-07-14 10:44:51.958428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.302 [2024-07-14 10:44:51.958459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.302 qpair failed and we were unable to recover it. 00:36:07.302 [2024-07-14 10:44:51.958586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.302 [2024-07-14 10:44:51.958618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.302 qpair failed and we were unable to recover it. 00:36:07.302 [2024-07-14 10:44:51.958746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.302 [2024-07-14 10:44:51.958777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.302 qpair failed and we were unable to recover it. 00:36:07.302 [2024-07-14 10:44:51.958906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.302 [2024-07-14 10:44:51.958938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.302 qpair failed and we were unable to recover it. 00:36:07.302 [2024-07-14 10:44:51.959059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.302 [2024-07-14 10:44:51.959091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.302 qpair failed and we were unable to recover it. 00:36:07.303 [2024-07-14 10:44:51.959221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.303 [2024-07-14 10:44:51.959273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.303 qpair failed and we were unable to recover it. 00:36:07.303 [2024-07-14 10:44:51.959462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.303 [2024-07-14 10:44:51.959493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.303 qpair failed and we were unable to recover it. 00:36:07.303 [2024-07-14 10:44:51.959708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.303 [2024-07-14 10:44:51.959739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.303 qpair failed and we were unable to recover it. 
00:36:07.303 [2024-07-14 10:44:51.959939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.303 [2024-07-14 10:44:51.959970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.303 qpair failed and we were unable to recover it. 00:36:07.303 [2024-07-14 10:44:51.960113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.303 [2024-07-14 10:44:51.960150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.303 qpair failed and we were unable to recover it. 00:36:07.303 [2024-07-14 10:44:51.960261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.303 [2024-07-14 10:44:51.960293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.303 qpair failed and we were unable to recover it. 00:36:07.303 [2024-07-14 10:44:51.960442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.303 [2024-07-14 10:44:51.960473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.303 qpair failed and we were unable to recover it. 00:36:07.303 [2024-07-14 10:44:51.960580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.303 [2024-07-14 10:44:51.960611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.303 qpair failed and we were unable to recover it. 00:36:07.303 [2024-07-14 10:44:51.960861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.303 [2024-07-14 10:44:51.960892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.303 qpair failed and we were unable to recover it. 00:36:07.303 [2024-07-14 10:44:51.961027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.303 [2024-07-14 10:44:51.961058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.303 qpair failed and we were unable to recover it. 00:36:07.303 [2024-07-14 10:44:51.961252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.303 [2024-07-14 10:44:51.961286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.303 qpair failed and we were unable to recover it. 00:36:07.303 [2024-07-14 10:44:51.961467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.303 [2024-07-14 10:44:51.961497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.303 qpair failed and we were unable to recover it. 00:36:07.303 [2024-07-14 10:44:51.961632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.303 [2024-07-14 10:44:51.961663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.303 qpair failed and we were unable to recover it. 
00:36:07.303 [2024-07-14 10:44:51.961796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.303 [2024-07-14 10:44:51.961827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.303 qpair failed and we were unable to recover it. 00:36:07.303 [2024-07-14 10:44:51.961957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.303 [2024-07-14 10:44:51.961988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.303 qpair failed and we were unable to recover it. 00:36:07.303 [2024-07-14 10:44:51.962127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.303 [2024-07-14 10:44:51.962158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.303 qpair failed and we were unable to recover it. 00:36:07.303 [2024-07-14 10:44:51.962282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.303 [2024-07-14 10:44:51.962313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.303 qpair failed and we were unable to recover it. 00:36:07.303 [2024-07-14 10:44:51.962425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.303 [2024-07-14 10:44:51.962457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.303 qpair failed and we were unable to recover it. 00:36:07.303 [2024-07-14 10:44:51.962653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.303 [2024-07-14 10:44:51.962683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.303 qpair failed and we were unable to recover it. 00:36:07.303 [2024-07-14 10:44:51.962864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.303 [2024-07-14 10:44:51.962897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.303 qpair failed and we were unable to recover it. 00:36:07.303 [2024-07-14 10:44:51.963037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.303 [2024-07-14 10:44:51.963069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.303 qpair failed and we were unable to recover it. 00:36:07.303 [2024-07-14 10:44:51.963193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.303 [2024-07-14 10:44:51.963236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.303 qpair failed and we were unable to recover it. 00:36:07.303 [2024-07-14 10:44:51.963435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.303 [2024-07-14 10:44:51.963468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.303 qpair failed and we were unable to recover it. 
00:36:07.303 [2024-07-14 10:44:51.963589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.303 [2024-07-14 10:44:51.963620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.303 qpair failed and we were unable to recover it. 00:36:07.303 [2024-07-14 10:44:51.963795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.303 [2024-07-14 10:44:51.963826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.303 qpair failed and we were unable to recover it. 00:36:07.303 [2024-07-14 10:44:51.964021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.303 [2024-07-14 10:44:51.964052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.303 qpair failed and we were unable to recover it. 00:36:07.303 [2024-07-14 10:44:51.964253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.303 [2024-07-14 10:44:51.964285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.303 qpair failed and we were unable to recover it. 00:36:07.303 [2024-07-14 10:44:51.964429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.303 [2024-07-14 10:44:51.964461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.303 qpair failed and we were unable to recover it. 00:36:07.303 [2024-07-14 10:44:51.964705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.303 [2024-07-14 10:44:51.964737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.303 qpair failed and we were unable to recover it. 00:36:07.303 [2024-07-14 10:44:51.964846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.303 [2024-07-14 10:44:51.964877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.303 qpair failed and we were unable to recover it. 00:36:07.303 [2024-07-14 10:44:51.964996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.303 [2024-07-14 10:44:51.965027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.303 qpair failed and we were unable to recover it. 00:36:07.303 [2024-07-14 10:44:51.965271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.303 [2024-07-14 10:44:51.965309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.303 qpair failed and we were unable to recover it. 00:36:07.303 [2024-07-14 10:44:51.965443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.303 [2024-07-14 10:44:51.965475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.303 qpair failed and we were unable to recover it. 
00:36:07.303 [2024-07-14 10:44:51.965659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.303 [2024-07-14 10:44:51.965691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.303 qpair failed and we were unable to recover it. 00:36:07.303 [2024-07-14 10:44:51.965870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.303 [2024-07-14 10:44:51.965900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.303 qpair failed and we were unable to recover it. 00:36:07.303 [2024-07-14 10:44:51.966096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.303 [2024-07-14 10:44:51.966127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.303 qpair failed and we were unable to recover it. 00:36:07.303 [2024-07-14 10:44:51.966258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.303 [2024-07-14 10:44:51.966289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.303 qpair failed and we were unable to recover it. 00:36:07.303 [2024-07-14 10:44:51.966564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.303 [2024-07-14 10:44:51.966596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.303 qpair failed and we were unable to recover it. 00:36:07.303 [2024-07-14 10:44:51.966720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.303 [2024-07-14 10:44:51.966751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.303 qpair failed and we were unable to recover it. 00:36:07.303 [2024-07-14 10:44:51.966950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.303 [2024-07-14 10:44:51.966981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.303 qpair failed and we were unable to recover it. 00:36:07.303 [2024-07-14 10:44:51.967177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.303 [2024-07-14 10:44:51.967208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.303 qpair failed and we were unable to recover it. 00:36:07.303 [2024-07-14 10:44:51.967412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.304 [2024-07-14 10:44:51.967445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.304 qpair failed and we were unable to recover it. 00:36:07.304 [2024-07-14 10:44:51.967686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.304 [2024-07-14 10:44:51.967717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.304 qpair failed and we were unable to recover it. 
00:36:07.304 [2024-07-14 10:44:51.967991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.304 [2024-07-14 10:44:51.968022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.304 qpair failed and we were unable to recover it. 00:36:07.304 [2024-07-14 10:44:51.968205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.304 [2024-07-14 10:44:51.968245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.304 qpair failed and we were unable to recover it. 00:36:07.304 [2024-07-14 10:44:51.968421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.304 [2024-07-14 10:44:51.968491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.304 qpair failed and we were unable to recover it. 00:36:07.304 [2024-07-14 10:44:51.968700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.304 [2024-07-14 10:44:51.968735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.304 qpair failed and we were unable to recover it. 00:36:07.304 [2024-07-14 10:44:51.968922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.304 [2024-07-14 10:44:51.968954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.304 qpair failed and we were unable to recover it. 00:36:07.304 [2024-07-14 10:44:51.969154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.304 [2024-07-14 10:44:51.969186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.304 qpair failed and we were unable to recover it. 00:36:07.304 [2024-07-14 10:44:51.969330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.304 [2024-07-14 10:44:51.969363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.304 qpair failed and we were unable to recover it. 00:36:07.304 [2024-07-14 10:44:51.969489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.304 [2024-07-14 10:44:51.969520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.304 qpair failed and we were unable to recover it. 00:36:07.304 [2024-07-14 10:44:51.969661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.304 [2024-07-14 10:44:51.969692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.304 qpair failed and we were unable to recover it. 00:36:07.304 [2024-07-14 10:44:51.969832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.304 [2024-07-14 10:44:51.969862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.304 qpair failed and we were unable to recover it. 
00:36:07.304 [2024-07-14 10:44:51.970045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.304 [2024-07-14 10:44:51.970076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.304 qpair failed and we were unable to recover it. 00:36:07.304 [2024-07-14 10:44:51.970249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.304 [2024-07-14 10:44:51.970282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.304 qpair failed and we were unable to recover it. 00:36:07.304 [2024-07-14 10:44:51.970576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.304 [2024-07-14 10:44:51.970607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.304 qpair failed and we were unable to recover it. 00:36:07.304 [2024-07-14 10:44:51.970716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.304 [2024-07-14 10:44:51.970747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.304 qpair failed and we were unable to recover it. 00:36:07.304 [2024-07-14 10:44:51.970951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.304 [2024-07-14 10:44:51.970982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.304 qpair failed and we were unable to recover it. 00:36:07.304 [2024-07-14 10:44:51.971232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.304 [2024-07-14 10:44:51.971273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.304 qpair failed and we were unable to recover it. 00:36:07.304 [2024-07-14 10:44:51.971494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.304 [2024-07-14 10:44:51.971526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.304 qpair failed and we were unable to recover it. 00:36:07.304 [2024-07-14 10:44:51.971744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.304 [2024-07-14 10:44:51.971775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.304 qpair failed and we were unable to recover it. 00:36:07.304 [2024-07-14 10:44:51.971916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.304 [2024-07-14 10:44:51.971947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.304 qpair failed and we were unable to recover it. 00:36:07.304 [2024-07-14 10:44:51.972199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.304 [2024-07-14 10:44:51.972241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.304 qpair failed and we were unable to recover it. 
00:36:07.304 [2024-07-14 10:44:51.972365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.304 [2024-07-14 10:44:51.972396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.304 qpair failed and we were unable to recover it. 00:36:07.304 [2024-07-14 10:44:51.972582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.304 [2024-07-14 10:44:51.972613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.304 qpair failed and we were unable to recover it. 00:36:07.304 [2024-07-14 10:44:51.972794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.304 [2024-07-14 10:44:51.972825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.304 qpair failed and we were unable to recover it. 00:36:07.304 [2024-07-14 10:44:51.973003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.304 [2024-07-14 10:44:51.973034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.304 qpair failed and we were unable to recover it. 00:36:07.304 [2024-07-14 10:44:51.973223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.304 [2024-07-14 10:44:51.973263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.304 qpair failed and we were unable to recover it. 00:36:07.304 [2024-07-14 10:44:51.973443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.304 [2024-07-14 10:44:51.973473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.304 qpair failed and we were unable to recover it. 00:36:07.304 [2024-07-14 10:44:51.973735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.304 [2024-07-14 10:44:51.973766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.304 qpair failed and we were unable to recover it. 00:36:07.304 [2024-07-14 10:44:51.973960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.304 [2024-07-14 10:44:51.973990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.304 qpair failed and we were unable to recover it. 00:36:07.304 [2024-07-14 10:44:51.974258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.304 [2024-07-14 10:44:51.974290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.304 qpair failed and we were unable to recover it. 00:36:07.304 [2024-07-14 10:44:51.974426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.304 [2024-07-14 10:44:51.974458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.304 qpair failed and we were unable to recover it. 
00:36:07.304 [2024-07-14 10:44:51.974639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.304 [2024-07-14 10:44:51.974671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.304 qpair failed and we were unable to recover it. 00:36:07.304 [2024-07-14 10:44:51.974914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.304 [2024-07-14 10:44:51.974945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.304 qpair failed and we were unable to recover it. 00:36:07.304 [2024-07-14 10:44:51.975208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.304 [2024-07-14 10:44:51.975246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.304 qpair failed and we were unable to recover it. 00:36:07.304 [2024-07-14 10:44:51.975435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.304 [2024-07-14 10:44:51.975466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.304 qpair failed and we were unable to recover it. 00:36:07.304 [2024-07-14 10:44:51.975597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.304 [2024-07-14 10:44:51.975627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.304 qpair failed and we were unable to recover it. 00:36:07.304 [2024-07-14 10:44:51.975819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.304 [2024-07-14 10:44:51.975851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.304 qpair failed and we were unable to recover it. 00:36:07.304 [2024-07-14 10:44:51.976059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.304 [2024-07-14 10:44:51.976090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.304 qpair failed and we were unable to recover it. 00:36:07.304 [2024-07-14 10:44:51.976267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.304 [2024-07-14 10:44:51.976299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.304 qpair failed and we were unable to recover it. 00:36:07.304 [2024-07-14 10:44:51.976488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.304 [2024-07-14 10:44:51.976519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.304 qpair failed and we were unable to recover it. 00:36:07.305 [2024-07-14 10:44:51.976646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.305 [2024-07-14 10:44:51.976676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.305 qpair failed and we were unable to recover it. 
00:36:07.305 [2024-07-14 10:44:51.976916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.305 [2024-07-14 10:44:51.976947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.305 qpair failed and we were unable to recover it. 00:36:07.305 [2024-07-14 10:44:51.977142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.305 [2024-07-14 10:44:51.977173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.305 qpair failed and we were unable to recover it. 00:36:07.305 [2024-07-14 10:44:51.977396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.305 [2024-07-14 10:44:51.977429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.305 qpair failed and we were unable to recover it. 00:36:07.305 [2024-07-14 10:44:51.977549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.305 [2024-07-14 10:44:51.977581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.305 qpair failed and we were unable to recover it. 00:36:07.305 [2024-07-14 10:44:51.977760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.305 [2024-07-14 10:44:51.977790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.305 qpair failed and we were unable to recover it. 00:36:07.305 [2024-07-14 10:44:51.977909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.305 [2024-07-14 10:44:51.977940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.305 qpair failed and we were unable to recover it. 00:36:07.305 [2024-07-14 10:44:51.978118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.305 [2024-07-14 10:44:51.978149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.305 qpair failed and we were unable to recover it. 00:36:07.305 [2024-07-14 10:44:51.978353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.305 [2024-07-14 10:44:51.978386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.305 qpair failed and we were unable to recover it. 00:36:07.305 [2024-07-14 10:44:51.978520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.305 [2024-07-14 10:44:51.978551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.305 qpair failed and we were unable to recover it. 00:36:07.305 [2024-07-14 10:44:51.978661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.305 [2024-07-14 10:44:51.978690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.305 qpair failed and we were unable to recover it. 
00:36:07.305 [2024-07-14 10:44:51.978879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.305 [2024-07-14 10:44:51.978910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.305 qpair failed and we were unable to recover it. 00:36:07.305 [2024-07-14 10:44:51.979020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.305 [2024-07-14 10:44:51.979051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.305 qpair failed and we were unable to recover it. 00:36:07.305 [2024-07-14 10:44:51.979237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.305 [2024-07-14 10:44:51.979269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.305 qpair failed and we were unable to recover it. 00:36:07.305 [2024-07-14 10:44:51.979495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.305 [2024-07-14 10:44:51.979526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.305 qpair failed and we were unable to recover it. 00:36:07.305 [2024-07-14 10:44:51.979731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.305 [2024-07-14 10:44:51.979761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.305 qpair failed and we were unable to recover it. 00:36:07.305 [2024-07-14 10:44:51.979879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.305 [2024-07-14 10:44:51.979916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.305 qpair failed and we were unable to recover it. 00:36:07.305 [2024-07-14 10:44:51.980042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.305 [2024-07-14 10:44:51.980073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.305 qpair failed and we were unable to recover it. 00:36:07.305 [2024-07-14 10:44:51.980336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.305 [2024-07-14 10:44:51.980368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.305 qpair failed and we were unable to recover it. 00:36:07.305 [2024-07-14 10:44:51.980575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.305 [2024-07-14 10:44:51.980606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.305 qpair failed and we were unable to recover it. 00:36:07.305 [2024-07-14 10:44:51.980797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.305 [2024-07-14 10:44:51.980828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.305 qpair failed and we were unable to recover it. 
00:36:07.305 [2024-07-14 10:44:51.981119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.305 [2024-07-14 10:44:51.981150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.305 qpair failed and we were unable to recover it. 00:36:07.305 [2024-07-14 10:44:51.981408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.305 [2024-07-14 10:44:51.981441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.305 qpair failed and we were unable to recover it. 00:36:07.305 [2024-07-14 10:44:51.981651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.305 [2024-07-14 10:44:51.981681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.305 qpair failed and we were unable to recover it. 00:36:07.305 [2024-07-14 10:44:51.981866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.305 [2024-07-14 10:44:51.981897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.305 qpair failed and we were unable to recover it. 00:36:07.305 [2024-07-14 10:44:51.982092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.305 [2024-07-14 10:44:51.982123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.305 qpair failed and we were unable to recover it. 00:36:07.305 [2024-07-14 10:44:51.982411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.305 [2024-07-14 10:44:51.982443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.305 qpair failed and we were unable to recover it. 00:36:07.305 [2024-07-14 10:44:51.982704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.305 [2024-07-14 10:44:51.982734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.305 qpair failed and we were unable to recover it. 00:36:07.305 [2024-07-14 10:44:51.982865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.305 [2024-07-14 10:44:51.982896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.305 qpair failed and we were unable to recover it. 00:36:07.305 [2024-07-14 10:44:51.983162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.305 [2024-07-14 10:44:51.983193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.305 qpair failed and we were unable to recover it. 00:36:07.305 [2024-07-14 10:44:51.983456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.305 [2024-07-14 10:44:51.983488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.305 qpair failed and we were unable to recover it. 
00:36:07.305 [2024-07-14 10:44:51.983615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.305 [2024-07-14 10:44:51.983646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.305 qpair failed and we were unable to recover it. 00:36:07.305 [2024-07-14 10:44:51.983901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.305 [2024-07-14 10:44:51.983932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.305 qpair failed and we were unable to recover it. 00:36:07.305 [2024-07-14 10:44:51.984059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.305 [2024-07-14 10:44:51.984090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.305 qpair failed and we were unable to recover it. 00:36:07.305 [2024-07-14 10:44:51.984281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.305 [2024-07-14 10:44:51.984313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.305 qpair failed and we were unable to recover it. 00:36:07.305 [2024-07-14 10:44:51.984521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.305 [2024-07-14 10:44:51.984553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.306 qpair failed and we were unable to recover it. 00:36:07.306 [2024-07-14 10:44:51.984752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.306 [2024-07-14 10:44:51.984784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.306 qpair failed and we were unable to recover it. 00:36:07.306 [2024-07-14 10:44:51.985028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.306 [2024-07-14 10:44:51.985059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.306 qpair failed and we were unable to recover it. 00:36:07.306 [2024-07-14 10:44:51.985262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.306 [2024-07-14 10:44:51.985294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.306 qpair failed and we were unable to recover it. 00:36:07.306 [2024-07-14 10:44:51.985479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.306 [2024-07-14 10:44:51.985510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.306 qpair failed and we were unable to recover it. 00:36:07.306 [2024-07-14 10:44:51.985635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.306 [2024-07-14 10:44:51.985665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.306 qpair failed and we were unable to recover it. 
00:36:07.306 [2024-07-14 10:44:51.985914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.306 [2024-07-14 10:44:51.985946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.306 qpair failed and we were unable to recover it. 00:36:07.306 [2024-07-14 10:44:51.986197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.306 [2024-07-14 10:44:51.986241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.306 qpair failed and we were unable to recover it. 00:36:07.306 [2024-07-14 10:44:51.986403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.306 [2024-07-14 10:44:51.986472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.306 qpair failed and we were unable to recover it. 00:36:07.306 [2024-07-14 10:44:51.986670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.306 [2024-07-14 10:44:51.986704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.306 qpair failed and we were unable to recover it. 00:36:07.306 [2024-07-14 10:44:51.986884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.306 [2024-07-14 10:44:51.986916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.306 qpair failed and we were unable to recover it. 00:36:07.306 [2024-07-14 10:44:51.987153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.306 [2024-07-14 10:44:51.987184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.306 qpair failed and we were unable to recover it. 00:36:07.306 [2024-07-14 10:44:51.987380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.306 [2024-07-14 10:44:51.987413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.306 qpair failed and we were unable to recover it. 00:36:07.306 [2024-07-14 10:44:51.987596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.306 [2024-07-14 10:44:51.987628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.306 qpair failed and we were unable to recover it. 00:36:07.306 [2024-07-14 10:44:51.987879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.306 [2024-07-14 10:44:51.987909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.306 qpair failed and we were unable to recover it. 00:36:07.306 [2024-07-14 10:44:51.988045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.306 [2024-07-14 10:44:51.988076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.306 qpair failed and we were unable to recover it. 
00:36:07.306 [2024-07-14 10:44:51.988344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.306 [2024-07-14 10:44:51.988376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.306 qpair failed and we were unable to recover it. 00:36:07.306 [2024-07-14 10:44:51.988558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.306 [2024-07-14 10:44:51.988589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.306 qpair failed and we were unable to recover it. 00:36:07.306 [2024-07-14 10:44:51.988861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.306 [2024-07-14 10:44:51.988891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.306 qpair failed and we were unable to recover it. 00:36:07.306 [2024-07-14 10:44:51.989041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.306 [2024-07-14 10:44:51.989071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.306 qpair failed and we were unable to recover it. 00:36:07.306 [2024-07-14 10:44:51.989258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.306 [2024-07-14 10:44:51.989289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.306 qpair failed and we were unable to recover it. 00:36:07.306 [2024-07-14 10:44:51.989486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.306 [2024-07-14 10:44:51.989526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.306 qpair failed and we were unable to recover it. 00:36:07.306 [2024-07-14 10:44:51.989720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.306 [2024-07-14 10:44:51.989751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.306 qpair failed and we were unable to recover it. 00:36:07.306 [2024-07-14 10:44:51.989935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.306 [2024-07-14 10:44:51.989966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.306 qpair failed and we were unable to recover it. 00:36:07.306 [2024-07-14 10:44:51.990217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.306 [2024-07-14 10:44:51.990259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.306 qpair failed and we were unable to recover it. 00:36:07.306 [2024-07-14 10:44:51.990535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.306 [2024-07-14 10:44:51.990566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.306 qpair failed and we were unable to recover it. 
00:36:07.306 [2024-07-14 10:44:51.990835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.306 [2024-07-14 10:44:51.990866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.306 qpair failed and we were unable to recover it. 00:36:07.306 [2024-07-14 10:44:51.990989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.306 [2024-07-14 10:44:51.991020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.306 qpair failed and we were unable to recover it. 00:36:07.306 [2024-07-14 10:44:51.991260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.306 [2024-07-14 10:44:51.991292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.306 qpair failed and we were unable to recover it. 00:36:07.306 [2024-07-14 10:44:51.991593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.306 [2024-07-14 10:44:51.991627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.306 qpair failed and we were unable to recover it. 00:36:07.306 [2024-07-14 10:44:51.991923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.306 [2024-07-14 10:44:51.991954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.306 qpair failed and we were unable to recover it. 00:36:07.306 [2024-07-14 10:44:51.992221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.306 [2024-07-14 10:44:51.992262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.306 qpair failed and we were unable to recover it. 00:36:07.306 [2024-07-14 10:44:51.992399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.306 [2024-07-14 10:44:51.992430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.306 qpair failed and we were unable to recover it. 00:36:07.306 [2024-07-14 10:44:51.992696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.306 [2024-07-14 10:44:51.992727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.306 qpair failed and we were unable to recover it. 00:36:07.306 [2024-07-14 10:44:51.992999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.306 [2024-07-14 10:44:51.993030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.306 qpair failed and we were unable to recover it. 00:36:07.306 [2024-07-14 10:44:51.993242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.306 [2024-07-14 10:44:51.993275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.306 qpair failed and we were unable to recover it. 
00:36:07.306 [2024-07-14 10:44:51.993575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.306 [2024-07-14 10:44:51.993605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.306 qpair failed and we were unable to recover it. 00:36:07.306 [2024-07-14 10:44:51.993822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.306 [2024-07-14 10:44:51.993853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.306 qpair failed and we were unable to recover it. 00:36:07.306 [2024-07-14 10:44:51.994060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.306 [2024-07-14 10:44:51.994091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.306 qpair failed and we were unable to recover it. 00:36:07.306 [2024-07-14 10:44:51.994383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.307 [2024-07-14 10:44:51.994414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.307 qpair failed and we were unable to recover it. 00:36:07.307 [2024-07-14 10:44:51.994655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.307 [2024-07-14 10:44:51.994687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.307 qpair failed and we were unable to recover it. 00:36:07.307 [2024-07-14 10:44:51.994957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.307 [2024-07-14 10:44:51.994988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.307 qpair failed and we were unable to recover it. 00:36:07.307 [2024-07-14 10:44:51.995244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.307 [2024-07-14 10:44:51.995276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.307 qpair failed and we were unable to recover it. 00:36:07.307 [2024-07-14 10:44:51.995459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.307 [2024-07-14 10:44:51.995489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.307 qpair failed and we were unable to recover it. 00:36:07.307 [2024-07-14 10:44:51.995630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.307 [2024-07-14 10:44:51.995661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.307 qpair failed and we were unable to recover it. 00:36:07.307 [2024-07-14 10:44:51.995851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.307 [2024-07-14 10:44:51.995881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.307 qpair failed and we were unable to recover it. 
00:36:07.307 [2024-07-14 10:44:51.996085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.307 [2024-07-14 10:44:51.996116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.307 qpair failed and we were unable to recover it. 00:36:07.307 [2024-07-14 10:44:51.996240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.307 [2024-07-14 10:44:51.996272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.307 qpair failed and we were unable to recover it. 00:36:07.307 [2024-07-14 10:44:51.996426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.307 [2024-07-14 10:44:51.996457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.307 qpair failed and we were unable to recover it. 00:36:07.307 [2024-07-14 10:44:51.996713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.307 [2024-07-14 10:44:51.996744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.307 qpair failed and we were unable to recover it. 00:36:07.307 [2024-07-14 10:44:51.996931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.307 [2024-07-14 10:44:51.996963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.307 qpair failed and we were unable to recover it. 00:36:07.307 [2024-07-14 10:44:51.997115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.307 [2024-07-14 10:44:51.997145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.307 qpair failed and we were unable to recover it. 00:36:07.307 [2024-07-14 10:44:51.997276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.307 [2024-07-14 10:44:51.997309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.307 qpair failed and we were unable to recover it. 00:36:07.307 [2024-07-14 10:44:51.997563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.307 [2024-07-14 10:44:51.997594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.307 qpair failed and we were unable to recover it. 00:36:07.307 [2024-07-14 10:44:51.997719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.307 [2024-07-14 10:44:51.997749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.307 qpair failed and we were unable to recover it. 00:36:07.307 [2024-07-14 10:44:51.998013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.307 [2024-07-14 10:44:51.998044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.307 qpair failed and we were unable to recover it. 
00:36:07.307 [2024-07-14 10:44:51.998324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.307 [2024-07-14 10:44:51.998355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.307 qpair failed and we were unable to recover it. 00:36:07.307 [2024-07-14 10:44:51.998551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.307 [2024-07-14 10:44:51.998581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.307 qpair failed and we were unable to recover it. 00:36:07.307 [2024-07-14 10:44:51.998822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.307 [2024-07-14 10:44:51.998853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.307 qpair failed and we were unable to recover it. 00:36:07.307 [2024-07-14 10:44:51.998980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.307 [2024-07-14 10:44:51.999011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.307 qpair failed and we were unable to recover it. 00:36:07.307 [2024-07-14 10:44:51.999276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.307 [2024-07-14 10:44:51.999307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.307 qpair failed and we were unable to recover it. 00:36:07.307 [2024-07-14 10:44:51.999565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.307 [2024-07-14 10:44:51.999602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.307 qpair failed and we were unable to recover it. 00:36:07.307 [2024-07-14 10:44:51.999800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.307 [2024-07-14 10:44:51.999830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.307 qpair failed and we were unable to recover it. 00:36:07.307 [2024-07-14 10:44:52.000057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.307 [2024-07-14 10:44:52.000088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.307 qpair failed and we were unable to recover it. 00:36:07.307 [2024-07-14 10:44:52.000274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.307 [2024-07-14 10:44:52.000306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.307 qpair failed and we were unable to recover it. 00:36:07.307 [2024-07-14 10:44:52.000529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.307 [2024-07-14 10:44:52.000561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.307 qpair failed and we were unable to recover it. 
00:36:07.307 [2024-07-14 10:44:52.000741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.307 [2024-07-14 10:44:52.000772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.307 qpair failed and we were unable to recover it. 00:36:07.307 [2024-07-14 10:44:52.000986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.307 [2024-07-14 10:44:52.001018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.307 qpair failed and we were unable to recover it. 00:36:07.307 [2024-07-14 10:44:52.001204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.307 [2024-07-14 10:44:52.001247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.307 qpair failed and we were unable to recover it. 00:36:07.307 [2024-07-14 10:44:52.001412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.307 [2024-07-14 10:44:52.001444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.307 qpair failed and we were unable to recover it. 00:36:07.307 [2024-07-14 10:44:52.001741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.307 [2024-07-14 10:44:52.001772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.307 qpair failed and we were unable to recover it. 00:36:07.307 [2024-07-14 10:44:52.002016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.307 [2024-07-14 10:44:52.002047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.307 qpair failed and we were unable to recover it. 00:36:07.307 [2024-07-14 10:44:52.002314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.307 [2024-07-14 10:44:52.002346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.307 qpair failed and we were unable to recover it. 00:36:07.307 [2024-07-14 10:44:52.002638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.307 [2024-07-14 10:44:52.002669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.307 qpair failed and we were unable to recover it. 00:36:07.307 [2024-07-14 10:44:52.002859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.307 [2024-07-14 10:44:52.002889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.307 qpair failed and we were unable to recover it. 00:36:07.307 [2024-07-14 10:44:52.003103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.307 [2024-07-14 10:44:52.003135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.307 qpair failed and we were unable to recover it. 
00:36:07.307 [2024-07-14 10:44:52.003360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.307 [2024-07-14 10:44:52.003391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.307 qpair failed and we were unable to recover it. 00:36:07.307 [2024-07-14 10:44:52.003572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.307 [2024-07-14 10:44:52.003604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.307 qpair failed and we were unable to recover it. 00:36:07.307 [2024-07-14 10:44:52.003865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.307 [2024-07-14 10:44:52.003896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.307 qpair failed and we were unable to recover it. 00:36:07.307 [2024-07-14 10:44:52.004140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.307 [2024-07-14 10:44:52.004172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.307 qpair failed and we were unable to recover it. 00:36:07.307 [2024-07-14 10:44:52.004357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.308 [2024-07-14 10:44:52.004389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.308 qpair failed and we were unable to recover it. 00:36:07.308 [2024-07-14 10:44:52.004508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.308 [2024-07-14 10:44:52.004539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.308 qpair failed and we were unable to recover it. 00:36:07.308 [2024-07-14 10:44:52.004771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.308 [2024-07-14 10:44:52.004802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.308 qpair failed and we were unable to recover it. 00:36:07.308 [2024-07-14 10:44:52.004983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.308 [2024-07-14 10:44:52.005031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.308 qpair failed and we were unable to recover it. 00:36:07.308 [2024-07-14 10:44:52.005292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.308 [2024-07-14 10:44:52.005324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.308 qpair failed and we were unable to recover it. 00:36:07.308 [2024-07-14 10:44:52.005505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.308 [2024-07-14 10:44:52.005536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.308 qpair failed and we were unable to recover it. 
00:36:07.308 [2024-07-14 10:44:52.005756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.308 [2024-07-14 10:44:52.005787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.308 qpair failed and we were unable to recover it. 00:36:07.308 [2024-07-14 10:44:52.005971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.308 [2024-07-14 10:44:52.006003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.308 qpair failed and we were unable to recover it. 00:36:07.308 [2024-07-14 10:44:52.006187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.308 [2024-07-14 10:44:52.006219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.308 qpair failed and we were unable to recover it. 00:36:07.308 [2024-07-14 10:44:52.006434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.308 [2024-07-14 10:44:52.006465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.308 qpair failed and we were unable to recover it. 00:36:07.308 [2024-07-14 10:44:52.006660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.308 [2024-07-14 10:44:52.006690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.308 qpair failed and we were unable to recover it. 00:36:07.308 [2024-07-14 10:44:52.006803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.308 [2024-07-14 10:44:52.006832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.308 qpair failed and we were unable to recover it. 00:36:07.308 [2024-07-14 10:44:52.007102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.308 [2024-07-14 10:44:52.007133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.308 qpair failed and we were unable to recover it. 00:36:07.308 [2024-07-14 10:44:52.007314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.308 [2024-07-14 10:44:52.007346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.308 qpair failed and we were unable to recover it. 00:36:07.308 [2024-07-14 10:44:52.007619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.308 [2024-07-14 10:44:52.007651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.308 qpair failed and we were unable to recover it. 00:36:07.308 [2024-07-14 10:44:52.007939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.308 [2024-07-14 10:44:52.007970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.308 qpair failed and we were unable to recover it. 
00:36:07.308 [2024-07-14 10:44:52.008213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.308 [2024-07-14 10:44:52.008270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.308 qpair failed and we were unable to recover it. 00:36:07.308 [2024-07-14 10:44:52.008517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.308 [2024-07-14 10:44:52.008548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.308 qpair failed and we were unable to recover it. 00:36:07.308 [2024-07-14 10:44:52.008755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.308 [2024-07-14 10:44:52.008786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.308 qpair failed and we were unable to recover it. 00:36:07.308 [2024-07-14 10:44:52.008916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.308 [2024-07-14 10:44:52.008947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.308 qpair failed and we were unable to recover it. 00:36:07.308 [2024-07-14 10:44:52.009062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.308 [2024-07-14 10:44:52.009093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.308 qpair failed and we were unable to recover it. 00:36:07.308 [2024-07-14 10:44:52.009357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.308 [2024-07-14 10:44:52.009395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.308 qpair failed and we were unable to recover it. 00:36:07.308 [2024-07-14 10:44:52.009685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.308 [2024-07-14 10:44:52.009717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.308 qpair failed and we were unable to recover it. 00:36:07.308 [2024-07-14 10:44:52.009991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.308 [2024-07-14 10:44:52.010023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.308 qpair failed and we were unable to recover it. 00:36:07.308 [2024-07-14 10:44:52.010306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.308 [2024-07-14 10:44:52.010338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.308 qpair failed and we were unable to recover it. 00:36:07.308 [2024-07-14 10:44:52.010626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.308 [2024-07-14 10:44:52.010658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.308 qpair failed and we were unable to recover it. 
00:36:07.308 [2024-07-14 10:44:52.010934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.308 [2024-07-14 10:44:52.010965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.308 qpair failed and we were unable to recover it. 00:36:07.308 [2024-07-14 10:44:52.011256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.308 [2024-07-14 10:44:52.011288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.308 qpair failed and we were unable to recover it. 00:36:07.308 [2024-07-14 10:44:52.011406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.308 [2024-07-14 10:44:52.011437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.308 qpair failed and we were unable to recover it. 00:36:07.308 [2024-07-14 10:44:52.011705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.308 [2024-07-14 10:44:52.011736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.308 qpair failed and we were unable to recover it. 00:36:07.308 [2024-07-14 10:44:52.011925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.308 [2024-07-14 10:44:52.011955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.308 qpair failed and we were unable to recover it. 00:36:07.308 [2024-07-14 10:44:52.012138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.308 [2024-07-14 10:44:52.012169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.308 qpair failed and we were unable to recover it. 00:36:07.308 [2024-07-14 10:44:52.012370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.308 [2024-07-14 10:44:52.012402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.308 qpair failed and we were unable to recover it. 00:36:07.308 [2024-07-14 10:44:52.012668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.308 [2024-07-14 10:44:52.012699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.308 qpair failed and we were unable to recover it. 00:36:07.308 [2024-07-14 10:44:52.012880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.308 [2024-07-14 10:44:52.012911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.308 qpair failed and we were unable to recover it. 00:36:07.308 [2024-07-14 10:44:52.013173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.308 [2024-07-14 10:44:52.013205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.308 qpair failed and we were unable to recover it. 
00:36:07.308 [2024-07-14 10:44:52.013505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.308 [2024-07-14 10:44:52.013536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.308 qpair failed and we were unable to recover it. 00:36:07.308 [2024-07-14 10:44:52.013798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.308 [2024-07-14 10:44:52.013829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.308 qpair failed and we were unable to recover it. 00:36:07.308 [2024-07-14 10:44:52.014095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.308 [2024-07-14 10:44:52.014126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.308 qpair failed and we were unable to recover it. 00:36:07.308 [2024-07-14 10:44:52.014248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.308 [2024-07-14 10:44:52.014280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.308 qpair failed and we were unable to recover it. 00:36:07.308 [2024-07-14 10:44:52.014547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.308 [2024-07-14 10:44:52.014578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.308 qpair failed and we were unable to recover it. 00:36:07.308 [2024-07-14 10:44:52.014846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.309 [2024-07-14 10:44:52.014878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.309 qpair failed and we were unable to recover it. 00:36:07.309 [2024-07-14 10:44:52.014991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.309 [2024-07-14 10:44:52.015021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.309 qpair failed and we were unable to recover it. 00:36:07.309 [2024-07-14 10:44:52.015267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.309 [2024-07-14 10:44:52.015299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.309 qpair failed and we were unable to recover it. 00:36:07.309 [2024-07-14 10:44:52.015586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.309 [2024-07-14 10:44:52.015618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.309 qpair failed and we were unable to recover it. 00:36:07.309 [2024-07-14 10:44:52.015741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.309 [2024-07-14 10:44:52.015772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.309 qpair failed and we were unable to recover it. 
00:36:07.309 [2024-07-14 10:44:52.015987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.309 [2024-07-14 10:44:52.016018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.309 qpair failed and we were unable to recover it. 00:36:07.309 [2024-07-14 10:44:52.016285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.309 [2024-07-14 10:44:52.016316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.309 qpair failed and we were unable to recover it. 00:36:07.309 [2024-07-14 10:44:52.016447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.309 [2024-07-14 10:44:52.016478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.309 qpair failed and we were unable to recover it. 00:36:07.309 [2024-07-14 10:44:52.016743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.309 [2024-07-14 10:44:52.016775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.309 qpair failed and we were unable to recover it. 00:36:07.309 [2024-07-14 10:44:52.016971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.309 [2024-07-14 10:44:52.017003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.309 qpair failed and we were unable to recover it. 00:36:07.309 [2024-07-14 10:44:52.017276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.309 [2024-07-14 10:44:52.017308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.309 qpair failed and we were unable to recover it. 00:36:07.309 [2024-07-14 10:44:52.017439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.309 [2024-07-14 10:44:52.017470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.309 qpair failed and we were unable to recover it. 00:36:07.309 [2024-07-14 10:44:52.017588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.309 [2024-07-14 10:44:52.017618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.309 qpair failed and we were unable to recover it. 00:36:07.309 [2024-07-14 10:44:52.017843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.309 [2024-07-14 10:44:52.017873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.309 qpair failed and we were unable to recover it. 00:36:07.309 [2024-07-14 10:44:52.018093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.309 [2024-07-14 10:44:52.018125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.309 qpair failed and we were unable to recover it. 
00:36:07.309 [2024-07-14 10:44:52.018396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.309 [2024-07-14 10:44:52.018428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.309 qpair failed and we were unable to recover it. 00:36:07.309 [2024-07-14 10:44:52.018639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.309 [2024-07-14 10:44:52.018669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.309 qpair failed and we were unable to recover it. 00:36:07.309 [2024-07-14 10:44:52.018861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.309 [2024-07-14 10:44:52.018892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.309 qpair failed and we were unable to recover it. 00:36:07.309 [2024-07-14 10:44:52.019067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.309 [2024-07-14 10:44:52.019098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.309 qpair failed and we were unable to recover it. 00:36:07.309 [2024-07-14 10:44:52.019379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.309 [2024-07-14 10:44:52.019411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.309 qpair failed and we were unable to recover it. 00:36:07.309 [2024-07-14 10:44:52.019669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.309 [2024-07-14 10:44:52.019701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.309 qpair failed and we were unable to recover it. 00:36:07.309 [2024-07-14 10:44:52.019915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.309 [2024-07-14 10:44:52.019946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.309 qpair failed and we were unable to recover it. 00:36:07.309 [2024-07-14 10:44:52.020122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.309 [2024-07-14 10:44:52.020153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.309 qpair failed and we were unable to recover it. 00:36:07.309 [2024-07-14 10:44:52.020398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.309 [2024-07-14 10:44:52.020430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.309 qpair failed and we were unable to recover it. 00:36:07.309 [2024-07-14 10:44:52.020672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.309 [2024-07-14 10:44:52.020703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.309 qpair failed and we were unable to recover it. 
00:36:07.309 [2024-07-14 10:44:52.020950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.309 [2024-07-14 10:44:52.020982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.309 qpair failed and we were unable to recover it. 00:36:07.309 [2024-07-14 10:44:52.021265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.309 [2024-07-14 10:44:52.021298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.309 qpair failed and we were unable to recover it. 00:36:07.309 [2024-07-14 10:44:52.021491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.309 [2024-07-14 10:44:52.021522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.309 qpair failed and we were unable to recover it. 00:36:07.309 [2024-07-14 10:44:52.021762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.309 [2024-07-14 10:44:52.021793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.309 qpair failed and we were unable to recover it. 00:36:07.309 [2024-07-14 10:44:52.022086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.309 [2024-07-14 10:44:52.022117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.309 qpair failed and we were unable to recover it. 00:36:07.309 [2024-07-14 10:44:52.022265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.309 [2024-07-14 10:44:52.022297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.309 qpair failed and we were unable to recover it. 00:36:07.309 [2024-07-14 10:44:52.022566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.309 [2024-07-14 10:44:52.022597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.309 qpair failed and we were unable to recover it. 00:36:07.309 [2024-07-14 10:44:52.022731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.309 [2024-07-14 10:44:52.022762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.309 qpair failed and we were unable to recover it. 00:36:07.309 [2024-07-14 10:44:52.022980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.309 [2024-07-14 10:44:52.023011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.309 qpair failed and we were unable to recover it. 00:36:07.309 [2024-07-14 10:44:52.023307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.309 [2024-07-14 10:44:52.023339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.309 qpair failed and we were unable to recover it. 
00:36:07.309 [2024-07-14 10:44:52.023615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.309 [2024-07-14 10:44:52.023647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.309 qpair failed and we were unable to recover it. 00:36:07.309 [2024-07-14 10:44:52.023904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.309 [2024-07-14 10:44:52.023935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.309 qpair failed and we were unable to recover it. 00:36:07.309 [2024-07-14 10:44:52.024048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.309 [2024-07-14 10:44:52.024079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.309 qpair failed and we were unable to recover it. 00:36:07.309 [2024-07-14 10:44:52.024263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.309 [2024-07-14 10:44:52.024294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.309 qpair failed and we were unable to recover it. 00:36:07.309 [2024-07-14 10:44:52.024490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.309 [2024-07-14 10:44:52.024521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.309 qpair failed and we were unable to recover it. 00:36:07.309 [2024-07-14 10:44:52.024711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.309 [2024-07-14 10:44:52.024743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.309 qpair failed and we were unable to recover it. 00:36:07.310 [2024-07-14 10:44:52.024922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.310 [2024-07-14 10:44:52.024953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.310 qpair failed and we were unable to recover it. 00:36:07.310 [2024-07-14 10:44:52.025164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.310 [2024-07-14 10:44:52.025195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.310 qpair failed and we were unable to recover it. 00:36:07.310 [2024-07-14 10:44:52.025454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.310 [2024-07-14 10:44:52.025486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.310 qpair failed and we were unable to recover it. 00:36:07.310 [2024-07-14 10:44:52.025674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.310 [2024-07-14 10:44:52.025706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.310 qpair failed and we were unable to recover it. 
00:36:07.310 [2024-07-14 10:44:52.025981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.310 [2024-07-14 10:44:52.026013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.310 qpair failed and we were unable to recover it. 00:36:07.310 [2024-07-14 10:44:52.026296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.310 [2024-07-14 10:44:52.026327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.310 qpair failed and we were unable to recover it. 00:36:07.310 [2024-07-14 10:44:52.026553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.310 [2024-07-14 10:44:52.026589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.310 qpair failed and we were unable to recover it. 00:36:07.310 [2024-07-14 10:44:52.026789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.310 [2024-07-14 10:44:52.026821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.310 qpair failed and we were unable to recover it. 00:36:07.310 [2024-07-14 10:44:52.027088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.310 [2024-07-14 10:44:52.027119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.310 qpair failed and we were unable to recover it. 00:36:07.310 [2024-07-14 10:44:52.027374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.310 [2024-07-14 10:44:52.027406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.310 qpair failed and we were unable to recover it. 00:36:07.310 [2024-07-14 10:44:52.027679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.310 [2024-07-14 10:44:52.027710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.310 qpair failed and we were unable to recover it. 00:36:07.310 [2024-07-14 10:44:52.027929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.310 [2024-07-14 10:44:52.027961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.310 qpair failed and we were unable to recover it. 00:36:07.310 [2024-07-14 10:44:52.028177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.310 [2024-07-14 10:44:52.028208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.310 qpair failed and we were unable to recover it. 00:36:07.310 [2024-07-14 10:44:52.028408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.310 [2024-07-14 10:44:52.028440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.310 qpair failed and we were unable to recover it. 
00:36:07.310 [2024-07-14 10:44:52.028697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.310 [2024-07-14 10:44:52.028728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.310 qpair failed and we were unable to recover it. 00:36:07.310 [2024-07-14 10:44:52.028924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.310 [2024-07-14 10:44:52.028954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.310 qpair failed and we were unable to recover it. 00:36:07.310 [2024-07-14 10:44:52.029132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.310 [2024-07-14 10:44:52.029163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.310 qpair failed and we were unable to recover it. 00:36:07.310 [2024-07-14 10:44:52.029437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.310 [2024-07-14 10:44:52.029469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.310 qpair failed and we were unable to recover it. 00:36:07.310 [2024-07-14 10:44:52.029585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.310 [2024-07-14 10:44:52.029617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.310 qpair failed and we were unable to recover it. 00:36:07.310 [2024-07-14 10:44:52.029823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.310 [2024-07-14 10:44:52.029855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.310 qpair failed and we were unable to recover it. 00:36:07.310 [2024-07-14 10:44:52.030043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.310 [2024-07-14 10:44:52.030075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.310 qpair failed and we were unable to recover it. 00:36:07.310 [2024-07-14 10:44:52.030344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.310 [2024-07-14 10:44:52.030377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.310 qpair failed and we were unable to recover it. 00:36:07.310 [2024-07-14 10:44:52.030578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.310 [2024-07-14 10:44:52.030609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.310 qpair failed and we were unable to recover it. 00:36:07.310 [2024-07-14 10:44:52.030858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.310 [2024-07-14 10:44:52.030890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.310 qpair failed and we were unable to recover it. 
00:36:07.310 [2024-07-14 10:44:52.031162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.310 [2024-07-14 10:44:52.031194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.310 qpair failed and we were unable to recover it. 00:36:07.310 [2024-07-14 10:44:52.031469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.310 [2024-07-14 10:44:52.031501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.310 qpair failed and we were unable to recover it. 00:36:07.310 [2024-07-14 10:44:52.031788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.310 [2024-07-14 10:44:52.031819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.310 qpair failed and we were unable to recover it. 00:36:07.310 [2024-07-14 10:44:52.032022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.310 [2024-07-14 10:44:52.032054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.310 qpair failed and we were unable to recover it. 00:36:07.310 [2024-07-14 10:44:52.032240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.310 [2024-07-14 10:44:52.032272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.310 qpair failed and we were unable to recover it. 00:36:07.310 [2024-07-14 10:44:52.032535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.310 [2024-07-14 10:44:52.032575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.310 qpair failed and we were unable to recover it. 00:36:07.310 [2024-07-14 10:44:52.032888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.310 [2024-07-14 10:44:52.032920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.310 qpair failed and we were unable to recover it. 00:36:07.310 [2024-07-14 10:44:52.033196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.310 [2024-07-14 10:44:52.033249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.310 qpair failed and we were unable to recover it. 00:36:07.310 [2024-07-14 10:44:52.033544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.310 [2024-07-14 10:44:52.033575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.310 qpair failed and we were unable to recover it. 00:36:07.310 [2024-07-14 10:44:52.033849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.310 [2024-07-14 10:44:52.033881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.310 qpair failed and we were unable to recover it. 
00:36:07.310 [2024-07-14 10:44:52.034165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.310 [2024-07-14 10:44:52.034197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.310 qpair failed and we were unable to recover it. 00:36:07.310 [2024-07-14 10:44:52.034509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.310 [2024-07-14 10:44:52.034541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.310 qpair failed and we were unable to recover it. 00:36:07.311 [2024-07-14 10:44:52.034740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.311 [2024-07-14 10:44:52.034771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.311 qpair failed and we were unable to recover it. 00:36:07.311 [2024-07-14 10:44:52.034982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.311 [2024-07-14 10:44:52.035013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.311 qpair failed and we were unable to recover it. 00:36:07.311 [2024-07-14 10:44:52.035260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.311 [2024-07-14 10:44:52.035292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.311 qpair failed and we were unable to recover it. 00:36:07.311 [2024-07-14 10:44:52.035484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.311 [2024-07-14 10:44:52.035515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.311 qpair failed and we were unable to recover it. 00:36:07.311 [2024-07-14 10:44:52.035791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.311 [2024-07-14 10:44:52.035822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.311 qpair failed and we were unable to recover it. 00:36:07.311 [2024-07-14 10:44:52.036008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.311 [2024-07-14 10:44:52.036039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.311 qpair failed and we were unable to recover it. 00:36:07.311 [2024-07-14 10:44:52.036288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.311 [2024-07-14 10:44:52.036320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.311 qpair failed and we were unable to recover it. 00:36:07.311 [2024-07-14 10:44:52.036518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.311 [2024-07-14 10:44:52.036549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.311 qpair failed and we were unable to recover it. 
00:36:07.311 [2024-07-14 10:44:52.036821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.311 [2024-07-14 10:44:52.036852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.311 qpair failed and we were unable to recover it. 00:36:07.311 [2024-07-14 10:44:52.037032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.311 [2024-07-14 10:44:52.037063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.311 qpair failed and we were unable to recover it. 00:36:07.311 [2024-07-14 10:44:52.037341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.311 [2024-07-14 10:44:52.037379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.311 qpair failed and we were unable to recover it. 00:36:07.311 [2024-07-14 10:44:52.037634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.311 [2024-07-14 10:44:52.037666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.311 qpair failed and we were unable to recover it. 00:36:07.311 [2024-07-14 10:44:52.037883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.311 [2024-07-14 10:44:52.037914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.311 qpair failed and we were unable to recover it. 00:36:07.311 [2024-07-14 10:44:52.038168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.311 [2024-07-14 10:44:52.038199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.311 qpair failed and we were unable to recover it. 00:36:07.311 [2024-07-14 10:44:52.038452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.311 [2024-07-14 10:44:52.038485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.311 qpair failed and we were unable to recover it. 00:36:07.311 [2024-07-14 10:44:52.038780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.311 [2024-07-14 10:44:52.038811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.311 qpair failed and we were unable to recover it. 00:36:07.311 [2024-07-14 10:44:52.038939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.311 [2024-07-14 10:44:52.038971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.311 qpair failed and we were unable to recover it. 00:36:07.311 [2024-07-14 10:44:52.039267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.311 [2024-07-14 10:44:52.039300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.311 qpair failed and we were unable to recover it. 
00:36:07.311 [2024-07-14 10:44:52.039534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.311 [2024-07-14 10:44:52.039566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.311 qpair failed and we were unable to recover it. 00:36:07.311 [2024-07-14 10:44:52.039790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.311 [2024-07-14 10:44:52.039822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.311 qpair failed and we were unable to recover it. 00:36:07.311 [2024-07-14 10:44:52.040079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.311 [2024-07-14 10:44:52.040110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.311 qpair failed and we were unable to recover it. 00:36:07.311 [2024-07-14 10:44:52.040305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.311 [2024-07-14 10:44:52.040337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.311 qpair failed and we were unable to recover it. 00:36:07.311 [2024-07-14 10:44:52.040536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.311 [2024-07-14 10:44:52.040567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.311 qpair failed and we were unable to recover it. 00:36:07.311 [2024-07-14 10:44:52.040757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.311 [2024-07-14 10:44:52.040788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.311 qpair failed and we were unable to recover it. 00:36:07.311 [2024-07-14 10:44:52.040935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.311 [2024-07-14 10:44:52.040967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.311 qpair failed and we were unable to recover it. 00:36:07.311 [2024-07-14 10:44:52.041262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.311 [2024-07-14 10:44:52.041294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.311 qpair failed and we were unable to recover it. 00:36:07.311 [2024-07-14 10:44:52.041590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.311 [2024-07-14 10:44:52.041622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.311 qpair failed and we were unable to recover it. 00:36:07.311 [2024-07-14 10:44:52.041890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.311 [2024-07-14 10:44:52.041921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.311 qpair failed and we were unable to recover it. 
00:36:07.311 [2024-07-14 10:44:52.042217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.311 [2024-07-14 10:44:52.042256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.311 qpair failed and we were unable to recover it. 00:36:07.311 [2024-07-14 10:44:52.042527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.311 [2024-07-14 10:44:52.042558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.311 qpair failed and we were unable to recover it. 00:36:07.311 [2024-07-14 10:44:52.042761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.311 [2024-07-14 10:44:52.042791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.311 qpair failed and we were unable to recover it. 00:36:07.311 [2024-07-14 10:44:52.043055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.311 [2024-07-14 10:44:52.043087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.311 qpair failed and we were unable to recover it. 00:36:07.311 [2024-07-14 10:44:52.043317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.311 [2024-07-14 10:44:52.043350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.311 qpair failed and we were unable to recover it. 00:36:07.311 [2024-07-14 10:44:52.043598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.311 [2024-07-14 10:44:52.043629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.311 qpair failed and we were unable to recover it. 00:36:07.311 [2024-07-14 10:44:52.043885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.311 [2024-07-14 10:44:52.043916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.311 qpair failed and we were unable to recover it. 00:36:07.311 [2024-07-14 10:44:52.044097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.311 [2024-07-14 10:44:52.044128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.311 qpair failed and we were unable to recover it. 00:36:07.311 [2024-07-14 10:44:52.044330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.311 [2024-07-14 10:44:52.044362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.311 qpair failed and we were unable to recover it. 00:36:07.311 [2024-07-14 10:44:52.044638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.311 [2024-07-14 10:44:52.044670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.311 qpair failed and we were unable to recover it. 
00:36:07.311 [2024-07-14 10:44:52.044958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.311 [2024-07-14 10:44:52.044990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.311 qpair failed and we were unable to recover it. 00:36:07.311 [2024-07-14 10:44:52.045245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.311 [2024-07-14 10:44:52.045277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.311 qpair failed and we were unable to recover it. 00:36:07.311 [2024-07-14 10:44:52.045466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.312 [2024-07-14 10:44:52.045498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.312 qpair failed and we were unable to recover it. 00:36:07.312 [2024-07-14 10:44:52.045802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.312 [2024-07-14 10:44:52.045833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.312 qpair failed and we were unable to recover it. 00:36:07.312 [2024-07-14 10:44:52.046111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.312 [2024-07-14 10:44:52.046143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.312 qpair failed and we were unable to recover it. 00:36:07.312 [2024-07-14 10:44:52.046350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.312 [2024-07-14 10:44:52.046382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.312 qpair failed and we were unable to recover it. 00:36:07.312 [2024-07-14 10:44:52.046607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.312 [2024-07-14 10:44:52.046639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.312 qpair failed and we were unable to recover it. 00:36:07.312 [2024-07-14 10:44:52.046856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.312 [2024-07-14 10:44:52.046888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.312 qpair failed and we were unable to recover it. 00:36:07.312 [2024-07-14 10:44:52.047043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.312 [2024-07-14 10:44:52.047075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.312 qpair failed and we were unable to recover it. 00:36:07.312 [2024-07-14 10:44:52.047372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.312 [2024-07-14 10:44:52.047405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.312 qpair failed and we were unable to recover it. 
00:36:07.312 [2024-07-14 10:44:52.047660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.312 [2024-07-14 10:44:52.047691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.312 qpair failed and we were unable to recover it. 00:36:07.312 [2024-07-14 10:44:52.047919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.312 [2024-07-14 10:44:52.047951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.312 qpair failed and we were unable to recover it. 00:36:07.312 [2024-07-14 10:44:52.048195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.312 [2024-07-14 10:44:52.048244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.312 qpair failed and we were unable to recover it. 00:36:07.312 [2024-07-14 10:44:52.048516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.312 [2024-07-14 10:44:52.048548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.312 qpair failed and we were unable to recover it. 00:36:07.312 [2024-07-14 10:44:52.048796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.312 [2024-07-14 10:44:52.048827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.312 qpair failed and we were unable to recover it. 00:36:07.312 [2024-07-14 10:44:52.049098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.312 [2024-07-14 10:44:52.049129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.312 qpair failed and we were unable to recover it. 00:36:07.312 [2024-07-14 10:44:52.049312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.312 [2024-07-14 10:44:52.049345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.312 qpair failed and we were unable to recover it. 00:36:07.312 [2024-07-14 10:44:52.049530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.312 [2024-07-14 10:44:52.049561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.312 qpair failed and we were unable to recover it. 00:36:07.312 [2024-07-14 10:44:52.049752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.312 [2024-07-14 10:44:52.049783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.312 qpair failed and we were unable to recover it. 00:36:07.312 [2024-07-14 10:44:52.050053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.312 [2024-07-14 10:44:52.050096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.312 qpair failed and we were unable to recover it. 
00:36:07.312 [2024-07-14 10:44:52.050297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.312 [2024-07-14 10:44:52.050329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.312 qpair failed and we were unable to recover it. 00:36:07.312 [2024-07-14 10:44:52.050523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.312 [2024-07-14 10:44:52.050554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.312 qpair failed and we were unable to recover it. 00:36:07.312 [2024-07-14 10:44:52.050751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.312 [2024-07-14 10:44:52.050782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.312 qpair failed and we were unable to recover it. 00:36:07.312 [2024-07-14 10:44:52.050987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.312 [2024-07-14 10:44:52.051018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.312 qpair failed and we were unable to recover it. 00:36:07.312 [2024-07-14 10:44:52.051293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.312 [2024-07-14 10:44:52.051326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.312 qpair failed and we were unable to recover it. 00:36:07.312 [2024-07-14 10:44:52.051606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.312 [2024-07-14 10:44:52.051638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.312 qpair failed and we were unable to recover it. 00:36:07.312 [2024-07-14 10:44:52.051833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.312 [2024-07-14 10:44:52.051866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.312 qpair failed and we were unable to recover it. 00:36:07.312 [2024-07-14 10:44:52.052136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.312 [2024-07-14 10:44:52.052167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.312 qpair failed and we were unable to recover it. 00:36:07.312 [2024-07-14 10:44:52.052436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.312 [2024-07-14 10:44:52.052468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.312 qpair failed and we were unable to recover it. 00:36:07.312 [2024-07-14 10:44:52.052764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.312 [2024-07-14 10:44:52.052796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.312 qpair failed and we were unable to recover it. 
00:36:07.312 [2024-07-14 10:44:52.052996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.312 [2024-07-14 10:44:52.053028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.312 qpair failed and we were unable to recover it. 00:36:07.312 [2024-07-14 10:44:52.053279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.312 [2024-07-14 10:44:52.053310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.312 qpair failed and we were unable to recover it. 00:36:07.312 [2024-07-14 10:44:52.053537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.312 [2024-07-14 10:44:52.053569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.312 qpair failed and we were unable to recover it. 00:36:07.312 [2024-07-14 10:44:52.053746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.312 [2024-07-14 10:44:52.053777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.312 qpair failed and we were unable to recover it. 00:36:07.312 [2024-07-14 10:44:52.054051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.312 [2024-07-14 10:44:52.054082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.312 qpair failed and we were unable to recover it. 00:36:07.312 [2024-07-14 10:44:52.054353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.312 [2024-07-14 10:44:52.054386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.312 qpair failed and we were unable to recover it. 00:36:07.312 [2024-07-14 10:44:52.054680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.312 [2024-07-14 10:44:52.054712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.312 qpair failed and we were unable to recover it. 00:36:07.312 [2024-07-14 10:44:52.054893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.312 [2024-07-14 10:44:52.054925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.312 qpair failed and we were unable to recover it. 00:36:07.312 [2024-07-14 10:44:52.055131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.312 [2024-07-14 10:44:52.055162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.312 qpair failed and we were unable to recover it. 00:36:07.312 [2024-07-14 10:44:52.055451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.312 [2024-07-14 10:44:52.055484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.312 qpair failed and we were unable to recover it. 
00:36:07.312 [2024-07-14 10:44:52.055759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.312 [2024-07-14 10:44:52.055792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.312 qpair failed and we were unable to recover it. 00:36:07.312 [2024-07-14 10:44:52.056054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.312 [2024-07-14 10:44:52.056086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.312 qpair failed and we were unable to recover it. 00:36:07.312 [2024-07-14 10:44:52.056218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.312 [2024-07-14 10:44:52.056261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.312 qpair failed and we were unable to recover it. 00:36:07.312 [2024-07-14 10:44:52.056465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.313 [2024-07-14 10:44:52.056497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.313 qpair failed and we were unable to recover it. 00:36:07.313 [2024-07-14 10:44:52.056705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.313 [2024-07-14 10:44:52.056736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.313 qpair failed and we were unable to recover it. 00:36:07.313 [2024-07-14 10:44:52.056916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.313 [2024-07-14 10:44:52.056947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.313 qpair failed and we were unable to recover it. 00:36:07.313 [2024-07-14 10:44:52.057126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.313 [2024-07-14 10:44:52.057157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.313 qpair failed and we were unable to recover it. 00:36:07.313 [2024-07-14 10:44:52.057368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.313 [2024-07-14 10:44:52.057401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.313 qpair failed and we were unable to recover it. 00:36:07.313 [2024-07-14 10:44:52.057663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.313 [2024-07-14 10:44:52.057695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.313 qpair failed and we were unable to recover it. 00:36:07.313 [2024-07-14 10:44:52.057897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.313 [2024-07-14 10:44:52.057929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.313 qpair failed and we were unable to recover it. 
00:36:07.313 [2024-07-14 10:44:52.058252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.313 [2024-07-14 10:44:52.058286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.313 qpair failed and we were unable to recover it. 00:36:07.313 [2024-07-14 10:44:52.058478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.313 [2024-07-14 10:44:52.058514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.313 qpair failed and we were unable to recover it. 00:36:07.313 [2024-07-14 10:44:52.058799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.313 [2024-07-14 10:44:52.058837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.313 qpair failed and we were unable to recover it. 00:36:07.313 [2024-07-14 10:44:52.059053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.313 [2024-07-14 10:44:52.059086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.313 qpair failed and we were unable to recover it. 00:36:07.313 [2024-07-14 10:44:52.059359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.313 [2024-07-14 10:44:52.059392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.313 qpair failed and we were unable to recover it. 00:36:07.313 [2024-07-14 10:44:52.059523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.313 [2024-07-14 10:44:52.059554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.313 qpair failed and we were unable to recover it. 00:36:07.313 [2024-07-14 10:44:52.059830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.313 [2024-07-14 10:44:52.059861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.313 qpair failed and we were unable to recover it. 00:36:07.313 [2024-07-14 10:44:52.060072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.313 [2024-07-14 10:44:52.060104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.313 qpair failed and we were unable to recover it. 00:36:07.313 [2024-07-14 10:44:52.060354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.313 [2024-07-14 10:44:52.060386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.313 qpair failed and we were unable to recover it. 00:36:07.313 [2024-07-14 10:44:52.060582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.313 [2024-07-14 10:44:52.060614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.313 qpair failed and we were unable to recover it. 
00:36:07.313 [2024-07-14 10:44:52.060886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.313 [2024-07-14 10:44:52.060918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.313 qpair failed and we were unable to recover it. 00:36:07.313 [2024-07-14 10:44:52.061214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.313 [2024-07-14 10:44:52.061257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.313 qpair failed and we were unable to recover it. 00:36:07.313 [2024-07-14 10:44:52.061526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.313 [2024-07-14 10:44:52.061558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.313 qpair failed and we were unable to recover it. 00:36:07.313 [2024-07-14 10:44:52.061839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.313 [2024-07-14 10:44:52.061871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.313 qpair failed and we were unable to recover it. 00:36:07.313 [2024-07-14 10:44:52.062158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.313 [2024-07-14 10:44:52.062189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.313 qpair failed and we were unable to recover it. 00:36:07.313 [2024-07-14 10:44:52.062474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.313 [2024-07-14 10:44:52.062507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.313 qpair failed and we were unable to recover it. 00:36:07.313 [2024-07-14 10:44:52.062765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.313 [2024-07-14 10:44:52.062796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.313 qpair failed and we were unable to recover it. 00:36:07.313 [2024-07-14 10:44:52.063095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.313 [2024-07-14 10:44:52.063126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.313 qpair failed and we were unable to recover it. 00:36:07.313 [2024-07-14 10:44:52.063407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.313 [2024-07-14 10:44:52.063441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.313 qpair failed and we were unable to recover it. 00:36:07.313 [2024-07-14 10:44:52.063635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.313 [2024-07-14 10:44:52.063667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.313 qpair failed and we were unable to recover it. 
00:36:07.313 [2024-07-14 10:44:52.063872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.313 [2024-07-14 10:44:52.063904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.313 qpair failed and we were unable to recover it. 00:36:07.313 [2024-07-14 10:44:52.064098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.313 [2024-07-14 10:44:52.064130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.313 qpair failed and we were unable to recover it. 00:36:07.313 [2024-07-14 10:44:52.064406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.313 [2024-07-14 10:44:52.064439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.313 qpair failed and we were unable to recover it. 00:36:07.313 [2024-07-14 10:44:52.064624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.313 [2024-07-14 10:44:52.064655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.313 qpair failed and we were unable to recover it. 00:36:07.313 [2024-07-14 10:44:52.064877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.313 [2024-07-14 10:44:52.064910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.313 qpair failed and we were unable to recover it. 00:36:07.313 [2024-07-14 10:44:52.065160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.313 [2024-07-14 10:44:52.065192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.313 qpair failed and we were unable to recover it. 00:36:07.313 [2024-07-14 10:44:52.065404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.313 [2024-07-14 10:44:52.065437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.313 qpair failed and we were unable to recover it. 00:36:07.313 [2024-07-14 10:44:52.065707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.313 [2024-07-14 10:44:52.065738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.313 qpair failed and we were unable to recover it. 00:36:07.313 [2024-07-14 10:44:52.065929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.313 [2024-07-14 10:44:52.065961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.313 qpair failed and we were unable to recover it. 00:36:07.313 [2024-07-14 10:44:52.066245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.313 [2024-07-14 10:44:52.066279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.313 qpair failed and we were unable to recover it. 
00:36:07.313 [2024-07-14 10:44:52.066411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.313 [2024-07-14 10:44:52.066443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.313 qpair failed and we were unable to recover it. 00:36:07.313 [2024-07-14 10:44:52.066647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.313 [2024-07-14 10:44:52.066679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.313 qpair failed and we were unable to recover it. 00:36:07.313 [2024-07-14 10:44:52.066889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.313 [2024-07-14 10:44:52.066920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.313 qpair failed and we were unable to recover it. 00:36:07.313 [2024-07-14 10:44:52.067193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.313 [2024-07-14 10:44:52.067238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.313 qpair failed and we were unable to recover it. 00:36:07.313 [2024-07-14 10:44:52.067392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.314 [2024-07-14 10:44:52.067424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.314 qpair failed and we were unable to recover it. 00:36:07.314 [2024-07-14 10:44:52.067676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.314 [2024-07-14 10:44:52.067708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.314 qpair failed and we were unable to recover it. 00:36:07.314 [2024-07-14 10:44:52.067980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.314 [2024-07-14 10:44:52.068011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.314 qpair failed and we were unable to recover it. 00:36:07.314 [2024-07-14 10:44:52.068312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.314 [2024-07-14 10:44:52.068345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.314 qpair failed and we were unable to recover it. 00:36:07.314 [2024-07-14 10:44:52.068617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.314 [2024-07-14 10:44:52.068649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.314 qpair failed and we were unable to recover it. 00:36:07.314 [2024-07-14 10:44:52.068899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.314 [2024-07-14 10:44:52.068930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.314 qpair failed and we were unable to recover it. 
00:36:07.314 [2024-07-14 10:44:52.069206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.314 [2024-07-14 10:44:52.069246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.314 qpair failed and we were unable to recover it. 00:36:07.314 [2024-07-14 10:44:52.069527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.314 [2024-07-14 10:44:52.069559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.314 qpair failed and we were unable to recover it. 00:36:07.314 [2024-07-14 10:44:52.069746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.314 [2024-07-14 10:44:52.069783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.314 qpair failed and we were unable to recover it. 00:36:07.314 [2024-07-14 10:44:52.069977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.314 [2024-07-14 10:44:52.070009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.314 qpair failed and we were unable to recover it. 00:36:07.314 [2024-07-14 10:44:52.070291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.314 [2024-07-14 10:44:52.070322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.314 qpair failed and we were unable to recover it. 00:36:07.314 [2024-07-14 10:44:52.070580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.314 [2024-07-14 10:44:52.070612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.314 qpair failed and we were unable to recover it. 00:36:07.314 [2024-07-14 10:44:52.070735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.314 [2024-07-14 10:44:52.070767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.314 qpair failed and we were unable to recover it. 00:36:07.314 [2024-07-14 10:44:52.071024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.314 [2024-07-14 10:44:52.071056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.314 qpair failed and we were unable to recover it. 00:36:07.314 [2024-07-14 10:44:52.071331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.314 [2024-07-14 10:44:52.071364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.314 qpair failed and we were unable to recover it. 00:36:07.314 [2024-07-14 10:44:52.071653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.314 [2024-07-14 10:44:52.071684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.314 qpair failed and we were unable to recover it. 
00:36:07.314 [2024-07-14 10:44:52.071935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.314 [2024-07-14 10:44:52.071967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.314 qpair failed and we were unable to recover it. 00:36:07.314 [2024-07-14 10:44:52.072280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.314 [2024-07-14 10:44:52.072313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.314 qpair failed and we were unable to recover it. 00:36:07.314 [2024-07-14 10:44:52.072573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.314 [2024-07-14 10:44:52.072604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.314 qpair failed and we were unable to recover it. 00:36:07.314 [2024-07-14 10:44:52.072812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.314 [2024-07-14 10:44:52.072844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.314 qpair failed and we were unable to recover it. 00:36:07.314 [2024-07-14 10:44:52.073110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.314 [2024-07-14 10:44:52.073142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.314 qpair failed and we were unable to recover it. 00:36:07.314 [2024-07-14 10:44:52.073441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.314 [2024-07-14 10:44:52.073473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.314 qpair failed and we were unable to recover it. 00:36:07.314 [2024-07-14 10:44:52.073664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.314 [2024-07-14 10:44:52.073696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.314 qpair failed and we were unable to recover it. 00:36:07.314 [2024-07-14 10:44:52.073879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.314 [2024-07-14 10:44:52.073911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.314 qpair failed and we were unable to recover it. 00:36:07.314 [2024-07-14 10:44:52.074185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.314 [2024-07-14 10:44:52.074217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.314 qpair failed and we were unable to recover it. 00:36:07.314 [2024-07-14 10:44:52.074381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.314 [2024-07-14 10:44:52.074415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.314 qpair failed and we were unable to recover it. 
00:36:07.314 [2024-07-14 10:44:52.074612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.314 [2024-07-14 10:44:52.074644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.314 qpair failed and we were unable to recover it. 00:36:07.314 [2024-07-14 10:44:52.074902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.314 [2024-07-14 10:44:52.074933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.314 qpair failed and we were unable to recover it. 00:36:07.314 [2024-07-14 10:44:52.075188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.314 [2024-07-14 10:44:52.075219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.314 qpair failed and we were unable to recover it. 00:36:07.314 [2024-07-14 10:44:52.075435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.314 [2024-07-14 10:44:52.075467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.314 qpair failed and we were unable to recover it. 00:36:07.314 [2024-07-14 10:44:52.075663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.314 [2024-07-14 10:44:52.075695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.314 qpair failed and we were unable to recover it. 00:36:07.314 [2024-07-14 10:44:52.075888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.314 [2024-07-14 10:44:52.075920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.314 qpair failed and we were unable to recover it. 00:36:07.314 [2024-07-14 10:44:52.076114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.314 [2024-07-14 10:44:52.076146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.314 qpair failed and we were unable to recover it. 00:36:07.314 [2024-07-14 10:44:52.076443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.314 [2024-07-14 10:44:52.076476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.314 qpair failed and we were unable to recover it. 00:36:07.314 [2024-07-14 10:44:52.076675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.314 [2024-07-14 10:44:52.076706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.314 qpair failed and we were unable to recover it. 00:36:07.314 [2024-07-14 10:44:52.076987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.314 [2024-07-14 10:44:52.077020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.315 qpair failed and we were unable to recover it. 
00:36:07.315 [2024-07-14 10:44:52.077245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.315 [2024-07-14 10:44:52.077277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.315 qpair failed and we were unable to recover it. 00:36:07.315 [2024-07-14 10:44:52.077463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.315 [2024-07-14 10:44:52.077495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.315 qpair failed and we were unable to recover it. 00:36:07.315 [2024-07-14 10:44:52.077691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.315 [2024-07-14 10:44:52.077724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.315 qpair failed and we were unable to recover it. 00:36:07.315 [2024-07-14 10:44:52.077921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.315 [2024-07-14 10:44:52.077952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.315 qpair failed and we were unable to recover it. 00:36:07.315 [2024-07-14 10:44:52.078263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.315 [2024-07-14 10:44:52.078297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.315 qpair failed and we were unable to recover it. 00:36:07.315 [2024-07-14 10:44:52.078485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.315 [2024-07-14 10:44:52.078516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.315 qpair failed and we were unable to recover it. 00:36:07.315 [2024-07-14 10:44:52.078743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.315 [2024-07-14 10:44:52.078775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.315 qpair failed and we were unable to recover it. 00:36:07.315 [2024-07-14 10:44:52.078960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.315 [2024-07-14 10:44:52.078992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.315 qpair failed and we were unable to recover it. 00:36:07.315 [2024-07-14 10:44:52.079259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.315 [2024-07-14 10:44:52.079292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.315 qpair failed and we were unable to recover it. 00:36:07.315 [2024-07-14 10:44:52.079549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.315 [2024-07-14 10:44:52.079581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.315 qpair failed and we were unable to recover it. 
00:36:07.315 [2024-07-14 10:44:52.079911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.315 [2024-07-14 10:44:52.079942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.315 qpair failed and we were unable to recover it. 00:36:07.315 [2024-07-14 10:44:52.080222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.315 [2024-07-14 10:44:52.080278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.315 qpair failed and we were unable to recover it. 00:36:07.315 [2024-07-14 10:44:52.080534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.315 [2024-07-14 10:44:52.080573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.315 qpair failed and we were unable to recover it. 00:36:07.315 [2024-07-14 10:44:52.080826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.315 [2024-07-14 10:44:52.080858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.315 qpair failed and we were unable to recover it. 00:36:07.315 [2024-07-14 10:44:52.081164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.315 [2024-07-14 10:44:52.081196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.315 qpair failed and we were unable to recover it. 00:36:07.315 [2024-07-14 10:44:52.081466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.315 [2024-07-14 10:44:52.081498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.315 qpair failed and we were unable to recover it. 00:36:07.315 [2024-07-14 10:44:52.081704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.315 [2024-07-14 10:44:52.081736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.315 qpair failed and we were unable to recover it. 00:36:07.315 [2024-07-14 10:44:52.081868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.315 [2024-07-14 10:44:52.081901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.315 qpair failed and we were unable to recover it. 00:36:07.315 [2024-07-14 10:44:52.082104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.315 [2024-07-14 10:44:52.082136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.315 qpair failed and we were unable to recover it. 00:36:07.315 [2024-07-14 10:44:52.082277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.315 [2024-07-14 10:44:52.082311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.315 qpair failed and we were unable to recover it. 
00:36:07.315 [2024-07-14 10:44:52.082595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.315 [2024-07-14 10:44:52.082627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.315 qpair failed and we were unable to recover it. 00:36:07.315 [2024-07-14 10:44:52.082908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.315 [2024-07-14 10:44:52.082940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.315 qpair failed and we were unable to recover it. 00:36:07.315 [2024-07-14 10:44:52.083234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.315 [2024-07-14 10:44:52.083268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.315 qpair failed and we were unable to recover it. 00:36:07.315 [2024-07-14 10:44:52.083481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.315 [2024-07-14 10:44:52.083513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.315 qpair failed and we were unable to recover it. 00:36:07.315 [2024-07-14 10:44:52.083697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.315 [2024-07-14 10:44:52.083729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.315 qpair failed and we were unable to recover it. 00:36:07.315 [2024-07-14 10:44:52.083925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.315 [2024-07-14 10:44:52.083957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.315 qpair failed and we were unable to recover it. 00:36:07.315 [2024-07-14 10:44:52.084090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.315 [2024-07-14 10:44:52.084123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.315 qpair failed and we were unable to recover it. 00:36:07.315 [2024-07-14 10:44:52.084243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.315 [2024-07-14 10:44:52.084275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.315 qpair failed and we were unable to recover it. 00:36:07.315 [2024-07-14 10:44:52.084531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.315 [2024-07-14 10:44:52.084562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.315 qpair failed and we were unable to recover it. 00:36:07.315 [2024-07-14 10:44:52.084708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.315 [2024-07-14 10:44:52.084740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.315 qpair failed and we were unable to recover it. 
00:36:07.315 [2024-07-14 10:44:52.085014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.315 [2024-07-14 10:44:52.085046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.315 qpair failed and we were unable to recover it. 00:36:07.315 [2024-07-14 10:44:52.085322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.315 [2024-07-14 10:44:52.085355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.315 qpair failed and we were unable to recover it. 00:36:07.315 [2024-07-14 10:44:52.085576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.315 [2024-07-14 10:44:52.085607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.315 qpair failed and we were unable to recover it. 00:36:07.315 [2024-07-14 10:44:52.085859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.315 [2024-07-14 10:44:52.085891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.315 qpair failed and we were unable to recover it. 00:36:07.315 [2024-07-14 10:44:52.086170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.315 [2024-07-14 10:44:52.086203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.315 qpair failed and we were unable to recover it. 00:36:07.315 [2024-07-14 10:44:52.086361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.315 [2024-07-14 10:44:52.086395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.315 qpair failed and we were unable to recover it. 00:36:07.315 [2024-07-14 10:44:52.086650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.315 [2024-07-14 10:44:52.086682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.315 qpair failed and we were unable to recover it. 00:36:07.315 [2024-07-14 10:44:52.086984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.315 [2024-07-14 10:44:52.087016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.315 qpair failed and we were unable to recover it. 00:36:07.315 [2024-07-14 10:44:52.087235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.315 [2024-07-14 10:44:52.087269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.315 qpair failed and we were unable to recover it. 00:36:07.315 [2024-07-14 10:44:52.087529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.315 [2024-07-14 10:44:52.087562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.315 qpair failed and we were unable to recover it. 
00:36:07.315 [2024-07-14 10:44:52.087751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.316 [2024-07-14 10:44:52.087782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.316 qpair failed and we were unable to recover it. 00:36:07.316 [2024-07-14 10:44:52.088063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.316 [2024-07-14 10:44:52.088094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.316 qpair failed and we were unable to recover it. 00:36:07.316 [2024-07-14 10:44:52.088363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.316 [2024-07-14 10:44:52.088396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.316 qpair failed and we were unable to recover it. 00:36:07.316 [2024-07-14 10:44:52.088595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.316 [2024-07-14 10:44:52.088627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.316 qpair failed and we were unable to recover it. 00:36:07.316 [2024-07-14 10:44:52.088892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.316 [2024-07-14 10:44:52.088924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.316 qpair failed and we were unable to recover it. 00:36:07.316 [2024-07-14 10:44:52.089205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.316 [2024-07-14 10:44:52.089245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.316 qpair failed and we were unable to recover it. 00:36:07.316 [2024-07-14 10:44:52.089484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.316 [2024-07-14 10:44:52.089516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.316 qpair failed and we were unable to recover it. 00:36:07.316 [2024-07-14 10:44:52.089781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.316 [2024-07-14 10:44:52.089812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.316 qpair failed and we were unable to recover it. 00:36:07.316 [2024-07-14 10:44:52.089996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.316 [2024-07-14 10:44:52.090028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.316 qpair failed and we were unable to recover it. 00:36:07.316 [2024-07-14 10:44:52.090237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.316 [2024-07-14 10:44:52.090276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.316 qpair failed and we were unable to recover it. 
00:36:07.316 [2024-07-14 10:44:52.090416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.316 [2024-07-14 10:44:52.090448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.316 qpair failed and we were unable to recover it. 00:36:07.316 [2024-07-14 10:44:52.090679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.316 [2024-07-14 10:44:52.090711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.316 qpair failed and we were unable to recover it. 00:36:07.316 [2024-07-14 10:44:52.090990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.316 [2024-07-14 10:44:52.091027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.316 qpair failed and we were unable to recover it. 00:36:07.316 [2024-07-14 10:44:52.091316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.316 [2024-07-14 10:44:52.091349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.316 qpair failed and we were unable to recover it. 00:36:07.316 [2024-07-14 10:44:52.091625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.316 [2024-07-14 10:44:52.091656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.316 qpair failed and we were unable to recover it. 00:36:07.316 [2024-07-14 10:44:52.091972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.316 [2024-07-14 10:44:52.092003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.316 qpair failed and we were unable to recover it. 00:36:07.316 [2024-07-14 10:44:52.092263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.316 [2024-07-14 10:44:52.092297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.316 qpair failed and we were unable to recover it. 00:36:07.316 [2024-07-14 10:44:52.092575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.316 [2024-07-14 10:44:52.092607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.316 qpair failed and we were unable to recover it. 00:36:07.316 [2024-07-14 10:44:52.092865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.316 [2024-07-14 10:44:52.092897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.316 qpair failed and we were unable to recover it. 00:36:07.316 [2024-07-14 10:44:52.093101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.316 [2024-07-14 10:44:52.093133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.316 qpair failed and we were unable to recover it. 
00:36:07.316 [2024-07-14 10:44:52.093332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.316 [2024-07-14 10:44:52.093364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.316 qpair failed and we were unable to recover it. 00:36:07.316 [2024-07-14 10:44:52.093574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.316 [2024-07-14 10:44:52.093606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.316 qpair failed and we were unable to recover it. 00:36:07.316 [2024-07-14 10:44:52.093889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.316 [2024-07-14 10:44:52.093920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.316 qpair failed and we were unable to recover it. 00:36:07.316 [2024-07-14 10:44:52.094199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.316 [2024-07-14 10:44:52.094241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.316 qpair failed and we were unable to recover it. 00:36:07.316 [2024-07-14 10:44:52.094496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.316 [2024-07-14 10:44:52.094528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.316 qpair failed and we were unable to recover it. 00:36:07.316 [2024-07-14 10:44:52.094823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.316 [2024-07-14 10:44:52.094856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.316 qpair failed and we were unable to recover it. 00:36:07.316 [2024-07-14 10:44:52.095154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.316 [2024-07-14 10:44:52.095186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.316 qpair failed and we were unable to recover it. 00:36:07.316 [2024-07-14 10:44:52.095464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.316 [2024-07-14 10:44:52.095498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.316 qpair failed and we were unable to recover it. 00:36:07.316 [2024-07-14 10:44:52.095714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.316 [2024-07-14 10:44:52.095746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.316 qpair failed and we were unable to recover it. 00:36:07.316 [2024-07-14 10:44:52.096001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.316 [2024-07-14 10:44:52.096033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.316 qpair failed and we were unable to recover it. 
00:36:07.316 [2024-07-14 10:44:52.096335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.316 [2024-07-14 10:44:52.096369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.316 qpair failed and we were unable to recover it. 00:36:07.316 [2024-07-14 10:44:52.096639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.316 [2024-07-14 10:44:52.096671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.316 qpair failed and we were unable to recover it. 00:36:07.316 [2024-07-14 10:44:52.096885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.316 [2024-07-14 10:44:52.096916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.316 qpair failed and we were unable to recover it. 00:36:07.316 [2024-07-14 10:44:52.097192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.316 [2024-07-14 10:44:52.097223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.316 qpair failed and we were unable to recover it. 00:36:07.316 [2024-07-14 10:44:52.097388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.316 [2024-07-14 10:44:52.097421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.316 qpair failed and we were unable to recover it. 00:36:07.316 [2024-07-14 10:44:52.097675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.316 [2024-07-14 10:44:52.097707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.316 qpair failed and we were unable to recover it. 00:36:07.316 [2024-07-14 10:44:52.097975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.316 [2024-07-14 10:44:52.098007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.316 qpair failed and we were unable to recover it. 00:36:07.316 [2024-07-14 10:44:52.098205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.316 [2024-07-14 10:44:52.098246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.316 qpair failed and we were unable to recover it. 00:36:07.316 [2024-07-14 10:44:52.098445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.316 [2024-07-14 10:44:52.098477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.316 qpair failed and we were unable to recover it. 00:36:07.316 [2024-07-14 10:44:52.098813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.316 [2024-07-14 10:44:52.098892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.316 qpair failed and we were unable to recover it. 
00:36:07.316 [2024-07-14 10:44:52.099146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.316 [2024-07-14 10:44:52.099182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.316 qpair failed and we were unable to recover it. 00:36:07.316 [2024-07-14 10:44:52.099502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.317 [2024-07-14 10:44:52.099542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.317 qpair failed and we were unable to recover it. 00:36:07.317 [2024-07-14 10:44:52.099754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.317 [2024-07-14 10:44:52.099787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.317 qpair failed and we were unable to recover it. 00:36:07.317 [2024-07-14 10:44:52.099904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.317 [2024-07-14 10:44:52.099937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.317 qpair failed and we were unable to recover it. 00:36:07.317 [2024-07-14 10:44:52.100140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.317 [2024-07-14 10:44:52.100172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.317 qpair failed and we were unable to recover it. 00:36:07.317 [2024-07-14 10:44:52.100462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.317 [2024-07-14 10:44:52.100496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.317 qpair failed and we were unable to recover it. 00:36:07.317 [2024-07-14 10:44:52.100779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.317 [2024-07-14 10:44:52.100812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.317 qpair failed and we were unable to recover it. 00:36:07.317 [2024-07-14 10:44:52.101092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.317 [2024-07-14 10:44:52.101125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.317 qpair failed and we were unable to recover it. 00:36:07.317 [2024-07-14 10:44:52.101358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.317 [2024-07-14 10:44:52.101392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.317 qpair failed and we were unable to recover it. 00:36:07.317 [2024-07-14 10:44:52.101594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.317 [2024-07-14 10:44:52.101627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.317 qpair failed and we were unable to recover it. 
00:36:07.317 [2024-07-14 10:44:52.101936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.317 [2024-07-14 10:44:52.101969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.317 qpair failed and we were unable to recover it. 00:36:07.317 [2024-07-14 10:44:52.102257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.317 [2024-07-14 10:44:52.102291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.317 qpair failed and we were unable to recover it. 00:36:07.317 [2024-07-14 10:44:52.102532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.317 [2024-07-14 10:44:52.102566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.317 qpair failed and we were unable to recover it. 00:36:07.317 [2024-07-14 10:44:52.102881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.317 [2024-07-14 10:44:52.102915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.317 qpair failed and we were unable to recover it. 00:36:07.317 [2024-07-14 10:44:52.103141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.317 [2024-07-14 10:44:52.103173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.317 qpair failed and we were unable to recover it. 00:36:07.317 [2024-07-14 10:44:52.103487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.317 [2024-07-14 10:44:52.103524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.317 qpair failed and we were unable to recover it. 00:36:07.317 [2024-07-14 10:44:52.103826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.317 [2024-07-14 10:44:52.103858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.317 qpair failed and we were unable to recover it. 00:36:07.317 [2024-07-14 10:44:52.104043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.317 [2024-07-14 10:44:52.104076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.317 qpair failed and we were unable to recover it. 00:36:07.317 [2024-07-14 10:44:52.104266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.317 [2024-07-14 10:44:52.104299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.317 qpair failed and we were unable to recover it. 00:36:07.317 [2024-07-14 10:44:52.104587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.317 [2024-07-14 10:44:52.104618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.317 qpair failed and we were unable to recover it. 
00:36:07.317 [2024-07-14 10:44:52.104823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.317 [2024-07-14 10:44:52.104855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.317 qpair failed and we were unable to recover it. 00:36:07.317 [2024-07-14 10:44:52.105125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.317 [2024-07-14 10:44:52.105158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.317 qpair failed and we were unable to recover it. 00:36:07.317 [2024-07-14 10:44:52.105360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.317 [2024-07-14 10:44:52.105393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.317 qpair failed and we were unable to recover it. 00:36:07.317 [2024-07-14 10:44:52.105589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.317 [2024-07-14 10:44:52.105622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.317 qpair failed and we were unable to recover it. 00:36:07.317 [2024-07-14 10:44:52.105907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.317 [2024-07-14 10:44:52.105940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.317 qpair failed and we were unable to recover it. 00:36:07.317 [2024-07-14 10:44:52.106145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.317 [2024-07-14 10:44:52.106178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.317 qpair failed and we were unable to recover it. 00:36:07.317 [2024-07-14 10:44:52.106544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.317 [2024-07-14 10:44:52.106621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.317 qpair failed and we were unable to recover it. 00:36:07.317 [2024-07-14 10:44:52.106879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.317 [2024-07-14 10:44:52.106916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.317 qpair failed and we were unable to recover it. 00:36:07.317 [2024-07-14 10:44:52.107222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.317 [2024-07-14 10:44:52.107268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.317 qpair failed and we were unable to recover it. 00:36:07.317 [2024-07-14 10:44:52.107555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.317 [2024-07-14 10:44:52.107589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.317 qpair failed and we were unable to recover it. 
00:36:07.317 [2024-07-14 10:44:52.107791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.317 [2024-07-14 10:44:52.107824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.317 qpair failed and we were unable to recover it. 00:36:07.317 [2024-07-14 10:44:52.108029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.317 [2024-07-14 10:44:52.108061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.317 qpair failed and we were unable to recover it. 00:36:07.317 [2024-07-14 10:44:52.108326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.317 [2024-07-14 10:44:52.108360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.317 qpair failed and we were unable to recover it. 00:36:07.317 [2024-07-14 10:44:52.108629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.317 [2024-07-14 10:44:52.108661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.317 qpair failed and we were unable to recover it. 00:36:07.317 [2024-07-14 10:44:52.108858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.317 [2024-07-14 10:44:52.108891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.317 qpair failed and we were unable to recover it. 00:36:07.317 [2024-07-14 10:44:52.109118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.317 [2024-07-14 10:44:52.109150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.317 qpair failed and we were unable to recover it. 00:36:07.317 [2024-07-14 10:44:52.109413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.317 [2024-07-14 10:44:52.109447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.317 qpair failed and we were unable to recover it. 00:36:07.317 [2024-07-14 10:44:52.109668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.317 [2024-07-14 10:44:52.109701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.317 qpair failed and we were unable to recover it. 00:36:07.317 [2024-07-14 10:44:52.109959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.317 [2024-07-14 10:44:52.109991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.317 qpair failed and we were unable to recover it. 00:36:07.317 [2024-07-14 10:44:52.110304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.317 [2024-07-14 10:44:52.110337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.317 qpair failed and we were unable to recover it. 
00:36:07.317 [2024-07-14 10:44:52.110635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.317 [2024-07-14 10:44:52.110668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.317 qpair failed and we were unable to recover it. 00:36:07.317 [2024-07-14 10:44:52.110808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.317 [2024-07-14 10:44:52.110840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.317 qpair failed and we were unable to recover it. 00:36:07.317 [2024-07-14 10:44:52.111126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.317 [2024-07-14 10:44:52.111158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.318 qpair failed and we were unable to recover it. 00:36:07.318 [2024-07-14 10:44:52.111449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.318 [2024-07-14 10:44:52.111483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.318 qpair failed and we were unable to recover it. 00:36:07.318 [2024-07-14 10:44:52.111672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.318 [2024-07-14 10:44:52.111704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.318 qpair failed and we were unable to recover it. 00:36:07.318 [2024-07-14 10:44:52.111971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.318 [2024-07-14 10:44:52.112004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.318 qpair failed and we were unable to recover it. 00:36:07.318 [2024-07-14 10:44:52.112313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.318 [2024-07-14 10:44:52.112347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.318 qpair failed and we were unable to recover it. 00:36:07.318 [2024-07-14 10:44:52.112559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.318 [2024-07-14 10:44:52.112591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.318 qpair failed and we were unable to recover it. 00:36:07.318 [2024-07-14 10:44:52.112869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.318 [2024-07-14 10:44:52.112901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.318 qpair failed and we were unable to recover it. 00:36:07.318 [2024-07-14 10:44:52.113106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.318 [2024-07-14 10:44:52.113138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.318 qpair failed and we were unable to recover it. 
00:36:07.318 [2024-07-14 10:44:52.113421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.318 [2024-07-14 10:44:52.113454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.318 qpair failed and we were unable to recover it. 00:36:07.318 [2024-07-14 10:44:52.113659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.318 [2024-07-14 10:44:52.113691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.318 qpair failed and we were unable to recover it. 00:36:07.318 [2024-07-14 10:44:52.113997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.318 [2024-07-14 10:44:52.114029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.318 qpair failed and we were unable to recover it. 00:36:07.318 [2024-07-14 10:44:52.114299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.318 [2024-07-14 10:44:52.114333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.318 qpair failed and we were unable to recover it. 00:36:07.318 [2024-07-14 10:44:52.114538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.318 [2024-07-14 10:44:52.114569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.318 qpair failed and we were unable to recover it. 00:36:07.318 [2024-07-14 10:44:52.114760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.318 [2024-07-14 10:44:52.114792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.318 qpair failed and we were unable to recover it. 00:36:07.318 [2024-07-14 10:44:52.115052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.318 [2024-07-14 10:44:52.115085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.318 qpair failed and we were unable to recover it. 00:36:07.318 [2024-07-14 10:44:52.115313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.318 [2024-07-14 10:44:52.115347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.318 qpair failed and we were unable to recover it. 00:36:07.318 [2024-07-14 10:44:52.115549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.318 [2024-07-14 10:44:52.115582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.318 qpair failed and we were unable to recover it. 00:36:07.318 [2024-07-14 10:44:52.115819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.318 [2024-07-14 10:44:52.115852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.318 qpair failed and we were unable to recover it. 
00:36:07.318 [2024-07-14 10:44:52.116043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.318 [2024-07-14 10:44:52.116075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.318 qpair failed and we were unable to recover it. 00:36:07.318 [2024-07-14 10:44:52.116271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.318 [2024-07-14 10:44:52.116306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.318 qpair failed and we were unable to recover it. 00:36:07.318 [2024-07-14 10:44:52.116569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.318 [2024-07-14 10:44:52.116602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.318 qpair failed and we were unable to recover it. 00:36:07.318 [2024-07-14 10:44:52.116806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.318 [2024-07-14 10:44:52.116838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.318 qpair failed and we were unable to recover it. 00:36:07.318 [2024-07-14 10:44:52.116982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.318 [2024-07-14 10:44:52.117014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.318 qpair failed and we were unable to recover it. 00:36:07.318 [2024-07-14 10:44:52.117285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.318 [2024-07-14 10:44:52.117318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.318 qpair failed and we were unable to recover it. 00:36:07.318 [2024-07-14 10:44:52.117507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.318 [2024-07-14 10:44:52.117545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.318 qpair failed and we were unable to recover it. 00:36:07.318 [2024-07-14 10:44:52.117825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.318 [2024-07-14 10:44:52.117857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.318 qpair failed and we were unable to recover it. 00:36:07.318 [2024-07-14 10:44:52.118145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.318 [2024-07-14 10:44:52.118177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.318 qpair failed and we were unable to recover it. 00:36:07.318 [2024-07-14 10:44:52.118467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.318 [2024-07-14 10:44:52.118500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.318 qpair failed and we were unable to recover it. 
00:36:07.318 [2024-07-14 10:44:52.118702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.318 [2024-07-14 10:44:52.118734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.318 qpair failed and we were unable to recover it. 00:36:07.318 [2024-07-14 10:44:52.119021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.318 [2024-07-14 10:44:52.119053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.318 qpair failed and we were unable to recover it. 00:36:07.318 [2024-07-14 10:44:52.119245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.318 [2024-07-14 10:44:52.119279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.318 qpair failed and we were unable to recover it. 00:36:07.318 [2024-07-14 10:44:52.119597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.318 [2024-07-14 10:44:52.119630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.318 qpair failed and we were unable to recover it. 00:36:07.318 [2024-07-14 10:44:52.119913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.318 [2024-07-14 10:44:52.119946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.318 qpair failed and we were unable to recover it. 00:36:07.318 [2024-07-14 10:44:52.120210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.318 [2024-07-14 10:44:52.120255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.318 qpair failed and we were unable to recover it. 00:36:07.318 [2024-07-14 10:44:52.120525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.318 [2024-07-14 10:44:52.120559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.318 qpair failed and we were unable to recover it. 00:36:07.318 [2024-07-14 10:44:52.120875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.318 [2024-07-14 10:44:52.120907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.318 qpair failed and we were unable to recover it. 00:36:07.318 [2024-07-14 10:44:52.121195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.319 [2024-07-14 10:44:52.121240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.319 qpair failed and we were unable to recover it. 00:36:07.319 [2024-07-14 10:44:52.121516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.319 [2024-07-14 10:44:52.121549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.319 qpair failed and we were unable to recover it. 
00:36:07.319 [2024-07-14 10:44:52.121837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.319 [2024-07-14 10:44:52.121869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.319 qpair failed and we were unable to recover it. 00:36:07.319 [2024-07-14 10:44:52.122171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.319 [2024-07-14 10:44:52.122203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.319 qpair failed and we were unable to recover it. 00:36:07.319 [2024-07-14 10:44:52.122501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.319 [2024-07-14 10:44:52.122536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.319 qpair failed and we were unable to recover it. 00:36:07.319 [2024-07-14 10:44:52.122816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.319 [2024-07-14 10:44:52.122849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.319 qpair failed and we were unable to recover it. 00:36:07.319 [2024-07-14 10:44:52.123139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.319 [2024-07-14 10:44:52.123171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.319 qpair failed and we were unable to recover it. 00:36:07.319 [2024-07-14 10:44:52.123390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.319 [2024-07-14 10:44:52.123424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.319 qpair failed and we were unable to recover it. 00:36:07.319 [2024-07-14 10:44:52.123618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.319 [2024-07-14 10:44:52.123650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.319 qpair failed and we were unable to recover it. 00:36:07.319 [2024-07-14 10:44:52.123911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.319 [2024-07-14 10:44:52.123944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.319 qpair failed and we were unable to recover it. 00:36:07.319 [2024-07-14 10:44:52.124157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.319 [2024-07-14 10:44:52.124189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.319 qpair failed and we were unable to recover it. 00:36:07.319 [2024-07-14 10:44:52.124494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.319 [2024-07-14 10:44:52.124528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.319 qpair failed and we were unable to recover it. 
00:36:07.319 [2024-07-14 10:44:52.124804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.319 [2024-07-14 10:44:52.124837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.319 qpair failed and we were unable to recover it. 00:36:07.319 [2024-07-14 10:44:52.125122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.319 [2024-07-14 10:44:52.125153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.319 qpair failed and we were unable to recover it. 00:36:07.319 [2024-07-14 10:44:52.125443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.319 [2024-07-14 10:44:52.125477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.319 qpair failed and we were unable to recover it. 00:36:07.319 [2024-07-14 10:44:52.125765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.319 [2024-07-14 10:44:52.125798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.319 qpair failed and we were unable to recover it. 00:36:07.319 [2024-07-14 10:44:52.126085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.319 [2024-07-14 10:44:52.126117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.319 qpair failed and we were unable to recover it. 00:36:07.319 [2024-07-14 10:44:52.126403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.319 [2024-07-14 10:44:52.126436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.319 qpair failed and we were unable to recover it. 00:36:07.319 [2024-07-14 10:44:52.126725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.319 [2024-07-14 10:44:52.126758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.319 qpair failed and we were unable to recover it. 00:36:07.319 [2024-07-14 10:44:52.127047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.319 [2024-07-14 10:44:52.127079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.319 qpair failed and we were unable to recover it. 00:36:07.319 [2024-07-14 10:44:52.127373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.319 [2024-07-14 10:44:52.127407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.319 qpair failed and we were unable to recover it. 00:36:07.319 [2024-07-14 10:44:52.127619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.319 [2024-07-14 10:44:52.127653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.319 qpair failed and we were unable to recover it. 
00:36:07.319 [2024-07-14 10:44:52.127842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.319 [2024-07-14 10:44:52.127874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.319 qpair failed and we were unable to recover it. 00:36:07.319 [2024-07-14 10:44:52.128082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.319 [2024-07-14 10:44:52.128115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.319 qpair failed and we were unable to recover it. 00:36:07.319 [2024-07-14 10:44:52.128332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.319 [2024-07-14 10:44:52.128366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.319 qpair failed and we were unable to recover it. 00:36:07.319 [2024-07-14 10:44:52.128676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.319 [2024-07-14 10:44:52.128709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.319 qpair failed and we were unable to recover it. 00:36:07.319 [2024-07-14 10:44:52.128997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.319 [2024-07-14 10:44:52.129030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.319 qpair failed and we were unable to recover it. 00:36:07.319 [2024-07-14 10:44:52.129257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.319 [2024-07-14 10:44:52.129291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.319 qpair failed and we were unable to recover it. 00:36:07.319 [2024-07-14 10:44:52.129576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.319 [2024-07-14 10:44:52.129615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.319 qpair failed and we were unable to recover it. 00:36:07.319 [2024-07-14 10:44:52.129835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.319 [2024-07-14 10:44:52.129868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.319 qpair failed and we were unable to recover it. 00:36:07.319 [2024-07-14 10:44:52.130180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.319 [2024-07-14 10:44:52.130212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.319 qpair failed and we were unable to recover it. 00:36:07.319 [2024-07-14 10:44:52.130496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.319 [2024-07-14 10:44:52.130529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.319 qpair failed and we were unable to recover it. 
00:36:07.319 [2024-07-14 10:44:52.130820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.319 [2024-07-14 10:44:52.130853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.319 qpair failed and we were unable to recover it. 00:36:07.319 [2024-07-14 10:44:52.131134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.319 [2024-07-14 10:44:52.131166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.319 qpair failed and we were unable to recover it. 00:36:07.319 [2024-07-14 10:44:52.131379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.319 [2024-07-14 10:44:52.131413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.319 qpair failed and we were unable to recover it. 00:36:07.319 [2024-07-14 10:44:52.131602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.319 [2024-07-14 10:44:52.131635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.319 qpair failed and we were unable to recover it. 00:36:07.319 [2024-07-14 10:44:52.131845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.319 [2024-07-14 10:44:52.131878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.319 qpair failed and we were unable to recover it. 00:36:07.319 [2024-07-14 10:44:52.132165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.319 [2024-07-14 10:44:52.132197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.319 qpair failed and we were unable to recover it. 00:36:07.319 [2024-07-14 10:44:52.132488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.319 [2024-07-14 10:44:52.132522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.319 qpair failed and we were unable to recover it. 00:36:07.319 [2024-07-14 10:44:52.132798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.319 [2024-07-14 10:44:52.132830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.319 qpair failed and we were unable to recover it. 00:36:07.319 [2024-07-14 10:44:52.133151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.319 [2024-07-14 10:44:52.133184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.319 qpair failed and we were unable to recover it. 00:36:07.320 [2024-07-14 10:44:52.133463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.320 [2024-07-14 10:44:52.133497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.320 qpair failed and we were unable to recover it. 
00:36:07.320 [2024-07-14 10:44:52.133815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.320 [2024-07-14 10:44:52.133848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.320 qpair failed and we were unable to recover it. 00:36:07.320 [2024-07-14 10:44:52.134103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.320 [2024-07-14 10:44:52.134135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.320 qpair failed and we were unable to recover it. 00:36:07.320 [2024-07-14 10:44:52.134341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.320 [2024-07-14 10:44:52.134376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.320 qpair failed and we were unable to recover it. 00:36:07.320 [2024-07-14 10:44:52.134591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.320 [2024-07-14 10:44:52.134624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.320 qpair failed and we were unable to recover it. 00:36:07.320 [2024-07-14 10:44:52.134815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.320 [2024-07-14 10:44:52.134847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.320 qpair failed and we were unable to recover it. 00:36:07.320 [2024-07-14 10:44:52.135133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.320 [2024-07-14 10:44:52.135166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.320 qpair failed and we were unable to recover it. 00:36:07.320 [2024-07-14 10:44:52.135456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.320 [2024-07-14 10:44:52.135490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.320 qpair failed and we were unable to recover it. 00:36:07.320 [2024-07-14 10:44:52.135681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.320 [2024-07-14 10:44:52.135713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.320 qpair failed and we were unable to recover it. 00:36:07.320 [2024-07-14 10:44:52.135997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.320 [2024-07-14 10:44:52.136029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.320 qpair failed and we were unable to recover it. 00:36:07.320 [2024-07-14 10:44:52.136317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.320 [2024-07-14 10:44:52.136355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.320 qpair failed and we were unable to recover it. 
00:36:07.320 [2024-07-14 10:44:52.136615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.320 [2024-07-14 10:44:52.136646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.320 qpair failed and we were unable to recover it. 00:36:07.320 [2024-07-14 10:44:52.136945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.320 [2024-07-14 10:44:52.136975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.320 qpair failed and we were unable to recover it. 00:36:07.320 [2024-07-14 10:44:52.137261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.320 [2024-07-14 10:44:52.137292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.320 qpair failed and we were unable to recover it. 00:36:07.320 [2024-07-14 10:44:52.137591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.320 [2024-07-14 10:44:52.137624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.320 qpair failed and we were unable to recover it. 00:36:07.320 [2024-07-14 10:44:52.137898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.320 [2024-07-14 10:44:52.137928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.320 qpair failed and we were unable to recover it. 00:36:07.320 [2024-07-14 10:44:52.138162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.320 [2024-07-14 10:44:52.138193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.320 qpair failed and we were unable to recover it. 00:36:07.320 [2024-07-14 10:44:52.138470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.320 [2024-07-14 10:44:52.138502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.320 qpair failed and we were unable to recover it. 00:36:07.320 [2024-07-14 10:44:52.138762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.320 [2024-07-14 10:44:52.138793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.320 qpair failed and we were unable to recover it. 00:36:07.320 [2024-07-14 10:44:52.138984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.320 [2024-07-14 10:44:52.139014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.320 qpair failed and we were unable to recover it. 00:36:07.320 [2024-07-14 10:44:52.139313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.320 [2024-07-14 10:44:52.139346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.320 qpair failed and we were unable to recover it. 
00:36:07.320 [2024-07-14 10:44:52.139553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.320 [2024-07-14 10:44:52.139583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.320 qpair failed and we were unable to recover it. 00:36:07.320 [2024-07-14 10:44:52.139772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.320 [2024-07-14 10:44:52.139806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.320 qpair failed and we were unable to recover it. 00:36:07.320 [2024-07-14 10:44:52.139971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.320 [2024-07-14 10:44:52.140001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.320 qpair failed and we were unable to recover it. 00:36:07.320 [2024-07-14 10:44:52.140307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.320 [2024-07-14 10:44:52.140338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.320 qpair failed and we were unable to recover it. 00:36:07.320 [2024-07-14 10:44:52.140494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.320 [2024-07-14 10:44:52.140525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.320 qpair failed and we were unable to recover it. 00:36:07.320 [2024-07-14 10:44:52.140816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.320 [2024-07-14 10:44:52.140847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.320 qpair failed and we were unable to recover it. 00:36:07.320 [2024-07-14 10:44:52.141152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.320 [2024-07-14 10:44:52.141189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.320 qpair failed and we were unable to recover it. 00:36:07.320 [2024-07-14 10:44:52.141337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.320 [2024-07-14 10:44:52.141368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.320 qpair failed and we were unable to recover it. 00:36:07.320 [2024-07-14 10:44:52.141648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.320 [2024-07-14 10:44:52.141678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.320 qpair failed and we were unable to recover it. 00:36:07.320 [2024-07-14 10:44:52.141890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.320 [2024-07-14 10:44:52.141923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.320 qpair failed and we were unable to recover it. 
00:36:07.320 [2024-07-14 10:44:52.142173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.320 [2024-07-14 10:44:52.142204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.320 qpair failed and we were unable to recover it. 00:36:07.320 [2024-07-14 10:44:52.142484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.320 [2024-07-14 10:44:52.142516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.320 qpair failed and we were unable to recover it. 00:36:07.320 [2024-07-14 10:44:52.142734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.320 [2024-07-14 10:44:52.142766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.320 qpair failed and we were unable to recover it. 00:36:07.320 [2024-07-14 10:44:52.143025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.320 [2024-07-14 10:44:52.143055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.320 qpair failed and we were unable to recover it. 00:36:07.320 [2024-07-14 10:44:52.143320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.320 [2024-07-14 10:44:52.143352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.320 qpair failed and we were unable to recover it. 00:36:07.320 [2024-07-14 10:44:52.143659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.320 [2024-07-14 10:44:52.143690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.320 qpair failed and we were unable to recover it. 00:36:07.320 [2024-07-14 10:44:52.143990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.320 [2024-07-14 10:44:52.144021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.320 qpair failed and we were unable to recover it. 00:36:07.320 [2024-07-14 10:44:52.144248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.320 [2024-07-14 10:44:52.144281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.320 qpair failed and we were unable to recover it. 00:36:07.320 [2024-07-14 10:44:52.144548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.320 [2024-07-14 10:44:52.144579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.320 qpair failed and we were unable to recover it. 00:36:07.320 [2024-07-14 10:44:52.144781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.320 [2024-07-14 10:44:52.144817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.320 qpair failed and we were unable to recover it. 
00:36:07.321 [2024-07-14 10:44:52.145111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.321 [2024-07-14 10:44:52.145142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.321 qpair failed and we were unable to recover it. 00:36:07.321 [2024-07-14 10:44:52.145356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.321 [2024-07-14 10:44:52.145386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.321 qpair failed and we were unable to recover it. 00:36:07.321 [2024-07-14 10:44:52.145652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.321 [2024-07-14 10:44:52.145682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.321 qpair failed and we were unable to recover it. 00:36:07.321 [2024-07-14 10:44:52.145905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.321 [2024-07-14 10:44:52.145935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.321 qpair failed and we were unable to recover it. 00:36:07.321 [2024-07-14 10:44:52.146197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.321 [2024-07-14 10:44:52.146238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.321 qpair failed and we were unable to recover it. 00:36:07.321 [2024-07-14 10:44:52.146455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.321 [2024-07-14 10:44:52.146485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.321 qpair failed and we were unable to recover it. 00:36:07.321 [2024-07-14 10:44:52.146749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.321 [2024-07-14 10:44:52.146780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.321 qpair failed and we were unable to recover it. 00:36:07.321 [2024-07-14 10:44:52.146982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.321 [2024-07-14 10:44:52.147013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.321 qpair failed and we were unable to recover it. 00:36:07.321 [2024-07-14 10:44:52.147298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.321 [2024-07-14 10:44:52.147330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.321 qpair failed and we were unable to recover it. 00:36:07.321 [2024-07-14 10:44:52.147550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.321 [2024-07-14 10:44:52.147580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.321 qpair failed and we were unable to recover it. 
00:36:07.321 [2024-07-14 10:44:52.147738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.321 [2024-07-14 10:44:52.147769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.321 qpair failed and we were unable to recover it. 00:36:07.321 [2024-07-14 10:44:52.147978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.321 [2024-07-14 10:44:52.148008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.321 qpair failed and we were unable to recover it. 00:36:07.321 [2024-07-14 10:44:52.148294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.321 [2024-07-14 10:44:52.148326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.321 qpair failed and we were unable to recover it. 00:36:07.321 [2024-07-14 10:44:52.148575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.321 [2024-07-14 10:44:52.148606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.321 qpair failed and we were unable to recover it. 00:36:07.321 [2024-07-14 10:44:52.148913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.321 [2024-07-14 10:44:52.148944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.321 qpair failed and we were unable to recover it. 00:36:07.321 [2024-07-14 10:44:52.149087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.321 [2024-07-14 10:44:52.149118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.321 qpair failed and we were unable to recover it. 00:36:07.321 [2024-07-14 10:44:52.149405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.321 [2024-07-14 10:44:52.149437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.321 qpair failed and we were unable to recover it. 00:36:07.321 [2024-07-14 10:44:52.149642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.321 [2024-07-14 10:44:52.149673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.321 qpair failed and we were unable to recover it. 00:36:07.321 [2024-07-14 10:44:52.149975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.321 [2024-07-14 10:44:52.150006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.321 qpair failed and we were unable to recover it. 00:36:07.321 [2024-07-14 10:44:52.150267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.321 [2024-07-14 10:44:52.150299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.321 qpair failed and we were unable to recover it. 
00:36:07.321 [2024-07-14 10:44:52.150517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.321 [2024-07-14 10:44:52.150547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.321 qpair failed and we were unable to recover it. 00:36:07.321 [2024-07-14 10:44:52.150860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.321 [2024-07-14 10:44:52.150891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.321 qpair failed and we were unable to recover it. 00:36:07.321 [2024-07-14 10:44:52.151103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.321 [2024-07-14 10:44:52.151135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.321 qpair failed and we were unable to recover it. 00:36:07.321 [2024-07-14 10:44:52.151337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.321 [2024-07-14 10:44:52.151371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.321 qpair failed and we were unable to recover it. 00:36:07.321 [2024-07-14 10:44:52.151532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.321 [2024-07-14 10:44:52.151565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.321 qpair failed and we were unable to recover it. 00:36:07.321 [2024-07-14 10:44:52.151756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.321 [2024-07-14 10:44:52.151787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.321 qpair failed and we were unable to recover it. 00:36:07.321 [2024-07-14 10:44:52.152061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.321 [2024-07-14 10:44:52.152098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.321 qpair failed and we were unable to recover it. 00:36:07.321 [2024-07-14 10:44:52.152372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.321 [2024-07-14 10:44:52.152403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.321 qpair failed and we were unable to recover it. 00:36:07.321 [2024-07-14 10:44:52.152617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.321 [2024-07-14 10:44:52.152647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.321 qpair failed and we were unable to recover it. 00:36:07.321 [2024-07-14 10:44:52.152939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.321 [2024-07-14 10:44:52.152969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.321 qpair failed and we were unable to recover it. 
00:36:07.321 [2024-07-14 10:44:52.153240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.321 [2024-07-14 10:44:52.153272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.321 qpair failed and we were unable to recover it. 00:36:07.321 [2024-07-14 10:44:52.153477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.321 [2024-07-14 10:44:52.153508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.321 qpair failed and we were unable to recover it. 00:36:07.321 [2024-07-14 10:44:52.153694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.321 [2024-07-14 10:44:52.153725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.321 qpair failed and we were unable to recover it. 00:36:07.321 [2024-07-14 10:44:52.153979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.321 [2024-07-14 10:44:52.154009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.321 qpair failed and we were unable to recover it. 00:36:07.321 [2024-07-14 10:44:52.154144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.321 [2024-07-14 10:44:52.154174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.321 qpair failed and we were unable to recover it. 00:36:07.321 [2024-07-14 10:44:52.154481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.321 [2024-07-14 10:44:52.154513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.321 qpair failed and we were unable to recover it. 00:36:07.321 [2024-07-14 10:44:52.154778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.321 [2024-07-14 10:44:52.154809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.321 qpair failed and we were unable to recover it. 00:36:07.321 [2024-07-14 10:44:52.155031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.321 [2024-07-14 10:44:52.155061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.321 qpair failed and we were unable to recover it. 00:36:07.321 [2024-07-14 10:44:52.155324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.321 [2024-07-14 10:44:52.155359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.321 qpair failed and we were unable to recover it. 00:36:07.321 [2024-07-14 10:44:52.155627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.321 [2024-07-14 10:44:52.155659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.321 qpair failed and we were unable to recover it. 
00:36:07.321 [2024-07-14 10:44:52.155933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.321 [2024-07-14 10:44:52.155965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.321 qpair failed and we were unable to recover it. 00:36:07.322 [2024-07-14 10:44:52.156255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.322 [2024-07-14 10:44:52.156287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.322 qpair failed and we were unable to recover it. 00:36:07.322 [2024-07-14 10:44:52.156529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.322 [2024-07-14 10:44:52.156560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.322 qpair failed and we were unable to recover it. 00:36:07.322 [2024-07-14 10:44:52.156764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.322 [2024-07-14 10:44:52.156794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.322 qpair failed and we were unable to recover it. 00:36:07.322 [2024-07-14 10:44:52.157050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.322 [2024-07-14 10:44:52.157080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.322 qpair failed and we were unable to recover it. 00:36:07.322 [2024-07-14 10:44:52.157217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.322 [2024-07-14 10:44:52.157258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.322 qpair failed and we were unable to recover it. 00:36:07.322 [2024-07-14 10:44:52.157544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.322 [2024-07-14 10:44:52.157574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.322 qpair failed and we were unable to recover it. 00:36:07.322 [2024-07-14 10:44:52.157784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.322 [2024-07-14 10:44:52.157814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.322 qpair failed and we were unable to recover it. 00:36:07.322 [2024-07-14 10:44:52.158090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.322 [2024-07-14 10:44:52.158121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.322 qpair failed and we were unable to recover it. 00:36:07.322 [2024-07-14 10:44:52.158409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.322 [2024-07-14 10:44:52.158441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.322 qpair failed and we were unable to recover it. 
00:36:07.322 [2024-07-14 10:44:52.158757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.322 [2024-07-14 10:44:52.158787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.322 qpair failed and we were unable to recover it. 00:36:07.322 [2024-07-14 10:44:52.158995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.322 [2024-07-14 10:44:52.159026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.322 qpair failed and we were unable to recover it. 00:36:07.322 [2024-07-14 10:44:52.159250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.322 [2024-07-14 10:44:52.159283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.322 qpair failed and we were unable to recover it. 00:36:07.322 [2024-07-14 10:44:52.159573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.322 [2024-07-14 10:44:52.159604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.322 qpair failed and we were unable to recover it. 00:36:07.322 [2024-07-14 10:44:52.159759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.322 [2024-07-14 10:44:52.159790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.322 qpair failed and we were unable to recover it. 00:36:07.322 [2024-07-14 10:44:52.160082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.322 [2024-07-14 10:44:52.160113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.322 qpair failed and we were unable to recover it. 00:36:07.322 [2024-07-14 10:44:52.160318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.322 [2024-07-14 10:44:52.160349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.322 qpair failed and we were unable to recover it. 00:36:07.322 [2024-07-14 10:44:52.160574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.322 [2024-07-14 10:44:52.160605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.322 qpair failed and we were unable to recover it. 00:36:07.322 [2024-07-14 10:44:52.160739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.322 [2024-07-14 10:44:52.160770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.322 qpair failed and we were unable to recover it. 00:36:07.322 [2024-07-14 10:44:52.161094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.322 [2024-07-14 10:44:52.161124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.322 qpair failed and we were unable to recover it. 
00:36:07.322 [2024-07-14 10:44:52.161364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.322 [2024-07-14 10:44:52.161396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.322 qpair failed and we were unable to recover it. 00:36:07.322 [2024-07-14 10:44:52.161680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.322 [2024-07-14 10:44:52.161710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.322 qpair failed and we were unable to recover it. 00:36:07.322 [2024-07-14 10:44:52.161984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.322 [2024-07-14 10:44:52.162014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.322 qpair failed and we were unable to recover it. 00:36:07.322 [2024-07-14 10:44:52.162222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.322 [2024-07-14 10:44:52.162262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.322 qpair failed and we were unable to recover it. 00:36:07.322 [2024-07-14 10:44:52.162494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.322 [2024-07-14 10:44:52.162526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.322 qpair failed and we were unable to recover it. 00:36:07.322 [2024-07-14 10:44:52.162735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.322 [2024-07-14 10:44:52.162766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.322 qpair failed and we were unable to recover it. 00:36:07.322 [2024-07-14 10:44:52.162903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.322 [2024-07-14 10:44:52.162939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.322 qpair failed and we were unable to recover it. 00:36:07.322 [2024-07-14 10:44:52.163139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.322 [2024-07-14 10:44:52.163169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.322 qpair failed and we were unable to recover it. 00:36:07.322 [2024-07-14 10:44:52.163471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.322 [2024-07-14 10:44:52.163503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.322 qpair failed and we were unable to recover it. 00:36:07.322 [2024-07-14 10:44:52.163768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.322 [2024-07-14 10:44:52.163798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.322 qpair failed and we were unable to recover it. 
00:36:07.322 [2024-07-14 10:44:52.164010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.322 [2024-07-14 10:44:52.164041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.322 qpair failed and we were unable to recover it. 00:36:07.322 [2024-07-14 10:44:52.164317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.322 [2024-07-14 10:44:52.164349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.322 qpair failed and we were unable to recover it. 00:36:07.322 [2024-07-14 10:44:52.164543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.322 [2024-07-14 10:44:52.164574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.322 qpair failed and we were unable to recover it. 00:36:07.322 [2024-07-14 10:44:52.164762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.322 [2024-07-14 10:44:52.164792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.322 qpair failed and we were unable to recover it. 00:36:07.322 [2024-07-14 10:44:52.165011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.322 [2024-07-14 10:44:52.165041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.322 qpair failed and we were unable to recover it. 00:36:07.322 [2024-07-14 10:44:52.165284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.322 [2024-07-14 10:44:52.165316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.322 qpair failed and we were unable to recover it. 00:36:07.322 [2024-07-14 10:44:52.165515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.322 [2024-07-14 10:44:52.165548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.322 qpair failed and we were unable to recover it. 00:36:07.322 [2024-07-14 10:44:52.165754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.322 [2024-07-14 10:44:52.165785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.322 qpair failed and we were unable to recover it. 00:36:07.322 [2024-07-14 10:44:52.166072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.322 [2024-07-14 10:44:52.166103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.322 qpair failed and we were unable to recover it. 00:36:07.322 [2024-07-14 10:44:52.166404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.322 [2024-07-14 10:44:52.166435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.322 qpair failed and we were unable to recover it. 
00:36:07.323 [2024-07-14 10:44:52.166652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.323 [2024-07-14 10:44:52.166682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.323 qpair failed and we were unable to recover it. 00:36:07.323 [2024-07-14 10:44:52.166896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.323 [2024-07-14 10:44:52.166927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.323 qpair failed and we were unable to recover it. 00:36:07.323 [2024-07-14 10:44:52.167260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.323 [2024-07-14 10:44:52.167292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.323 qpair failed and we were unable to recover it. 00:36:07.323 [2024-07-14 10:44:52.167578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.323 [2024-07-14 10:44:52.167608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.323 qpair failed and we were unable to recover it. 00:36:07.323 [2024-07-14 10:44:52.167898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.323 [2024-07-14 10:44:52.167928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.323 qpair failed and we were unable to recover it. 00:36:07.323 [2024-07-14 10:44:52.168250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.323 [2024-07-14 10:44:52.168282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.323 qpair failed and we were unable to recover it. 00:36:07.323 [2024-07-14 10:44:52.168540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.323 [2024-07-14 10:44:52.168570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.323 qpair failed and we were unable to recover it. 00:36:07.323 [2024-07-14 10:44:52.168847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.323 [2024-07-14 10:44:52.168877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.323 qpair failed and we were unable to recover it. 00:36:07.323 [2024-07-14 10:44:52.169135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.323 [2024-07-14 10:44:52.169166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.323 qpair failed and we were unable to recover it. 00:36:07.323 [2024-07-14 10:44:52.169386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.323 [2024-07-14 10:44:52.169418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.323 qpair failed and we were unable to recover it. 
00:36:07.323 [2024-07-14 10:44:52.169616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.323 [2024-07-14 10:44:52.169646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.323 qpair failed and we were unable to recover it. 00:36:07.323 [2024-07-14 10:44:52.169904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.323 [2024-07-14 10:44:52.169935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.323 qpair failed and we were unable to recover it. 00:36:07.323 [2024-07-14 10:44:52.170248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.323 [2024-07-14 10:44:52.170280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.323 qpair failed and we were unable to recover it. 00:36:07.323 [2024-07-14 10:44:52.170580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.323 [2024-07-14 10:44:52.170612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.323 qpair failed and we were unable to recover it. 00:36:07.323 [2024-07-14 10:44:52.170846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.323 [2024-07-14 10:44:52.170876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.323 qpair failed and we were unable to recover it. 00:36:07.323 [2024-07-14 10:44:52.171092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.323 [2024-07-14 10:44:52.171123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.323 qpair failed and we were unable to recover it. 00:36:07.323 [2024-07-14 10:44:52.171263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.323 [2024-07-14 10:44:52.171296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.323 qpair failed and we were unable to recover it. 00:36:07.323 [2024-07-14 10:44:52.171434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.323 [2024-07-14 10:44:52.171463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.323 qpair failed and we were unable to recover it. 00:36:07.323 [2024-07-14 10:44:52.171734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.323 [2024-07-14 10:44:52.171764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.323 qpair failed and we were unable to recover it. 00:36:07.323 [2024-07-14 10:44:52.171979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.323 [2024-07-14 10:44:52.172010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.323 qpair failed and we were unable to recover it. 
00:36:07.323 [2024-07-14 10:44:52.172220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.323 [2024-07-14 10:44:52.172264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.323 qpair failed and we were unable to recover it. 00:36:07.323 [2024-07-14 10:44:52.172475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.323 [2024-07-14 10:44:52.172506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.323 qpair failed and we were unable to recover it. 00:36:07.323 [2024-07-14 10:44:52.172657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.323 [2024-07-14 10:44:52.172687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.323 qpair failed and we were unable to recover it. 00:36:07.323 [2024-07-14 10:44:52.172971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.323 [2024-07-14 10:44:52.173000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.323 qpair failed and we were unable to recover it. 00:36:07.323 [2024-07-14 10:44:52.173306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.323 [2024-07-14 10:44:52.173337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.323 qpair failed and we were unable to recover it. 00:36:07.323 [2024-07-14 10:44:52.173612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.323 [2024-07-14 10:44:52.173642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.323 qpair failed and we were unable to recover it. 00:36:07.323 [2024-07-14 10:44:52.173874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.323 [2024-07-14 10:44:52.173911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.323 qpair failed and we were unable to recover it. 00:36:07.323 [2024-07-14 10:44:52.174108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.323 [2024-07-14 10:44:52.174139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.323 qpair failed and we were unable to recover it. 00:36:07.323 [2024-07-14 10:44:52.174422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.323 [2024-07-14 10:44:52.174453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.323 qpair failed and we were unable to recover it. 00:36:07.323 [2024-07-14 10:44:52.174677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.323 [2024-07-14 10:44:52.174707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.323 qpair failed and we were unable to recover it. 
00:36:07.323 [2024-07-14 10:44:52.174965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.323 [2024-07-14 10:44:52.174994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.323 qpair failed and we were unable to recover it. 00:36:07.323 [2024-07-14 10:44:52.175297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.323 [2024-07-14 10:44:52.175328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.323 qpair failed and we were unable to recover it. 00:36:07.323 [2024-07-14 10:44:52.175518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.323 [2024-07-14 10:44:52.175548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.323 qpair failed and we were unable to recover it. 00:36:07.323 [2024-07-14 10:44:52.175812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.323 [2024-07-14 10:44:52.175842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.323 qpair failed and we were unable to recover it. 00:36:07.323 [2024-07-14 10:44:52.176105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.323 [2024-07-14 10:44:52.176136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.323 qpair failed and we were unable to recover it. 00:36:07.323 [2024-07-14 10:44:52.176368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.323 [2024-07-14 10:44:52.176400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.323 qpair failed and we were unable to recover it. 00:36:07.323 [2024-07-14 10:44:52.176648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.323 [2024-07-14 10:44:52.176679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.323 qpair failed and we were unable to recover it. 00:36:07.323 [2024-07-14 10:44:52.176816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.323 [2024-07-14 10:44:52.176846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.324 qpair failed and we were unable to recover it. 00:36:07.324 [2024-07-14 10:44:52.177133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.324 [2024-07-14 10:44:52.177163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.324 qpair failed and we were unable to recover it. 00:36:07.324 [2024-07-14 10:44:52.177381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.324 [2024-07-14 10:44:52.177413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.324 qpair failed and we were unable to recover it. 
00:36:07.324 [2024-07-14 10:44:52.177681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.324 [2024-07-14 10:44:52.177711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.324 qpair failed and we were unable to recover it. 00:36:07.324 [2024-07-14 10:44:52.177852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.324 [2024-07-14 10:44:52.177882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.324 qpair failed and we were unable to recover it. 00:36:07.324 [2024-07-14 10:44:52.178137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.324 [2024-07-14 10:44:52.178166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.324 qpair failed and we were unable to recover it. 00:36:07.324 [2024-07-14 10:44:52.178382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.324 [2024-07-14 10:44:52.178414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.324 qpair failed and we were unable to recover it. 00:36:07.324 [2024-07-14 10:44:52.178624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.324 [2024-07-14 10:44:52.178655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.324 qpair failed and we were unable to recover it. 00:36:07.324 [2024-07-14 10:44:52.178866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.324 [2024-07-14 10:44:52.178896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.324 qpair failed and we were unable to recover it. 00:36:07.324 [2024-07-14 10:44:52.179160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.324 [2024-07-14 10:44:52.179190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.324 qpair failed and we were unable to recover it. 00:36:07.324 [2024-07-14 10:44:52.179401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.324 [2024-07-14 10:44:52.179433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.324 qpair failed and we were unable to recover it. 00:36:07.324 [2024-07-14 10:44:52.179707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.324 [2024-07-14 10:44:52.179737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.324 qpair failed and we were unable to recover it. 00:36:07.324 [2024-07-14 10:44:52.179959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.324 [2024-07-14 10:44:52.179989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.324 qpair failed and we were unable to recover it. 
00:36:07.324 [2024-07-14 10:44:52.180252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.324 [2024-07-14 10:44:52.180284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.324 qpair failed and we were unable to recover it. 00:36:07.324 [2024-07-14 10:44:52.180608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.324 [2024-07-14 10:44:52.180640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.324 qpair failed and we were unable to recover it. 00:36:07.324 [2024-07-14 10:44:52.180890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.324 [2024-07-14 10:44:52.180920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.324 qpair failed and we were unable to recover it. 00:36:07.324 [2024-07-14 10:44:52.181116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.324 [2024-07-14 10:44:52.181146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.324 qpair failed and we were unable to recover it. 00:36:07.324 [2024-07-14 10:44:52.181295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.324 [2024-07-14 10:44:52.181326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.324 qpair failed and we were unable to recover it. 00:36:07.324 [2024-07-14 10:44:52.181530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.324 [2024-07-14 10:44:52.181559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.324 qpair failed and we were unable to recover it. 00:36:07.324 [2024-07-14 10:44:52.181788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.324 [2024-07-14 10:44:52.181819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.324 qpair failed and we were unable to recover it. 00:36:07.324 [2024-07-14 10:44:52.182073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.324 [2024-07-14 10:44:52.182104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.324 qpair failed and we were unable to recover it. 00:36:07.324 [2024-07-14 10:44:52.182246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.324 [2024-07-14 10:44:52.182277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.324 qpair failed and we were unable to recover it. 00:36:07.324 [2024-07-14 10:44:52.182481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.324 [2024-07-14 10:44:52.182510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.324 qpair failed and we were unable to recover it. 
00:36:07.324 [2024-07-14 10:44:52.182716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.324 [2024-07-14 10:44:52.182745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.324 qpair failed and we were unable to recover it. 00:36:07.324 [2024-07-14 10:44:52.182970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.324 [2024-07-14 10:44:52.183001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.324 qpair failed and we were unable to recover it. 00:36:07.324 [2024-07-14 10:44:52.183302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.324 [2024-07-14 10:44:52.183333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.324 qpair failed and we were unable to recover it. 00:36:07.324 [2024-07-14 10:44:52.183611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.324 [2024-07-14 10:44:52.183641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.324 qpair failed and we were unable to recover it. 00:36:07.324 [2024-07-14 10:44:52.183830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.324 [2024-07-14 10:44:52.183860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.324 qpair failed and we were unable to recover it. 00:36:07.324 [2024-07-14 10:44:52.184046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.324 [2024-07-14 10:44:52.184076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.324 qpair failed and we were unable to recover it. 00:36:07.324 [2024-07-14 10:44:52.184371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.324 [2024-07-14 10:44:52.184409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.324 qpair failed and we were unable to recover it. 00:36:07.324 [2024-07-14 10:44:52.184575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.324 [2024-07-14 10:44:52.184605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.324 qpair failed and we were unable to recover it. 00:36:07.324 [2024-07-14 10:44:52.184945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.324 [2024-07-14 10:44:52.184976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.324 qpair failed and we were unable to recover it. 00:36:07.324 [2024-07-14 10:44:52.185279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.324 [2024-07-14 10:44:52.185310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.324 qpair failed and we were unable to recover it. 
00:36:07.324 [2024-07-14 10:44:52.185603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.324 [2024-07-14 10:44:52.185634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.324 qpair failed and we were unable to recover it. 00:36:07.324 [2024-07-14 10:44:52.185931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.324 [2024-07-14 10:44:52.185961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.324 qpair failed and we were unable to recover it. 00:36:07.324 [2024-07-14 10:44:52.186117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.324 [2024-07-14 10:44:52.186147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.324 qpair failed and we were unable to recover it. 00:36:07.324 [2024-07-14 10:44:52.186345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.324 [2024-07-14 10:44:52.186377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.324 qpair failed and we were unable to recover it. 00:36:07.324 [2024-07-14 10:44:52.186514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.324 [2024-07-14 10:44:52.186543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.324 qpair failed and we were unable to recover it. 00:36:07.324 [2024-07-14 10:44:52.186802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.324 [2024-07-14 10:44:52.186832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.324 qpair failed and we were unable to recover it. 00:36:07.324 [2024-07-14 10:44:52.187027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.324 [2024-07-14 10:44:52.187058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.324 qpair failed and we were unable to recover it. 00:36:07.324 [2024-07-14 10:44:52.187321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.324 [2024-07-14 10:44:52.187352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.324 qpair failed and we were unable to recover it. 00:36:07.324 [2024-07-14 10:44:52.187559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.325 [2024-07-14 10:44:52.187589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.325 qpair failed and we were unable to recover it. 00:36:07.325 [2024-07-14 10:44:52.187829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.325 [2024-07-14 10:44:52.187859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.325 qpair failed and we were unable to recover it. 
00:36:07.330 [2024-07-14 10:44:52.241436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.330 [2024-07-14 10:44:52.241468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.330 qpair failed and we were unable to recover it. 00:36:07.330 [2024-07-14 10:44:52.241620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.330 [2024-07-14 10:44:52.241650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.330 qpair failed and we were unable to recover it. 00:36:07.330 [2024-07-14 10:44:52.241931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.330 [2024-07-14 10:44:52.241961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.330 qpair failed and we were unable to recover it. 00:36:07.330 [2024-07-14 10:44:52.242180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.330 [2024-07-14 10:44:52.242210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.330 qpair failed and we were unable to recover it. 00:36:07.330 [2024-07-14 10:44:52.242450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.330 [2024-07-14 10:44:52.242480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.330 qpair failed and we were unable to recover it. 00:36:07.330 [2024-07-14 10:44:52.242615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.330 [2024-07-14 10:44:52.242646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.330 qpair failed and we were unable to recover it. 00:36:07.330 [2024-07-14 10:44:52.242970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.330 [2024-07-14 10:44:52.243000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.330 qpair failed and we were unable to recover it. 00:36:07.330 [2024-07-14 10:44:52.243183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.330 [2024-07-14 10:44:52.243213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.330 qpair failed and we were unable to recover it. 00:36:07.330 [2024-07-14 10:44:52.243508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.330 [2024-07-14 10:44:52.243538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.330 qpair failed and we were unable to recover it. 00:36:07.330 [2024-07-14 10:44:52.243754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.330 [2024-07-14 10:44:52.243785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.330 qpair failed and we were unable to recover it. 
00:36:07.330 [2024-07-14 10:44:52.244053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.330 [2024-07-14 10:44:52.244082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.330 qpair failed and we were unable to recover it. 00:36:07.330 [2024-07-14 10:44:52.244376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.330 [2024-07-14 10:44:52.244408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.330 qpair failed and we were unable to recover it. 00:36:07.330 [2024-07-14 10:44:52.244573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.330 [2024-07-14 10:44:52.244603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.330 qpair failed and we were unable to recover it. 00:36:07.330 [2024-07-14 10:44:52.244885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.330 [2024-07-14 10:44:52.244916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.330 qpair failed and we were unable to recover it. 00:36:07.330 [2024-07-14 10:44:52.245209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.330 [2024-07-14 10:44:52.245267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.330 qpair failed and we were unable to recover it. 00:36:07.330 [2024-07-14 10:44:52.245535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.330 [2024-07-14 10:44:52.245565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.330 qpair failed and we were unable to recover it. 00:36:07.330 [2024-07-14 10:44:52.245773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.330 [2024-07-14 10:44:52.245802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.330 qpair failed and we were unable to recover it. 00:36:07.330 [2024-07-14 10:44:52.246071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.330 [2024-07-14 10:44:52.246101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.330 qpair failed and we were unable to recover it. 00:36:07.330 [2024-07-14 10:44:52.246372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.330 [2024-07-14 10:44:52.246404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.330 qpair failed and we were unable to recover it. 00:36:07.330 [2024-07-14 10:44:52.246556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.330 [2024-07-14 10:44:52.246586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.330 qpair failed and we were unable to recover it. 
00:36:07.330 [2024-07-14 10:44:52.246796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.330 [2024-07-14 10:44:52.246827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.330 qpair failed and we were unable to recover it. 00:36:07.330 [2024-07-14 10:44:52.247040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.330 [2024-07-14 10:44:52.247070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.330 qpair failed and we were unable to recover it. 00:36:07.330 [2024-07-14 10:44:52.247287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.330 [2024-07-14 10:44:52.247318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.330 qpair failed and we were unable to recover it. 00:36:07.330 [2024-07-14 10:44:52.247584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.330 [2024-07-14 10:44:52.247615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.330 qpair failed and we were unable to recover it. 00:36:07.330 [2024-07-14 10:44:52.247778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.330 [2024-07-14 10:44:52.247809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.330 qpair failed and we were unable to recover it. 00:36:07.330 [2024-07-14 10:44:52.248090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.330 [2024-07-14 10:44:52.248121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.330 qpair failed and we were unable to recover it. 00:36:07.330 [2024-07-14 10:44:52.248319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.330 [2024-07-14 10:44:52.248356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.330 qpair failed and we were unable to recover it. 00:36:07.330 [2024-07-14 10:44:52.248677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.330 [2024-07-14 10:44:52.248707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.330 qpair failed and we were unable to recover it. 00:36:07.330 [2024-07-14 10:44:52.248916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.330 [2024-07-14 10:44:52.248946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.330 qpair failed and we were unable to recover it. 00:36:07.330 [2024-07-14 10:44:52.249259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.330 [2024-07-14 10:44:52.249290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.330 qpair failed and we were unable to recover it. 
00:36:07.330 [2024-07-14 10:44:52.249583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.330 [2024-07-14 10:44:52.249614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.330 qpair failed and we were unable to recover it. 00:36:07.330 [2024-07-14 10:44:52.249853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.330 [2024-07-14 10:44:52.249882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.330 qpair failed and we were unable to recover it. 00:36:07.330 [2024-07-14 10:44:52.250016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.330 [2024-07-14 10:44:52.250046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.330 qpair failed and we were unable to recover it. 00:36:07.330 [2024-07-14 10:44:52.250265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.330 [2024-07-14 10:44:52.250296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.330 qpair failed and we were unable to recover it. 00:36:07.330 [2024-07-14 10:44:52.250510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.330 [2024-07-14 10:44:52.250540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.330 qpair failed and we were unable to recover it. 00:36:07.330 [2024-07-14 10:44:52.250770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.330 [2024-07-14 10:44:52.250802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.330 qpair failed and we were unable to recover it. 00:36:07.330 [2024-07-14 10:44:52.251036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.330 [2024-07-14 10:44:52.251066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.330 qpair failed and we were unable to recover it. 00:36:07.330 [2024-07-14 10:44:52.251336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.330 [2024-07-14 10:44:52.251368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.330 qpair failed and we were unable to recover it. 00:36:07.330 [2024-07-14 10:44:52.251706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.331 [2024-07-14 10:44:52.251735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.331 qpair failed and we were unable to recover it. 00:36:07.331 [2024-07-14 10:44:52.251950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.331 [2024-07-14 10:44:52.251980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.331 qpair failed and we were unable to recover it. 
00:36:07.331 [2024-07-14 10:44:52.252219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.331 [2024-07-14 10:44:52.252264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.331 qpair failed and we were unable to recover it. 00:36:07.331 [2024-07-14 10:44:52.252408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.331 [2024-07-14 10:44:52.252439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.331 qpair failed and we were unable to recover it. 00:36:07.331 [2024-07-14 10:44:52.252591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.331 [2024-07-14 10:44:52.252621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.331 qpair failed and we were unable to recover it. 00:36:07.331 [2024-07-14 10:44:52.252953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.331 [2024-07-14 10:44:52.252983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.331 qpair failed and we were unable to recover it. 00:36:07.331 [2024-07-14 10:44:52.253282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.331 [2024-07-14 10:44:52.253315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.331 qpair failed and we were unable to recover it. 00:36:07.331 [2024-07-14 10:44:52.253575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.331 [2024-07-14 10:44:52.253606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.331 qpair failed and we were unable to recover it. 00:36:07.331 [2024-07-14 10:44:52.253739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.331 [2024-07-14 10:44:52.253770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.331 qpair failed and we were unable to recover it. 00:36:07.331 [2024-07-14 10:44:52.254021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.331 [2024-07-14 10:44:52.254051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.331 qpair failed and we were unable to recover it. 00:36:07.331 [2024-07-14 10:44:52.254289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.331 [2024-07-14 10:44:52.254320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.331 qpair failed and we were unable to recover it. 00:36:07.331 [2024-07-14 10:44:52.254575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.331 [2024-07-14 10:44:52.254605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.331 qpair failed and we were unable to recover it. 
00:36:07.331 [2024-07-14 10:44:52.254916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.331 [2024-07-14 10:44:52.254946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.331 qpair failed and we were unable to recover it. 00:36:07.331 [2024-07-14 10:44:52.255153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.331 [2024-07-14 10:44:52.255184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.331 qpair failed and we were unable to recover it. 00:36:07.331 [2024-07-14 10:44:52.255480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.331 [2024-07-14 10:44:52.255511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.331 qpair failed and we were unable to recover it. 00:36:07.331 [2024-07-14 10:44:52.255716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.331 [2024-07-14 10:44:52.255752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.331 qpair failed and we were unable to recover it. 00:36:07.331 [2024-07-14 10:44:52.256103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.331 [2024-07-14 10:44:52.256134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.331 qpair failed and we were unable to recover it. 00:36:07.331 [2024-07-14 10:44:52.256355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.331 [2024-07-14 10:44:52.256386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.331 qpair failed and we were unable to recover it. 00:36:07.331 [2024-07-14 10:44:52.256643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.331 [2024-07-14 10:44:52.256674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.331 qpair failed and we were unable to recover it. 00:36:07.331 [2024-07-14 10:44:52.256941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.331 [2024-07-14 10:44:52.256971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.331 qpair failed and we were unable to recover it. 00:36:07.331 [2024-07-14 10:44:52.257106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.331 [2024-07-14 10:44:52.257137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.331 qpair failed and we were unable to recover it. 00:36:07.331 [2024-07-14 10:44:52.257405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.331 [2024-07-14 10:44:52.257436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.331 qpair failed and we were unable to recover it. 
00:36:07.331 [2024-07-14 10:44:52.257649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.331 [2024-07-14 10:44:52.257680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.331 qpair failed and we were unable to recover it. 00:36:07.331 [2024-07-14 10:44:52.257871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.331 [2024-07-14 10:44:52.257901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.331 qpair failed and we were unable to recover it. 00:36:07.331 [2024-07-14 10:44:52.258159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.331 [2024-07-14 10:44:52.258189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.331 qpair failed and we were unable to recover it. 00:36:07.331 [2024-07-14 10:44:52.258436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.331 [2024-07-14 10:44:52.258467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.331 qpair failed and we were unable to recover it. 00:36:07.331 [2024-07-14 10:44:52.258696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.331 [2024-07-14 10:44:52.258731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.331 qpair failed and we were unable to recover it. 00:36:07.331 [2024-07-14 10:44:52.258945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.331 [2024-07-14 10:44:52.258977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.331 qpair failed and we were unable to recover it. 00:36:07.331 [2024-07-14 10:44:52.259277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.331 [2024-07-14 10:44:52.259309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.331 qpair failed and we were unable to recover it. 00:36:07.331 [2024-07-14 10:44:52.259573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.331 [2024-07-14 10:44:52.259604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.331 qpair failed and we were unable to recover it. 00:36:07.331 [2024-07-14 10:44:52.259743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.331 [2024-07-14 10:44:52.259773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.331 qpair failed and we were unable to recover it. 00:36:07.331 [2024-07-14 10:44:52.260053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.331 [2024-07-14 10:44:52.260083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.331 qpair failed and we were unable to recover it. 
00:36:07.331 [2024-07-14 10:44:52.260243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.331 [2024-07-14 10:44:52.260275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.331 qpair failed and we were unable to recover it. 00:36:07.331 [2024-07-14 10:44:52.260500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.331 [2024-07-14 10:44:52.260530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.331 qpair failed and we were unable to recover it. 00:36:07.331 [2024-07-14 10:44:52.260685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.331 [2024-07-14 10:44:52.260715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.331 qpair failed and we were unable to recover it. 00:36:07.331 [2024-07-14 10:44:52.260933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.331 [2024-07-14 10:44:52.260963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.331 qpair failed and we were unable to recover it. 00:36:07.331 [2024-07-14 10:44:52.261265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.332 [2024-07-14 10:44:52.261298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.332 qpair failed and we were unable to recover it. 00:36:07.332 [2024-07-14 10:44:52.261514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.332 [2024-07-14 10:44:52.261545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.332 qpair failed and we were unable to recover it. 00:36:07.332 [2024-07-14 10:44:52.261832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.332 [2024-07-14 10:44:52.261862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.332 qpair failed and we were unable to recover it. 00:36:07.332 [2024-07-14 10:44:52.262069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.332 [2024-07-14 10:44:52.262099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.332 qpair failed and we were unable to recover it. 00:36:07.332 [2024-07-14 10:44:52.262403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.332 [2024-07-14 10:44:52.262434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.332 qpair failed and we were unable to recover it. 00:36:07.332 [2024-07-14 10:44:52.262634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.332 [2024-07-14 10:44:52.262664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.332 qpair failed and we were unable to recover it. 
00:36:07.332 [2024-07-14 10:44:52.262911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.332 [2024-07-14 10:44:52.262942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.332 qpair failed and we were unable to recover it. 00:36:07.332 [2024-07-14 10:44:52.263238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.332 [2024-07-14 10:44:52.263271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.332 qpair failed and we were unable to recover it. 00:36:07.332 [2024-07-14 10:44:52.263418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.332 [2024-07-14 10:44:52.263450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.332 qpair failed and we were unable to recover it. 00:36:07.332 [2024-07-14 10:44:52.263657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.332 [2024-07-14 10:44:52.263687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.332 qpair failed and we were unable to recover it. 00:36:07.609 [2024-07-14 10:44:52.263979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.609 [2024-07-14 10:44:52.264010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.609 qpair failed and we were unable to recover it. 00:36:07.609 [2024-07-14 10:44:52.264143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.609 [2024-07-14 10:44:52.264175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.609 qpair failed and we were unable to recover it. 00:36:07.609 [2024-07-14 10:44:52.265590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.609 [2024-07-14 10:44:52.265641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.609 qpair failed and we were unable to recover it. 00:36:07.609 [2024-07-14 10:44:52.265946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.609 [2024-07-14 10:44:52.265979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.609 qpair failed and we were unable to recover it. 00:36:07.609 [2024-07-14 10:44:52.266252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.609 [2024-07-14 10:44:52.266285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.609 qpair failed and we were unable to recover it. 00:36:07.609 [2024-07-14 10:44:52.266439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.609 [2024-07-14 10:44:52.266470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.609 qpair failed and we were unable to recover it. 
00:36:07.609 [2024-07-14 10:44:52.266650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.609 [2024-07-14 10:44:52.266681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.609 qpair failed and we were unable to recover it. 00:36:07.609 [2024-07-14 10:44:52.266965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.609 [2024-07-14 10:44:52.266996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.609 qpair failed and we were unable to recover it. 00:36:07.609 [2024-07-14 10:44:52.267138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.609 [2024-07-14 10:44:52.267170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.609 qpair failed and we were unable to recover it. 00:36:07.609 [2024-07-14 10:44:52.267359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.609 [2024-07-14 10:44:52.267415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.609 qpair failed and we were unable to recover it. 00:36:07.609 [2024-07-14 10:44:52.267562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.609 [2024-07-14 10:44:52.267593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.609 qpair failed and we were unable to recover it. 00:36:07.609 [2024-07-14 10:44:52.267802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.609 [2024-07-14 10:44:52.267832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.609 qpair failed and we were unable to recover it. 00:36:07.609 [2024-07-14 10:44:52.268090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.609 [2024-07-14 10:44:52.268121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.609 qpair failed and we were unable to recover it. 00:36:07.609 [2024-07-14 10:44:52.268265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.609 [2024-07-14 10:44:52.268297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.609 qpair failed and we were unable to recover it. 00:36:07.609 [2024-07-14 10:44:52.268450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.609 [2024-07-14 10:44:52.268480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.609 qpair failed and we were unable to recover it. 00:36:07.609 [2024-07-14 10:44:52.268777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.609 [2024-07-14 10:44:52.268809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.609 qpair failed and we were unable to recover it. 
00:36:07.609 [2024-07-14 10:44:52.269015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.609 [2024-07-14 10:44:52.269046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.609 qpair failed and we were unable to recover it. 00:36:07.609 [2024-07-14 10:44:52.269307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.609 [2024-07-14 10:44:52.269342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.609 qpair failed and we were unable to recover it. 00:36:07.609 [2024-07-14 10:44:52.269555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.609 [2024-07-14 10:44:52.269587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.609 qpair failed and we were unable to recover it. 00:36:07.609 [2024-07-14 10:44:52.269720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.609 [2024-07-14 10:44:52.269750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.609 qpair failed and we were unable to recover it. 00:36:07.609 [2024-07-14 10:44:52.269954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.609 [2024-07-14 10:44:52.269984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.609 qpair failed and we were unable to recover it. 00:36:07.609 [2024-07-14 10:44:52.270249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.609 [2024-07-14 10:44:52.270282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.609 qpair failed and we were unable to recover it. 00:36:07.609 [2024-07-14 10:44:52.270517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.609 [2024-07-14 10:44:52.270548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.609 qpair failed and we were unable to recover it. 00:36:07.609 [2024-07-14 10:44:52.270677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.609 [2024-07-14 10:44:52.270707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.609 qpair failed and we were unable to recover it. 00:36:07.609 [2024-07-14 10:44:52.270897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.609 [2024-07-14 10:44:52.270927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.609 qpair failed and we were unable to recover it. 00:36:07.609 [2024-07-14 10:44:52.271129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.609 [2024-07-14 10:44:52.271160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.609 qpair failed and we were unable to recover it. 
00:36:07.609 [2024-07-14 10:44:52.271347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.609 [2024-07-14 10:44:52.271378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.609 qpair failed and we were unable to recover it. 00:36:07.609 [2024-07-14 10:44:52.271541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.609 [2024-07-14 10:44:52.271571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.609 qpair failed and we were unable to recover it. 00:36:07.609 [2024-07-14 10:44:52.271712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.609 [2024-07-14 10:44:52.271744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.609 qpair failed and we were unable to recover it. 00:36:07.609 [2024-07-14 10:44:52.271961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.609 [2024-07-14 10:44:52.271991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.609 qpair failed and we were unable to recover it. 00:36:07.609 [2024-07-14 10:44:52.272276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.609 [2024-07-14 10:44:52.272307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.609 qpair failed and we were unable to recover it. 00:36:07.609 [2024-07-14 10:44:52.272507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.609 [2024-07-14 10:44:52.272537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.609 qpair failed and we were unable to recover it. 00:36:07.609 [2024-07-14 10:44:52.272684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.609 [2024-07-14 10:44:52.272715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.609 qpair failed and we were unable to recover it. 00:36:07.609 [2024-07-14 10:44:52.272940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.609 [2024-07-14 10:44:52.272971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.609 qpair failed and we were unable to recover it. 00:36:07.609 [2024-07-14 10:44:52.273210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.609 [2024-07-14 10:44:52.273265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.609 qpair failed and we were unable to recover it. 00:36:07.609 [2024-07-14 10:44:52.273495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.609 [2024-07-14 10:44:52.273537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.609 qpair failed and we were unable to recover it. 
00:36:07.609 [2024-07-14 10:44:52.273792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.609 [2024-07-14 10:44:52.273838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.609 qpair failed and we were unable to recover it. 00:36:07.609 [2024-07-14 10:44:52.274101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.609 [2024-07-14 10:44:52.274148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.609 qpair failed and we were unable to recover it. 00:36:07.609 [2024-07-14 10:44:52.274463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.609 [2024-07-14 10:44:52.274507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.609 qpair failed and we were unable to recover it. 00:36:07.609 [2024-07-14 10:44:52.274711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-07-14 10:44:52.274749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.610 qpair failed and we were unable to recover it. 00:36:07.610 [2024-07-14 10:44:52.274995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-07-14 10:44:52.275036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.610 qpair failed and we were unable to recover it. 00:36:07.610 [2024-07-14 10:44:52.275384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-07-14 10:44:52.275422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.610 qpair failed and we were unable to recover it. 00:36:07.610 [2024-07-14 10:44:52.275586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-07-14 10:44:52.275617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.610 qpair failed and we were unable to recover it. 00:36:07.610 [2024-07-14 10:44:52.275832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-07-14 10:44:52.275863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.610 qpair failed and we were unable to recover it. 00:36:07.610 [2024-07-14 10:44:52.276146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-07-14 10:44:52.276176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.610 qpair failed and we were unable to recover it. 00:36:07.610 [2024-07-14 10:44:52.276334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-07-14 10:44:52.276366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.610 qpair failed and we were unable to recover it. 
00:36:07.610 [2024-07-14 10:44:52.276628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-07-14 10:44:52.276658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.610 qpair failed and we were unable to recover it. 00:36:07.610 [2024-07-14 10:44:52.276805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-07-14 10:44:52.276836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.610 qpair failed and we were unable to recover it. 00:36:07.610 [2024-07-14 10:44:52.277098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-07-14 10:44:52.277129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.610 qpair failed and we were unable to recover it. 00:36:07.610 [2024-07-14 10:44:52.277367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-07-14 10:44:52.277408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.610 qpair failed and we were unable to recover it. 00:36:07.610 [2024-07-14 10:44:52.277674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-07-14 10:44:52.277704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.610 qpair failed and we were unable to recover it. 00:36:07.610 [2024-07-14 10:44:52.277934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-07-14 10:44:52.277964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.610 qpair failed and we were unable to recover it. 00:36:07.610 [2024-07-14 10:44:52.278251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-07-14 10:44:52.278284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.610 qpair failed and we were unable to recover it. 00:36:07.610 [2024-07-14 10:44:52.278427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-07-14 10:44:52.278457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.610 qpair failed and we were unable to recover it. 00:36:07.610 [2024-07-14 10:44:52.278625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-07-14 10:44:52.278655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.610 qpair failed and we were unable to recover it. 00:36:07.610 [2024-07-14 10:44:52.278879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-07-14 10:44:52.278910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.610 qpair failed and we were unable to recover it. 
00:36:07.610 [2024-07-14 10:44:52.279212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-07-14 10:44:52.279262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.610 qpair failed and we were unable to recover it. 00:36:07.610 [2024-07-14 10:44:52.279431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-07-14 10:44:52.279461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.610 qpair failed and we were unable to recover it. 00:36:07.610 [2024-07-14 10:44:52.279652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-07-14 10:44:52.279682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.610 qpair failed and we were unable to recover it. 00:36:07.610 [2024-07-14 10:44:52.279958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-07-14 10:44:52.279989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.610 qpair failed and we were unable to recover it. 00:36:07.610 [2024-07-14 10:44:52.280198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-07-14 10:44:52.280243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.610 qpair failed and we were unable to recover it. 00:36:07.610 [2024-07-14 10:44:52.280440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-07-14 10:44:52.280472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.610 qpair failed and we were unable to recover it. 00:36:07.610 [2024-07-14 10:44:52.280656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-07-14 10:44:52.280687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.610 qpair failed and we were unable to recover it. 00:36:07.610 [2024-07-14 10:44:52.280963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-07-14 10:44:52.280995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.610 qpair failed and we were unable to recover it. 00:36:07.610 [2024-07-14 10:44:52.281220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-07-14 10:44:52.281287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.610 qpair failed and we were unable to recover it. 00:36:07.610 [2024-07-14 10:44:52.281453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-07-14 10:44:52.281484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.610 qpair failed and we were unable to recover it. 
00:36:07.610 [2024-07-14 10:44:52.281651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-07-14 10:44:52.281684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.610 qpair failed and we were unable to recover it. 00:36:07.610 [2024-07-14 10:44:52.281894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-07-14 10:44:52.281925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.610 qpair failed and we were unable to recover it. 00:36:07.610 [2024-07-14 10:44:52.282255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-07-14 10:44:52.282289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.610 qpair failed and we were unable to recover it. 00:36:07.610 [2024-07-14 10:44:52.282506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-07-14 10:44:52.282538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.610 qpair failed and we were unable to recover it. 00:36:07.610 [2024-07-14 10:44:52.282820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-07-14 10:44:52.282850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.610 qpair failed and we were unable to recover it. 00:36:07.610 [2024-07-14 10:44:52.283051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-07-14 10:44:52.283081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.610 qpair failed and we were unable to recover it. 00:36:07.610 [2024-07-14 10:44:52.283374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-07-14 10:44:52.283407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.610 qpair failed and we were unable to recover it. 00:36:07.610 [2024-07-14 10:44:52.283610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-07-14 10:44:52.283640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.610 qpair failed and we were unable to recover it. 00:36:07.610 [2024-07-14 10:44:52.283849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-07-14 10:44:52.283879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.610 qpair failed and we were unable to recover it. 00:36:07.610 [2024-07-14 10:44:52.284070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-07-14 10:44:52.284100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.610 qpair failed and we were unable to recover it. 
00:36:07.610 [2024-07-14 10:44:52.284370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-07-14 10:44:52.284403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.610 qpair failed and we were unable to recover it. 00:36:07.610 [2024-07-14 10:44:52.284548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-07-14 10:44:52.284577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.610 qpair failed and we were unable to recover it. 00:36:07.610 [2024-07-14 10:44:52.284704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-07-14 10:44:52.284735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.610 qpair failed and we were unable to recover it. 00:36:07.610 [2024-07-14 10:44:52.285047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-07-14 10:44:52.285078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.610 qpair failed and we were unable to recover it. 00:36:07.610 [2024-07-14 10:44:52.285305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.611 [2024-07-14 10:44:52.285337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.611 qpair failed and we were unable to recover it. 00:36:07.611 [2024-07-14 10:44:52.285491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.611 [2024-07-14 10:44:52.285521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.611 qpair failed and we were unable to recover it. 00:36:07.611 [2024-07-14 10:44:52.285729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.611 [2024-07-14 10:44:52.285759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.611 qpair failed and we were unable to recover it. 00:36:07.611 [2024-07-14 10:44:52.285993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.611 [2024-07-14 10:44:52.286023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.611 qpair failed and we were unable to recover it. 00:36:07.611 [2024-07-14 10:44:52.286247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.611 [2024-07-14 10:44:52.286280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.611 qpair failed and we were unable to recover it. 00:36:07.611 [2024-07-14 10:44:52.286437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.611 [2024-07-14 10:44:52.286467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.611 qpair failed and we were unable to recover it. 
00:36:07.611 [2024-07-14 10:44:52.286675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.611 [2024-07-14 10:44:52.286706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.611 qpair failed and we were unable to recover it. 00:36:07.611 [2024-07-14 10:44:52.286916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.611 [2024-07-14 10:44:52.286947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.611 qpair failed and we were unable to recover it. 00:36:07.611 [2024-07-14 10:44:52.287148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.611 [2024-07-14 10:44:52.287177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.611 qpair failed and we were unable to recover it. 00:36:07.611 [2024-07-14 10:44:52.287346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.611 [2024-07-14 10:44:52.287384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.611 qpair failed and we were unable to recover it. 00:36:07.611 [2024-07-14 10:44:52.287598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.611 [2024-07-14 10:44:52.287629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.611 qpair failed and we were unable to recover it. 00:36:07.611 [2024-07-14 10:44:52.287865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.611 [2024-07-14 10:44:52.287895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.611 qpair failed and we were unable to recover it. 00:36:07.611 [2024-07-14 10:44:52.288184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.611 [2024-07-14 10:44:52.288215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.611 qpair failed and we were unable to recover it. 00:36:07.611 [2024-07-14 10:44:52.288393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.611 [2024-07-14 10:44:52.288425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.611 qpair failed and we were unable to recover it. 00:36:07.611 [2024-07-14 10:44:52.288626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.611 [2024-07-14 10:44:52.288656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.611 qpair failed and we were unable to recover it. 00:36:07.611 [2024-07-14 10:44:52.288983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.611 [2024-07-14 10:44:52.289013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.611 qpair failed and we were unable to recover it. 
00:36:07.611 [2024-07-14 10:44:52.289302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.611 [2024-07-14 10:44:52.289337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.611 qpair failed and we were unable to recover it. 00:36:07.611 [2024-07-14 10:44:52.289463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.611 [2024-07-14 10:44:52.289493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.611 qpair failed and we were unable to recover it. 00:36:07.611 [2024-07-14 10:44:52.289752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.611 [2024-07-14 10:44:52.289782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.611 qpair failed and we were unable to recover it. 00:36:07.611 [2024-07-14 10:44:52.290107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.611 [2024-07-14 10:44:52.290137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.611 qpair failed and we were unable to recover it. 00:36:07.611 [2024-07-14 10:44:52.290429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.611 [2024-07-14 10:44:52.290461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.611 qpair failed and we were unable to recover it. 00:36:07.611 [2024-07-14 10:44:52.290745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.611 [2024-07-14 10:44:52.290774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.611 qpair failed and we were unable to recover it. 00:36:07.611 [2024-07-14 10:44:52.290995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.611 [2024-07-14 10:44:52.291025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.611 qpair failed and we were unable to recover it. 00:36:07.611 [2024-07-14 10:44:52.291254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.611 [2024-07-14 10:44:52.291287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.611 qpair failed and we were unable to recover it. 00:36:07.611 [2024-07-14 10:44:52.291524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.611 [2024-07-14 10:44:52.291554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.611 qpair failed and we were unable to recover it. 00:36:07.611 [2024-07-14 10:44:52.291715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.611 [2024-07-14 10:44:52.291745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.611 qpair failed and we were unable to recover it. 
00:36:07.611 [2024-07-14 10:44:52.291876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.611 [2024-07-14 10:44:52.291906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.611 qpair failed and we were unable to recover it. 00:36:07.611 [2024-07-14 10:44:52.292093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.611 [2024-07-14 10:44:52.292122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.611 qpair failed and we were unable to recover it. 00:36:07.611 [2024-07-14 10:44:52.292361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.611 [2024-07-14 10:44:52.292394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.611 qpair failed and we were unable to recover it. 00:36:07.611 [2024-07-14 10:44:52.292630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.611 [2024-07-14 10:44:52.292659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.611 qpair failed and we were unable to recover it. 00:36:07.611 [2024-07-14 10:44:52.292869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.611 [2024-07-14 10:44:52.292900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.611 qpair failed and we were unable to recover it. 00:36:07.611 [2024-07-14 10:44:52.293159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.611 [2024-07-14 10:44:52.293190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:07.611 qpair failed and we were unable to recover it. 00:36:07.611 [2024-07-14 10:44:52.293467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.611 [2024-07-14 10:44:52.293538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.611 qpair failed and we were unable to recover it. 00:36:07.611 [2024-07-14 10:44:52.293863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.611 [2024-07-14 10:44:52.293897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.611 qpair failed and we were unable to recover it. 00:36:07.611 [2024-07-14 10:44:52.294115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.611 [2024-07-14 10:44:52.294145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.611 qpair failed and we were unable to recover it. 00:36:07.611 [2024-07-14 10:44:52.294386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.611 [2024-07-14 10:44:52.294420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.611 qpair failed and we were unable to recover it. 
00:36:07.611 [2024-07-14 10:44:52.294689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.611 [2024-07-14 10:44:52.294720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.611 qpair failed and we were unable to recover it. 00:36:07.611 [2024-07-14 10:44:52.294870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.611 [2024-07-14 10:44:52.294900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.611 qpair failed and we were unable to recover it. 00:36:07.611 [2024-07-14 10:44:52.295106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.611 [2024-07-14 10:44:52.295137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.611 qpair failed and we were unable to recover it. 00:36:07.611 [2024-07-14 10:44:52.295379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.611 [2024-07-14 10:44:52.295410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.611 qpair failed and we were unable to recover it. 00:36:07.611 [2024-07-14 10:44:52.295631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.611 [2024-07-14 10:44:52.295663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.611 qpair failed and we were unable to recover it. 00:36:07.612 [2024-07-14 10:44:52.295941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-07-14 10:44:52.295971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.612 qpair failed and we were unable to recover it. 00:36:07.612 [2024-07-14 10:44:52.296271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-07-14 10:44:52.296302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.612 qpair failed and we were unable to recover it. 00:36:07.612 [2024-07-14 10:44:52.296508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-07-14 10:44:52.296539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.612 qpair failed and we were unable to recover it. 00:36:07.612 [2024-07-14 10:44:52.296712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-07-14 10:44:52.296742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.612 qpair failed and we were unable to recover it. 00:36:07.612 [2024-07-14 10:44:52.297034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-07-14 10:44:52.297064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.612 qpair failed and we were unable to recover it. 
00:36:07.612 [2024-07-14 10:44:52.297375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-07-14 10:44:52.297406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.612 qpair failed and we were unable to recover it. 00:36:07.612 [2024-07-14 10:44:52.297555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-07-14 10:44:52.297586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.612 qpair failed and we were unable to recover it. 00:36:07.612 [2024-07-14 10:44:52.297793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-07-14 10:44:52.297824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.612 qpair failed and we were unable to recover it. 00:36:07.612 [2024-07-14 10:44:52.297976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-07-14 10:44:52.298013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.612 qpair failed and we were unable to recover it. 00:36:07.612 [2024-07-14 10:44:52.298152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-07-14 10:44:52.298182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.612 qpair failed and we were unable to recover it. 00:36:07.612 [2024-07-14 10:44:52.298348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-07-14 10:44:52.298380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.612 qpair failed and we were unable to recover it. 00:36:07.612 [2024-07-14 10:44:52.298527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-07-14 10:44:52.298558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.612 qpair failed and we were unable to recover it. 00:36:07.612 [2024-07-14 10:44:52.298744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-07-14 10:44:52.298774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.612 qpair failed and we were unable to recover it. 00:36:07.612 [2024-07-14 10:44:52.299056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-07-14 10:44:52.299086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.612 qpair failed and we were unable to recover it. 00:36:07.612 [2024-07-14 10:44:52.299359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-07-14 10:44:52.299391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.612 qpair failed and we were unable to recover it. 
00:36:07.612 [2024-07-14 10:44:52.299554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-07-14 10:44:52.299584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.612 qpair failed and we were unable to recover it. 00:36:07.612 [2024-07-14 10:44:52.299795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-07-14 10:44:52.299826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.612 qpair failed and we were unable to recover it. 00:36:07.612 [2024-07-14 10:44:52.300133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-07-14 10:44:52.300163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.612 qpair failed and we were unable to recover it. 00:36:07.612 [2024-07-14 10:44:52.300387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-07-14 10:44:52.300419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.612 qpair failed and we were unable to recover it. 00:36:07.612 [2024-07-14 10:44:52.300633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-07-14 10:44:52.300664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.612 qpair failed and we were unable to recover it. 00:36:07.612 [2024-07-14 10:44:52.300886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-07-14 10:44:52.300916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.612 qpair failed and we were unable to recover it. 00:36:07.612 [2024-07-14 10:44:52.301040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-07-14 10:44:52.301070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.612 qpair failed and we were unable to recover it. 00:36:07.612 [2024-07-14 10:44:52.301284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-07-14 10:44:52.301316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.612 qpair failed and we were unable to recover it. 00:36:07.612 [2024-07-14 10:44:52.301527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-07-14 10:44:52.301556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.612 qpair failed and we were unable to recover it. 00:36:07.612 [2024-07-14 10:44:52.301749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-07-14 10:44:52.301780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.612 qpair failed and we were unable to recover it. 
00:36:07.612 [2024-07-14 10:44:52.302079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-07-14 10:44:52.302110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.612 qpair failed and we were unable to recover it. 00:36:07.612 [2024-07-14 10:44:52.302374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-07-14 10:44:52.302406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.612 qpair failed and we were unable to recover it. 00:36:07.612 [2024-07-14 10:44:52.302629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-07-14 10:44:52.302659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.612 qpair failed and we were unable to recover it. 00:36:07.612 [2024-07-14 10:44:52.302802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-07-14 10:44:52.302833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.612 qpair failed and we were unable to recover it. 00:36:07.612 [2024-07-14 10:44:52.303150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-07-14 10:44:52.303180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.612 qpair failed and we were unable to recover it. 00:36:07.612 [2024-07-14 10:44:52.303477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-07-14 10:44:52.303509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.612 qpair failed and we were unable to recover it. 00:36:07.612 [2024-07-14 10:44:52.303722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-07-14 10:44:52.303753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.612 qpair failed and we were unable to recover it. 00:36:07.612 [2024-07-14 10:44:52.303882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-07-14 10:44:52.303912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.612 qpair failed and we were unable to recover it. 00:36:07.612 [2024-07-14 10:44:52.304191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-07-14 10:44:52.304221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.612 qpair failed and we were unable to recover it. 00:36:07.612 [2024-07-14 10:44:52.304497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-07-14 10:44:52.304527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.612 qpair failed and we were unable to recover it. 
00:36:07.612 [2024-07-14 10:44:52.304741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-07-14 10:44:52.304771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.612 qpair failed and we were unable to recover it. 00:36:07.612 [2024-07-14 10:44:52.305055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-07-14 10:44:52.305085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.612 qpair failed and we were unable to recover it. 00:36:07.612 [2024-07-14 10:44:52.305294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-07-14 10:44:52.305325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.612 qpair failed and we were unable to recover it. 00:36:07.612 [2024-07-14 10:44:52.305615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-07-14 10:44:52.305646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.612 qpair failed and we were unable to recover it. 00:36:07.612 [2024-07-14 10:44:52.305797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-07-14 10:44:52.305827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.612 qpair failed and we were unable to recover it. 00:36:07.612 [2024-07-14 10:44:52.306133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-07-14 10:44:52.306163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.613 qpair failed and we were unable to recover it. 00:36:07.613 [2024-07-14 10:44:52.306391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.613 [2024-07-14 10:44:52.306424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.613 qpair failed and we were unable to recover it. 00:36:07.613 [2024-07-14 10:44:52.306572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.613 [2024-07-14 10:44:52.306603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.613 qpair failed and we were unable to recover it. 00:36:07.613 [2024-07-14 10:44:52.306880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.613 [2024-07-14 10:44:52.306910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.613 qpair failed and we were unable to recover it. 00:36:07.613 [2024-07-14 10:44:52.307142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.613 [2024-07-14 10:44:52.307172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.613 qpair failed and we were unable to recover it. 
00:36:07.613 [2024-07-14 10:44:52.307337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.613 [2024-07-14 10:44:52.307370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.613 qpair failed and we were unable to recover it. 00:36:07.613 [2024-07-14 10:44:52.307584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.613 [2024-07-14 10:44:52.307614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.613 qpair failed and we were unable to recover it. 00:36:07.613 [2024-07-14 10:44:52.307844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.613 [2024-07-14 10:44:52.307874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.613 qpair failed and we were unable to recover it. 00:36:07.613 [2024-07-14 10:44:52.308099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.613 [2024-07-14 10:44:52.308135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.613 qpair failed and we were unable to recover it. 00:36:07.613 [2024-07-14 10:44:52.308420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.613 [2024-07-14 10:44:52.308451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.613 qpair failed and we were unable to recover it. 00:36:07.613 [2024-07-14 10:44:52.308737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.613 [2024-07-14 10:44:52.308768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.613 qpair failed and we were unable to recover it. 00:36:07.613 [2024-07-14 10:44:52.308994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.613 [2024-07-14 10:44:52.309024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.613 qpair failed and we were unable to recover it. 00:36:07.613 [2024-07-14 10:44:52.309305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.613 [2024-07-14 10:44:52.309337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.613 qpair failed and we were unable to recover it. 00:36:07.613 [2024-07-14 10:44:52.309547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.613 [2024-07-14 10:44:52.309577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.613 qpair failed and we were unable to recover it. 00:36:07.613 [2024-07-14 10:44:52.309839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.613 [2024-07-14 10:44:52.309870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.613 qpair failed and we were unable to recover it. 
00:36:07.613 [2024-07-14 10:44:52.310158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.613 [2024-07-14 10:44:52.310189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.613 qpair failed and we were unable to recover it. 00:36:07.613 [2024-07-14 10:44:52.310417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.613 [2024-07-14 10:44:52.310449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.613 qpair failed and we were unable to recover it. 00:36:07.613 [2024-07-14 10:44:52.310678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.613 [2024-07-14 10:44:52.310708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.613 qpair failed and we were unable to recover it. 00:36:07.613 [2024-07-14 10:44:52.310935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.613 [2024-07-14 10:44:52.310966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.613 qpair failed and we were unable to recover it. 00:36:07.613 [2024-07-14 10:44:52.311166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.613 [2024-07-14 10:44:52.311196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.613 qpair failed and we were unable to recover it. 00:36:07.613 [2024-07-14 10:44:52.311348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.613 [2024-07-14 10:44:52.311379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.613 qpair failed and we were unable to recover it. 00:36:07.613 [2024-07-14 10:44:52.311607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.613 [2024-07-14 10:44:52.311636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.613 qpair failed and we were unable to recover it. 00:36:07.613 [2024-07-14 10:44:52.311850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.613 [2024-07-14 10:44:52.311881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.613 qpair failed and we were unable to recover it. 00:36:07.613 [2024-07-14 10:44:52.312034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.613 [2024-07-14 10:44:52.312064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.613 qpair failed and we were unable to recover it. 00:36:07.613 [2024-07-14 10:44:52.312355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.613 [2024-07-14 10:44:52.312386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.613 qpair failed and we were unable to recover it. 
00:36:07.613 [2024-07-14 10:44:52.312644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.613 [2024-07-14 10:44:52.312674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.613 qpair failed and we were unable to recover it. 00:36:07.613 [2024-07-14 10:44:52.312981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.613 [2024-07-14 10:44:52.313013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.613 qpair failed and we were unable to recover it. 00:36:07.613 [2024-07-14 10:44:52.313209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.613 [2024-07-14 10:44:52.313268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.613 qpair failed and we were unable to recover it. 00:36:07.613 [2024-07-14 10:44:52.313554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.613 [2024-07-14 10:44:52.313585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.613 qpair failed and we were unable to recover it. 00:36:07.613 [2024-07-14 10:44:52.313876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.613 [2024-07-14 10:44:52.313906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.613 qpair failed and we were unable to recover it. 00:36:07.613 [2024-07-14 10:44:52.314059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.613 [2024-07-14 10:44:52.314089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.613 qpair failed and we were unable to recover it. 00:36:07.613 [2024-07-14 10:44:52.314216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.613 [2024-07-14 10:44:52.314258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.613 qpair failed and we were unable to recover it. 00:36:07.613 [2024-07-14 10:44:52.314537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.613 [2024-07-14 10:44:52.314567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.613 qpair failed and we were unable to recover it. 00:36:07.613 [2024-07-14 10:44:52.314851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.613 [2024-07-14 10:44:52.314881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.613 qpair failed and we were unable to recover it. 00:36:07.613 [2024-07-14 10:44:52.315096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.614 [2024-07-14 10:44:52.315126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.614 qpair failed and we were unable to recover it. 
00:36:07.614 [2024-07-14 10:44:52.315320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.614 [2024-07-14 10:44:52.315352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.614 qpair failed and we were unable to recover it. 00:36:07.614 [2024-07-14 10:44:52.315564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.614 [2024-07-14 10:44:52.315594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.614 qpair failed and we were unable to recover it. 00:36:07.614 [2024-07-14 10:44:52.315731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.614 [2024-07-14 10:44:52.315762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.614 qpair failed and we were unable to recover it. 00:36:07.614 [2024-07-14 10:44:52.316061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.614 [2024-07-14 10:44:52.316091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.614 qpair failed and we were unable to recover it. 00:36:07.614 [2024-07-14 10:44:52.316386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.614 [2024-07-14 10:44:52.316418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.614 qpair failed and we were unable to recover it. 00:36:07.614 [2024-07-14 10:44:52.316628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.614 [2024-07-14 10:44:52.316659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.614 qpair failed and we were unable to recover it. 00:36:07.614 [2024-07-14 10:44:52.316946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.614 [2024-07-14 10:44:52.316977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.614 qpair failed and we were unable to recover it. 00:36:07.614 [2024-07-14 10:44:52.317179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.614 [2024-07-14 10:44:52.317210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.614 qpair failed and we were unable to recover it. 00:36:07.614 [2024-07-14 10:44:52.317501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.614 [2024-07-14 10:44:52.317531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.614 qpair failed and we were unable to recover it. 00:36:07.614 [2024-07-14 10:44:52.317718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.614 [2024-07-14 10:44:52.317748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.614 qpair failed and we were unable to recover it. 
00:36:07.614 [2024-07-14 10:44:52.318045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.614 [2024-07-14 10:44:52.318075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.614 qpair failed and we were unable to recover it. 00:36:07.614 [2024-07-14 10:44:52.318364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.614 [2024-07-14 10:44:52.318395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.614 qpair failed and we were unable to recover it. 00:36:07.614 [2024-07-14 10:44:52.318700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.614 [2024-07-14 10:44:52.318731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.614 qpair failed and we were unable to recover it. 00:36:07.614 [2024-07-14 10:44:52.319046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.614 [2024-07-14 10:44:52.319082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.614 qpair failed and we were unable to recover it. 00:36:07.614 [2024-07-14 10:44:52.319232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.614 [2024-07-14 10:44:52.319263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.614 qpair failed and we were unable to recover it. 00:36:07.614 [2024-07-14 10:44:52.319426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.614 [2024-07-14 10:44:52.319457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.614 qpair failed and we were unable to recover it. 00:36:07.614 [2024-07-14 10:44:52.319737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.614 [2024-07-14 10:44:52.319767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.614 qpair failed and we were unable to recover it. 00:36:07.614 [2024-07-14 10:44:52.320071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.614 [2024-07-14 10:44:52.320101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.614 qpair failed and we were unable to recover it. 00:36:07.614 [2024-07-14 10:44:52.320329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.614 [2024-07-14 10:44:52.320360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.614 qpair failed and we were unable to recover it. 00:36:07.614 [2024-07-14 10:44:52.320574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.614 [2024-07-14 10:44:52.320605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.614 qpair failed and we were unable to recover it. 
00:36:07.614 [2024-07-14 10:44:52.320823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.614 [2024-07-14 10:44:52.320853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.614 qpair failed and we were unable to recover it. 00:36:07.614 [2024-07-14 10:44:52.321120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.614 [2024-07-14 10:44:52.321151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.614 qpair failed and we were unable to recover it. 00:36:07.614 [2024-07-14 10:44:52.321461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.614 [2024-07-14 10:44:52.321493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.614 qpair failed and we were unable to recover it. 00:36:07.614 [2024-07-14 10:44:52.321656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.614 [2024-07-14 10:44:52.321687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.614 qpair failed and we were unable to recover it. 00:36:07.614 [2024-07-14 10:44:52.321927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.614 [2024-07-14 10:44:52.321957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.614 qpair failed and we were unable to recover it. 00:36:07.614 [2024-07-14 10:44:52.322215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.614 [2024-07-14 10:44:52.322254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.614 qpair failed and we were unable to recover it. 00:36:07.614 [2024-07-14 10:44:52.322465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.614 [2024-07-14 10:44:52.322495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.614 qpair failed and we were unable to recover it. 00:36:07.614 [2024-07-14 10:44:52.322714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.614 [2024-07-14 10:44:52.322745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.614 qpair failed and we were unable to recover it. 00:36:07.614 [2024-07-14 10:44:52.323026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.614 [2024-07-14 10:44:52.323057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.614 qpair failed and we were unable to recover it. 00:36:07.614 [2024-07-14 10:44:52.323207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.614 [2024-07-14 10:44:52.323246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.614 qpair failed and we were unable to recover it. 
00:36:07.614 [2024-07-14 10:44:52.323387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.614 [2024-07-14 10:44:52.323418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.614 qpair failed and we were unable to recover it. 00:36:07.614 [2024-07-14 10:44:52.323610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.614 [2024-07-14 10:44:52.323639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.614 qpair failed and we were unable to recover it. 00:36:07.614 [2024-07-14 10:44:52.323908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.614 [2024-07-14 10:44:52.323939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.614 qpair failed and we were unable to recover it. 00:36:07.614 [2024-07-14 10:44:52.324142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.614 [2024-07-14 10:44:52.324172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.614 qpair failed and we were unable to recover it. 00:36:07.614 [2024-07-14 10:44:52.324370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.614 [2024-07-14 10:44:52.324401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.614 qpair failed and we were unable to recover it. 00:36:07.614 [2024-07-14 10:44:52.324667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.614 [2024-07-14 10:44:52.324698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.614 qpair failed and we were unable to recover it. 00:36:07.614 [2024-07-14 10:44:52.324994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.614 [2024-07-14 10:44:52.325024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.614 qpair failed and we were unable to recover it. 00:36:07.614 [2024-07-14 10:44:52.325319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.614 [2024-07-14 10:44:52.325351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.614 qpair failed and we were unable to recover it. 00:36:07.614 [2024-07-14 10:44:52.325560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.614 [2024-07-14 10:44:52.325590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.614 qpair failed and we were unable to recover it. 00:36:07.614 [2024-07-14 10:44:52.325730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.614 [2024-07-14 10:44:52.325761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.614 qpair failed and we were unable to recover it. 
00:36:07.614 [2024-07-14 10:44:52.325996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.615 [2024-07-14 10:44:52.326026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.615 qpair failed and we were unable to recover it. 00:36:07.615 [2024-07-14 10:44:52.326314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.615 [2024-07-14 10:44:52.326345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.615 qpair failed and we were unable to recover it. 00:36:07.615 [2024-07-14 10:44:52.326607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.615 [2024-07-14 10:44:52.326637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.615 qpair failed and we were unable to recover it. 00:36:07.615 [2024-07-14 10:44:52.326779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.615 [2024-07-14 10:44:52.326810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.615 qpair failed and we were unable to recover it. 00:36:07.615 [2024-07-14 10:44:52.327097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.615 [2024-07-14 10:44:52.327128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.615 qpair failed and we were unable to recover it. 00:36:07.615 [2024-07-14 10:44:52.327384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.615 [2024-07-14 10:44:52.327415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.615 qpair failed and we were unable to recover it. 00:36:07.615 [2024-07-14 10:44:52.327543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.615 [2024-07-14 10:44:52.327574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.615 qpair failed and we were unable to recover it. 00:36:07.615 [2024-07-14 10:44:52.327809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.615 [2024-07-14 10:44:52.327840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.615 qpair failed and we were unable to recover it. 00:36:07.615 [2024-07-14 10:44:52.328051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.615 [2024-07-14 10:44:52.328082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.615 qpair failed and we were unable to recover it. 00:36:07.615 [2024-07-14 10:44:52.328344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.615 [2024-07-14 10:44:52.328375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.615 qpair failed and we were unable to recover it. 
00:36:07.615 [2024-07-14 10:44:52.328583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.615 [2024-07-14 10:44:52.328613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.615 qpair failed and we were unable to recover it. 00:36:07.615 [2024-07-14 10:44:52.328880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.615 [2024-07-14 10:44:52.328910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.615 qpair failed and we were unable to recover it. 00:36:07.615 [2024-07-14 10:44:52.329055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.615 [2024-07-14 10:44:52.329086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.615 qpair failed and we were unable to recover it. 00:36:07.615 [2024-07-14 10:44:52.329296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.615 [2024-07-14 10:44:52.329334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.615 qpair failed and we were unable to recover it. 00:36:07.615 [2024-07-14 10:44:52.329616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.615 [2024-07-14 10:44:52.329646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.615 qpair failed and we were unable to recover it. 00:36:07.615 [2024-07-14 10:44:52.329790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.615 [2024-07-14 10:44:52.329820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.615 qpair failed and we were unable to recover it. 00:36:07.615 [2024-07-14 10:44:52.330110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.615 [2024-07-14 10:44:52.330141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.615 qpair failed and we were unable to recover it. 00:36:07.615 [2024-07-14 10:44:52.330289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.615 [2024-07-14 10:44:52.330321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.615 qpair failed and we were unable to recover it. 00:36:07.615 [2024-07-14 10:44:52.330604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.615 [2024-07-14 10:44:52.330634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.615 qpair failed and we were unable to recover it. 00:36:07.615 [2024-07-14 10:44:52.330911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.615 [2024-07-14 10:44:52.330941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.615 qpair failed and we were unable to recover it. 
00:36:07.615 [2024-07-14 10:44:52.331249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.615 [2024-07-14 10:44:52.331280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.615 qpair failed and we were unable to recover it. 00:36:07.615 [2024-07-14 10:44:52.331489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.615 [2024-07-14 10:44:52.331521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.615 qpair failed and we were unable to recover it. 00:36:07.615 [2024-07-14 10:44:52.331800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.615 [2024-07-14 10:44:52.331831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.615 qpair failed and we were unable to recover it. 00:36:07.615 [2024-07-14 10:44:52.332073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.615 [2024-07-14 10:44:52.332104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.615 qpair failed and we were unable to recover it. 00:36:07.615 [2024-07-14 10:44:52.332390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.615 [2024-07-14 10:44:52.332421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.615 qpair failed and we were unable to recover it. 00:36:07.615 [2024-07-14 10:44:52.332610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.615 [2024-07-14 10:44:52.332640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.615 qpair failed and we were unable to recover it. 00:36:07.615 [2024-07-14 10:44:52.332833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.615 [2024-07-14 10:44:52.332863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.615 qpair failed and we were unable to recover it. 00:36:07.615 [2024-07-14 10:44:52.333083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.615 [2024-07-14 10:44:52.333113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.615 qpair failed and we were unable to recover it. 00:36:07.615 [2024-07-14 10:44:52.333359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.615 [2024-07-14 10:44:52.333390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.615 qpair failed and we were unable to recover it. 00:36:07.615 [2024-07-14 10:44:52.333599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.615 [2024-07-14 10:44:52.333630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.615 qpair failed and we were unable to recover it. 
00:36:07.615 [2024-07-14 10:44:52.333844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.615 [2024-07-14 10:44:52.333875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.615 qpair failed and we were unable to recover it. 00:36:07.615 [2024-07-14 10:44:52.334071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.615 [2024-07-14 10:44:52.334101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.615 qpair failed and we were unable to recover it. 00:36:07.615 [2024-07-14 10:44:52.334389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.615 [2024-07-14 10:44:52.334419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.615 qpair failed and we were unable to recover it. 00:36:07.615 [2024-07-14 10:44:52.334591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.615 [2024-07-14 10:44:52.334622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.615 qpair failed and we were unable to recover it. 00:36:07.615 [2024-07-14 10:44:52.334832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.615 [2024-07-14 10:44:52.334861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.615 qpair failed and we were unable to recover it. 00:36:07.615 [2024-07-14 10:44:52.335123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.615 [2024-07-14 10:44:52.335153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.615 qpair failed and we were unable to recover it. 00:36:07.615 [2024-07-14 10:44:52.335375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.615 [2024-07-14 10:44:52.335407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.615 qpair failed and we were unable to recover it. 00:36:07.615 [2024-07-14 10:44:52.335690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.615 [2024-07-14 10:44:52.335721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.615 qpair failed and we were unable to recover it. 00:36:07.615 [2024-07-14 10:44:52.335946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.615 [2024-07-14 10:44:52.335976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.615 qpair failed and we were unable to recover it. 00:36:07.615 [2024-07-14 10:44:52.336186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.615 [2024-07-14 10:44:52.336217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.615 qpair failed and we were unable to recover it. 
00:36:07.615 [2024-07-14 10:44:52.336387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.615 [2024-07-14 10:44:52.336419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.615 qpair failed and we were unable to recover it. 00:36:07.615 [2024-07-14 10:44:52.336645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.616 [2024-07-14 10:44:52.336675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.616 qpair failed and we were unable to recover it. 00:36:07.616 [2024-07-14 10:44:52.336877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.616 [2024-07-14 10:44:52.336907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.616 qpair failed and we were unable to recover it. 00:36:07.616 [2024-07-14 10:44:52.337116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.616 [2024-07-14 10:44:52.337146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.616 qpair failed and we were unable to recover it. 00:36:07.616 [2024-07-14 10:44:52.337368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.616 [2024-07-14 10:44:52.337399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.616 qpair failed and we were unable to recover it. 00:36:07.616 [2024-07-14 10:44:52.337627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.616 [2024-07-14 10:44:52.337658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.616 qpair failed and we were unable to recover it. 00:36:07.616 [2024-07-14 10:44:52.337863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.616 [2024-07-14 10:44:52.337893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.616 qpair failed and we were unable to recover it. 00:36:07.616 [2024-07-14 10:44:52.338176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.616 [2024-07-14 10:44:52.338208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.616 qpair failed and we were unable to recover it. 00:36:07.616 [2024-07-14 10:44:52.338437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.616 [2024-07-14 10:44:52.338467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.616 qpair failed and we were unable to recover it. 00:36:07.616 [2024-07-14 10:44:52.338729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.616 [2024-07-14 10:44:52.338759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.616 qpair failed and we were unable to recover it. 
00:36:07.616 [2024-07-14 10:44:52.338936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.616 [2024-07-14 10:44:52.338967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.616 qpair failed and we were unable to recover it. 00:36:07.616 [2024-07-14 10:44:52.339235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.616 [2024-07-14 10:44:52.339266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.616 qpair failed and we were unable to recover it. 00:36:07.616 [2024-07-14 10:44:52.339454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.616 [2024-07-14 10:44:52.339484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.616 qpair failed and we were unable to recover it. 00:36:07.616 [2024-07-14 10:44:52.339681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.616 [2024-07-14 10:44:52.339716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.616 qpair failed and we were unable to recover it. 00:36:07.616 [2024-07-14 10:44:52.339925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.616 [2024-07-14 10:44:52.339956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.616 qpair failed and we were unable to recover it. 00:36:07.616 [2024-07-14 10:44:52.340178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.616 [2024-07-14 10:44:52.340209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.616 qpair failed and we were unable to recover it. 00:36:07.616 [2024-07-14 10:44:52.340416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.616 [2024-07-14 10:44:52.340447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.616 qpair failed and we were unable to recover it. 00:36:07.616 [2024-07-14 10:44:52.340655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.616 [2024-07-14 10:44:52.340685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.616 qpair failed and we were unable to recover it. 00:36:07.616 [2024-07-14 10:44:52.340962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.616 [2024-07-14 10:44:52.340992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.616 qpair failed and we were unable to recover it. 00:36:07.616 [2024-07-14 10:44:52.341254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.616 [2024-07-14 10:44:52.341285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.616 qpair failed and we were unable to recover it. 
00:36:07.616 [2024-07-14 10:44:52.341495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.616 [2024-07-14 10:44:52.341526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.616 qpair failed and we were unable to recover it. 00:36:07.616 [2024-07-14 10:44:52.341723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.616 [2024-07-14 10:44:52.341754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.616 qpair failed and we were unable to recover it. 00:36:07.616 [2024-07-14 10:44:52.341969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.616 [2024-07-14 10:44:52.342000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.616 qpair failed and we were unable to recover it. 00:36:07.616 [2024-07-14 10:44:52.342296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.616 [2024-07-14 10:44:52.342327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.616 qpair failed and we were unable to recover it. 00:36:07.616 [2024-07-14 10:44:52.342518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.616 [2024-07-14 10:44:52.342547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.616 qpair failed and we were unable to recover it. 00:36:07.616 [2024-07-14 10:44:52.342754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.616 [2024-07-14 10:44:52.342784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.616 qpair failed and we were unable to recover it. 00:36:07.616 [2024-07-14 10:44:52.342938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.616 [2024-07-14 10:44:52.342968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.616 qpair failed and we were unable to recover it. 00:36:07.616 [2024-07-14 10:44:52.343249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.616 [2024-07-14 10:44:52.343280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.616 qpair failed and we were unable to recover it. 00:36:07.616 [2024-07-14 10:44:52.343541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.616 [2024-07-14 10:44:52.343571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.616 qpair failed and we were unable to recover it. 00:36:07.616 [2024-07-14 10:44:52.343712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.616 [2024-07-14 10:44:52.343743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.616 qpair failed and we were unable to recover it. 
00:36:07.616 [2024-07-14 10:44:52.343936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.616 [2024-07-14 10:44:52.343967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.616 qpair failed and we were unable to recover it. 00:36:07.616 [2024-07-14 10:44:52.344181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.616 [2024-07-14 10:44:52.344212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.616 qpair failed and we were unable to recover it. 00:36:07.616 [2024-07-14 10:44:52.344473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.616 [2024-07-14 10:44:52.344504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.616 qpair failed and we were unable to recover it. 00:36:07.616 [2024-07-14 10:44:52.344649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.616 [2024-07-14 10:44:52.344679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.616 qpair failed and we were unable to recover it. 00:36:07.616 [2024-07-14 10:44:52.344954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.616 [2024-07-14 10:44:52.344984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.616 qpair failed and we were unable to recover it. 00:36:07.616 [2024-07-14 10:44:52.345258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.616 [2024-07-14 10:44:52.345290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.616 qpair failed and we were unable to recover it. 00:36:07.616 [2024-07-14 10:44:52.345545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.616 [2024-07-14 10:44:52.345575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.616 qpair failed and we were unable to recover it. 00:36:07.616 [2024-07-14 10:44:52.345852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.616 [2024-07-14 10:44:52.345882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.616 qpair failed and we were unable to recover it. 00:36:07.616 [2024-07-14 10:44:52.346189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.616 [2024-07-14 10:44:52.346219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.616 qpair failed and we were unable to recover it. 00:36:07.616 [2024-07-14 10:44:52.346492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.616 [2024-07-14 10:44:52.346522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.616 qpair failed and we were unable to recover it. 
00:36:07.616 [2024-07-14 10:44:52.346674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.616 [2024-07-14 10:44:52.346705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.616 qpair failed and we were unable to recover it. 00:36:07.616 [2024-07-14 10:44:52.347026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.616 [2024-07-14 10:44:52.347057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.616 qpair failed and we were unable to recover it. 00:36:07.617 [2024-07-14 10:44:52.347283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.617 [2024-07-14 10:44:52.347314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.617 qpair failed and we were unable to recover it. 00:36:07.617 [2024-07-14 10:44:52.347543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.617 [2024-07-14 10:44:52.347573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.617 qpair failed and we were unable to recover it. 00:36:07.617 [2024-07-14 10:44:52.347723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.617 [2024-07-14 10:44:52.347754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.617 qpair failed and we were unable to recover it. 00:36:07.617 [2024-07-14 10:44:52.347967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.617 [2024-07-14 10:44:52.347997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.617 qpair failed and we were unable to recover it. 00:36:07.617 [2024-07-14 10:44:52.348200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.617 [2024-07-14 10:44:52.348237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.617 qpair failed and we were unable to recover it. 00:36:07.617 [2024-07-14 10:44:52.348447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.617 [2024-07-14 10:44:52.348478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.617 qpair failed and we were unable to recover it. 00:36:07.617 [2024-07-14 10:44:52.348635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.617 [2024-07-14 10:44:52.348665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.617 qpair failed and we were unable to recover it. 00:36:07.617 [2024-07-14 10:44:52.348945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.617 [2024-07-14 10:44:52.348974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.617 qpair failed and we were unable to recover it. 
00:36:07.617 [2024-07-14 10:44:52.349181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.617 [2024-07-14 10:44:52.349212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.617 qpair failed and we were unable to recover it. 00:36:07.617 [2024-07-14 10:44:52.349449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.617 [2024-07-14 10:44:52.349481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.617 qpair failed and we were unable to recover it. 00:36:07.617 [2024-07-14 10:44:52.349737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.617 [2024-07-14 10:44:52.349767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.617 qpair failed and we were unable to recover it. 00:36:07.617 [2024-07-14 10:44:52.349915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.617 [2024-07-14 10:44:52.349950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.617 qpair failed and we were unable to recover it. 00:36:07.617 [2024-07-14 10:44:52.350170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.617 [2024-07-14 10:44:52.350200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.617 qpair failed and we were unable to recover it. 00:36:07.617 [2024-07-14 10:44:52.350430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.617 [2024-07-14 10:44:52.350460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.617 qpair failed and we were unable to recover it. 00:36:07.617 [2024-07-14 10:44:52.350673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.617 [2024-07-14 10:44:52.350703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.617 qpair failed and we were unable to recover it. 00:36:07.617 [2024-07-14 10:44:52.350963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.617 [2024-07-14 10:44:52.350993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.617 qpair failed and we were unable to recover it. 00:36:07.617 [2024-07-14 10:44:52.351185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.617 [2024-07-14 10:44:52.351216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.617 qpair failed and we were unable to recover it. 00:36:07.617 [2024-07-14 10:44:52.351523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.617 [2024-07-14 10:44:52.351554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.617 qpair failed and we were unable to recover it. 
00:36:07.617 [2024-07-14 10:44:52.351824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.617 [2024-07-14 10:44:52.351854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.617 qpair failed and we were unable to recover it. 00:36:07.617 [2024-07-14 10:44:52.352078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.617 [2024-07-14 10:44:52.352108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.617 qpair failed and we were unable to recover it. 00:36:07.617 [2024-07-14 10:44:52.352316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.617 [2024-07-14 10:44:52.352348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.617 qpair failed and we were unable to recover it. 00:36:07.617 [2024-07-14 10:44:52.352559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.617 [2024-07-14 10:44:52.352590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.617 qpair failed and we were unable to recover it. 00:36:07.617 [2024-07-14 10:44:52.352882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.617 [2024-07-14 10:44:52.352912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.617 qpair failed and we were unable to recover it. 00:36:07.617 [2024-07-14 10:44:52.353184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.617 [2024-07-14 10:44:52.353214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.617 qpair failed and we were unable to recover it. 00:36:07.617 [2024-07-14 10:44:52.353451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.617 [2024-07-14 10:44:52.353482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.617 qpair failed and we were unable to recover it. 00:36:07.617 [2024-07-14 10:44:52.353751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.617 [2024-07-14 10:44:52.353783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.617 qpair failed and we were unable to recover it. 00:36:07.617 [2024-07-14 10:44:52.354008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.617 [2024-07-14 10:44:52.354038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.617 qpair failed and we were unable to recover it. 00:36:07.617 [2024-07-14 10:44:52.354317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.617 [2024-07-14 10:44:52.354349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.617 qpair failed and we were unable to recover it. 
00:36:07.617 [2024-07-14 10:44:52.354498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.617 [2024-07-14 10:44:52.354528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.617 qpair failed and we were unable to recover it. 00:36:07.617 [2024-07-14 10:44:52.354669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.617 [2024-07-14 10:44:52.354700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.617 qpair failed and we were unable to recover it. 00:36:07.617 [2024-07-14 10:44:52.354975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.617 [2024-07-14 10:44:52.355005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.617 qpair failed and we were unable to recover it. 00:36:07.617 [2024-07-14 10:44:52.355214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.617 [2024-07-14 10:44:52.355253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.617 qpair failed and we were unable to recover it. 00:36:07.617 [2024-07-14 10:44:52.355444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.617 [2024-07-14 10:44:52.355474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.617 qpair failed and we were unable to recover it. 00:36:07.617 [2024-07-14 10:44:52.355685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.617 [2024-07-14 10:44:52.355716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.617 qpair failed and we were unable to recover it. 00:36:07.617 [2024-07-14 10:44:52.355855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.617 [2024-07-14 10:44:52.355885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.617 qpair failed and we were unable to recover it. 00:36:07.617 [2024-07-14 10:44:52.356072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.617 [2024-07-14 10:44:52.356102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 00:36:07.618 [2024-07-14 10:44:52.356370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-07-14 10:44:52.356403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 00:36:07.618 [2024-07-14 10:44:52.356566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-07-14 10:44:52.356597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 
00:36:07.618 [2024-07-14 10:44:52.356850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-07-14 10:44:52.356881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 00:36:07.618 [2024-07-14 10:44:52.357083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-07-14 10:44:52.357113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 00:36:07.618 [2024-07-14 10:44:52.357367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-07-14 10:44:52.357398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 00:36:07.618 [2024-07-14 10:44:52.357550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-07-14 10:44:52.357580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 00:36:07.618 [2024-07-14 10:44:52.357927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-07-14 10:44:52.357958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 00:36:07.618 [2024-07-14 10:44:52.358244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-07-14 10:44:52.358275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 00:36:07.618 [2024-07-14 10:44:52.358467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-07-14 10:44:52.358498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 00:36:07.618 [2024-07-14 10:44:52.358692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-07-14 10:44:52.358722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 00:36:07.618 [2024-07-14 10:44:52.358976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-07-14 10:44:52.359006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 00:36:07.618 [2024-07-14 10:44:52.359316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-07-14 10:44:52.359348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 
00:36:07.618 [2024-07-14 10:44:52.359600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-07-14 10:44:52.359630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 00:36:07.618 [2024-07-14 10:44:52.359795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-07-14 10:44:52.359826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 00:36:07.618 [2024-07-14 10:44:52.360135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-07-14 10:44:52.360165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 00:36:07.618 [2024-07-14 10:44:52.360384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-07-14 10:44:52.360422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 00:36:07.618 [2024-07-14 10:44:52.360694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-07-14 10:44:52.360724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 00:36:07.618 [2024-07-14 10:44:52.361019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-07-14 10:44:52.361049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 00:36:07.618 [2024-07-14 10:44:52.361334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-07-14 10:44:52.361367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 00:36:07.618 [2024-07-14 10:44:52.361576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-07-14 10:44:52.361606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 00:36:07.618 [2024-07-14 10:44:52.361894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-07-14 10:44:52.361924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 00:36:07.618 [2024-07-14 10:44:52.362155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-07-14 10:44:52.362185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 
00:36:07.618 [2024-07-14 10:44:52.362340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-07-14 10:44:52.362371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 00:36:07.618 [2024-07-14 10:44:52.362630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-07-14 10:44:52.362660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 00:36:07.618 [2024-07-14 10:44:52.362811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-07-14 10:44:52.362840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 00:36:07.618 [2024-07-14 10:44:52.363030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-07-14 10:44:52.363059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 00:36:07.618 [2024-07-14 10:44:52.363328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-07-14 10:44:52.363360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 00:36:07.618 [2024-07-14 10:44:52.363618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-07-14 10:44:52.363648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 00:36:07.618 [2024-07-14 10:44:52.363960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-07-14 10:44:52.363990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 00:36:07.618 [2024-07-14 10:44:52.364268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-07-14 10:44:52.364299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 00:36:07.618 [2024-07-14 10:44:52.364455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-07-14 10:44:52.364485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 00:36:07.618 [2024-07-14 10:44:52.364745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-07-14 10:44:52.364775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 
00:36:07.618 [2024-07-14 10:44:52.365069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-07-14 10:44:52.365099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 00:36:07.618 [2024-07-14 10:44:52.365337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-07-14 10:44:52.365368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 00:36:07.618 [2024-07-14 10:44:52.365589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-07-14 10:44:52.365619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 00:36:07.618 [2024-07-14 10:44:52.365819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-07-14 10:44:52.365849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 00:36:07.618 [2024-07-14 10:44:52.366056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-07-14 10:44:52.366086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 00:36:07.618 [2024-07-14 10:44:52.366326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-07-14 10:44:52.366358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 00:36:07.618 [2024-07-14 10:44:52.366499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-07-14 10:44:52.366530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 00:36:07.618 [2024-07-14 10:44:52.366744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-07-14 10:44:52.366774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 00:36:07.618 [2024-07-14 10:44:52.367092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.619 [2024-07-14 10:44:52.367122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.619 qpair failed and we were unable to recover it. 00:36:07.619 [2024-07-14 10:44:52.367386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.619 [2024-07-14 10:44:52.367418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.619 qpair failed and we were unable to recover it. 
00:36:07.624 [2024-07-14 10:44:52.420461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-07-14 10:44:52.420498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 00:36:07.624 [2024-07-14 10:44:52.420781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-07-14 10:44:52.420811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 00:36:07.624 [2024-07-14 10:44:52.421123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-07-14 10:44:52.421153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 00:36:07.624 [2024-07-14 10:44:52.421435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-07-14 10:44:52.421467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 00:36:07.624 [2024-07-14 10:44:52.421728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-07-14 10:44:52.421758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 00:36:07.624 [2024-07-14 10:44:52.421989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-07-14 10:44:52.422019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 00:36:07.624 [2024-07-14 10:44:52.422274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-07-14 10:44:52.422305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 00:36:07.624 [2024-07-14 10:44:52.422616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-07-14 10:44:52.422646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 00:36:07.624 [2024-07-14 10:44:52.422800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-07-14 10:44:52.422831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 00:36:07.624 [2024-07-14 10:44:52.423157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-07-14 10:44:52.423186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 
00:36:07.624 [2024-07-14 10:44:52.423415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-07-14 10:44:52.423447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 00:36:07.624 [2024-07-14 10:44:52.423716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-07-14 10:44:52.423748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 00:36:07.624 [2024-07-14 10:44:52.423987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-07-14 10:44:52.424017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 00:36:07.624 [2024-07-14 10:44:52.424220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-07-14 10:44:52.424275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 00:36:07.624 [2024-07-14 10:44:52.424541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-07-14 10:44:52.424572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 00:36:07.624 [2024-07-14 10:44:52.424787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-07-14 10:44:52.424817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 00:36:07.624 [2024-07-14 10:44:52.425083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-07-14 10:44:52.425114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 00:36:07.624 [2024-07-14 10:44:52.425318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-07-14 10:44:52.425349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 00:36:07.624 [2024-07-14 10:44:52.425561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-07-14 10:44:52.425591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 00:36:07.624 [2024-07-14 10:44:52.425901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-07-14 10:44:52.425932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 
00:36:07.624 [2024-07-14 10:44:52.426205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-07-14 10:44:52.426243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 00:36:07.624 [2024-07-14 10:44:52.426446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-07-14 10:44:52.426476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 00:36:07.624 [2024-07-14 10:44:52.426740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-07-14 10:44:52.426769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 00:36:07.624 [2024-07-14 10:44:52.426976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-07-14 10:44:52.427006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 00:36:07.624 [2024-07-14 10:44:52.427278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-07-14 10:44:52.427309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 00:36:07.624 [2024-07-14 10:44:52.427591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-07-14 10:44:52.427621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 00:36:07.624 [2024-07-14 10:44:52.427783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-07-14 10:44:52.427813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 00:36:07.624 [2024-07-14 10:44:52.428110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-07-14 10:44:52.428140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 00:36:07.624 [2024-07-14 10:44:52.428369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-07-14 10:44:52.428400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 00:36:07.624 [2024-07-14 10:44:52.428545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-07-14 10:44:52.428575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 
00:36:07.624 [2024-07-14 10:44:52.428845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-07-14 10:44:52.428875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 00:36:07.624 [2024-07-14 10:44:52.429142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-07-14 10:44:52.429171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 00:36:07.624 [2024-07-14 10:44:52.429316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-07-14 10:44:52.429348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 00:36:07.624 [2024-07-14 10:44:52.429610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-07-14 10:44:52.429641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 00:36:07.624 [2024-07-14 10:44:52.429884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-07-14 10:44:52.429914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 00:36:07.624 [2024-07-14 10:44:52.430108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-07-14 10:44:52.430139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 00:36:07.624 [2024-07-14 10:44:52.430424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-07-14 10:44:52.430456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 00:36:07.624 [2024-07-14 10:44:52.430687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-07-14 10:44:52.430717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 00:36:07.624 [2024-07-14 10:44:52.430837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-07-14 10:44:52.430868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 00:36:07.625 [2024-07-14 10:44:52.431152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-07-14 10:44:52.431182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 
00:36:07.625 [2024-07-14 10:44:52.431348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-07-14 10:44:52.431384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 00:36:07.625 [2024-07-14 10:44:52.431606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-07-14 10:44:52.431636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 00:36:07.625 [2024-07-14 10:44:52.431847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-07-14 10:44:52.431877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 00:36:07.625 [2024-07-14 10:44:52.432131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-07-14 10:44:52.432162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 00:36:07.625 [2024-07-14 10:44:52.432352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-07-14 10:44:52.432383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 00:36:07.625 [2024-07-14 10:44:52.432606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-07-14 10:44:52.432636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 00:36:07.625 [2024-07-14 10:44:52.432900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-07-14 10:44:52.432930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 00:36:07.625 [2024-07-14 10:44:52.433262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-07-14 10:44:52.433293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 00:36:07.625 [2024-07-14 10:44:52.433550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-07-14 10:44:52.433580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 00:36:07.625 [2024-07-14 10:44:52.433774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-07-14 10:44:52.433804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 
00:36:07.625 [2024-07-14 10:44:52.434112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-07-14 10:44:52.434142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 00:36:07.625 [2024-07-14 10:44:52.434420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-07-14 10:44:52.434452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 00:36:07.625 [2024-07-14 10:44:52.434650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-07-14 10:44:52.434680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 00:36:07.625 [2024-07-14 10:44:52.434867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-07-14 10:44:52.434897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 00:36:07.625 [2024-07-14 10:44:52.435179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-07-14 10:44:52.435209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 00:36:07.625 [2024-07-14 10:44:52.435507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-07-14 10:44:52.435538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 00:36:07.625 [2024-07-14 10:44:52.435740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-07-14 10:44:52.435770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 00:36:07.625 [2024-07-14 10:44:52.436014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-07-14 10:44:52.436044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 00:36:07.625 [2024-07-14 10:44:52.436311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-07-14 10:44:52.436342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 00:36:07.625 [2024-07-14 10:44:52.436552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-07-14 10:44:52.436582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 
00:36:07.625 [2024-07-14 10:44:52.436781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-07-14 10:44:52.436811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 00:36:07.625 [2024-07-14 10:44:52.437114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-07-14 10:44:52.437143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 00:36:07.625 [2024-07-14 10:44:52.437425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-07-14 10:44:52.437456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 00:36:07.625 [2024-07-14 10:44:52.437617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-07-14 10:44:52.437648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 00:36:07.625 [2024-07-14 10:44:52.437913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-07-14 10:44:52.437943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 00:36:07.625 [2024-07-14 10:44:52.438086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-07-14 10:44:52.438116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 00:36:07.625 [2024-07-14 10:44:52.438254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-07-14 10:44:52.438285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 00:36:07.625 [2024-07-14 10:44:52.438483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-07-14 10:44:52.438513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 00:36:07.625 [2024-07-14 10:44:52.438709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-07-14 10:44:52.438739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 00:36:07.625 [2024-07-14 10:44:52.439042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-07-14 10:44:52.439072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 
00:36:07.625 [2024-07-14 10:44:52.439305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-07-14 10:44:52.439335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 00:36:07.625 [2024-07-14 10:44:52.439538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-07-14 10:44:52.439568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 00:36:07.625 [2024-07-14 10:44:52.439776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-07-14 10:44:52.439806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 00:36:07.625 [2024-07-14 10:44:52.440081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-07-14 10:44:52.440111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 00:36:07.625 [2024-07-14 10:44:52.440376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-07-14 10:44:52.440408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 00:36:07.625 [2024-07-14 10:44:52.440629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-07-14 10:44:52.440659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 00:36:07.625 [2024-07-14 10:44:52.440942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-07-14 10:44:52.440972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 00:36:07.625 [2024-07-14 10:44:52.441238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-07-14 10:44:52.441269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 00:36:07.625 [2024-07-14 10:44:52.441552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-07-14 10:44:52.441581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 00:36:07.626 [2024-07-14 10:44:52.441802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.626 [2024-07-14 10:44:52.441832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.626 qpair failed and we were unable to recover it. 
00:36:07.626 [2024-07-14 10:44:52.442040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.626 [2024-07-14 10:44:52.442074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.626 qpair failed and we were unable to recover it. 00:36:07.626 [2024-07-14 10:44:52.442280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.626 [2024-07-14 10:44:52.442312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.626 qpair failed and we were unable to recover it. 00:36:07.626 [2024-07-14 10:44:52.442602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.626 [2024-07-14 10:44:52.442633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.626 qpair failed and we were unable to recover it. 00:36:07.626 [2024-07-14 10:44:52.442850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.626 [2024-07-14 10:44:52.442880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.626 qpair failed and we were unable to recover it. 00:36:07.626 [2024-07-14 10:44:52.443069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.626 [2024-07-14 10:44:52.443100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.626 qpair failed and we were unable to recover it. 00:36:07.626 [2024-07-14 10:44:52.443296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.626 [2024-07-14 10:44:52.443326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.626 qpair failed and we were unable to recover it. 00:36:07.626 [2024-07-14 10:44:52.443517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.626 [2024-07-14 10:44:52.443548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.626 qpair failed and we were unable to recover it. 00:36:07.626 [2024-07-14 10:44:52.443785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.626 [2024-07-14 10:44:52.443815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.626 qpair failed and we were unable to recover it. 00:36:07.626 [2024-07-14 10:44:52.443966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.626 [2024-07-14 10:44:52.443996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.626 qpair failed and we were unable to recover it. 00:36:07.626 [2024-07-14 10:44:52.444219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.626 [2024-07-14 10:44:52.444257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.626 qpair failed and we were unable to recover it. 
00:36:07.626 [2024-07-14 10:44:52.444471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.626 [2024-07-14 10:44:52.444502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.626 qpair failed and we were unable to recover it. 00:36:07.626 [2024-07-14 10:44:52.444765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.626 [2024-07-14 10:44:52.444796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.626 qpair failed and we were unable to recover it. 00:36:07.626 [2024-07-14 10:44:52.444984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.626 [2024-07-14 10:44:52.445014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.626 qpair failed and we were unable to recover it. 00:36:07.626 [2024-07-14 10:44:52.445223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.626 [2024-07-14 10:44:52.445275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.626 qpair failed and we were unable to recover it. 00:36:07.626 [2024-07-14 10:44:52.445405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.626 [2024-07-14 10:44:52.445435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.626 qpair failed and we were unable to recover it. 00:36:07.626 [2024-07-14 10:44:52.445666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.626 [2024-07-14 10:44:52.445696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.626 qpair failed and we were unable to recover it. 00:36:07.626 [2024-07-14 10:44:52.446013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.626 [2024-07-14 10:44:52.446043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.626 qpair failed and we were unable to recover it. 00:36:07.626 [2024-07-14 10:44:52.446250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.626 [2024-07-14 10:44:52.446282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.626 qpair failed and we were unable to recover it. 00:36:07.626 [2024-07-14 10:44:52.446563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.626 [2024-07-14 10:44:52.446595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.626 qpair failed and we were unable to recover it. 00:36:07.626 [2024-07-14 10:44:52.446801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.626 [2024-07-14 10:44:52.446831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.626 qpair failed and we were unable to recover it. 
00:36:07.626 [2024-07-14 10:44:52.447096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.626 [2024-07-14 10:44:52.447127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.626 qpair failed and we were unable to recover it. 00:36:07.626 [2024-07-14 10:44:52.447406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.626 [2024-07-14 10:44:52.447437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.626 qpair failed and we were unable to recover it. 00:36:07.626 [2024-07-14 10:44:52.447639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.626 [2024-07-14 10:44:52.447669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.626 qpair failed and we were unable to recover it. 00:36:07.626 [2024-07-14 10:44:52.447882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.626 [2024-07-14 10:44:52.447912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.626 qpair failed and we were unable to recover it. 00:36:07.626 [2024-07-14 10:44:52.448109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.626 [2024-07-14 10:44:52.448138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.626 qpair failed and we were unable to recover it. 00:36:07.626 [2024-07-14 10:44:52.448439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.626 [2024-07-14 10:44:52.448470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.626 qpair failed and we were unable to recover it. 00:36:07.626 [2024-07-14 10:44:52.448675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.626 [2024-07-14 10:44:52.448705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.626 qpair failed and we were unable to recover it. 00:36:07.626 [2024-07-14 10:44:52.448942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.626 [2024-07-14 10:44:52.448972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.626 qpair failed and we were unable to recover it. 00:36:07.626 [2024-07-14 10:44:52.449256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.626 [2024-07-14 10:44:52.449288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.626 qpair failed and we were unable to recover it. 00:36:07.626 [2024-07-14 10:44:52.449479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.626 [2024-07-14 10:44:52.449509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.626 qpair failed and we were unable to recover it. 
00:36:07.626 [2024-07-14 10:44:52.449670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.626 [2024-07-14 10:44:52.449701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.626 qpair failed and we were unable to recover it. 00:36:07.626 [2024-07-14 10:44:52.449973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.626 [2024-07-14 10:44:52.450003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.626 qpair failed and we were unable to recover it. 00:36:07.626 [2024-07-14 10:44:52.450247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.626 [2024-07-14 10:44:52.450278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.626 qpair failed and we were unable to recover it. 00:36:07.626 [2024-07-14 10:44:52.450421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.626 [2024-07-14 10:44:52.450451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.626 qpair failed and we were unable to recover it. 00:36:07.626 [2024-07-14 10:44:52.450719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.626 [2024-07-14 10:44:52.450748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.626 qpair failed and we were unable to recover it. 00:36:07.626 [2024-07-14 10:44:52.451061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.627 [2024-07-14 10:44:52.451091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.627 qpair failed and we were unable to recover it. 00:36:07.627 [2024-07-14 10:44:52.451240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.627 [2024-07-14 10:44:52.451272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.627 qpair failed and we were unable to recover it. 00:36:07.627 [2024-07-14 10:44:52.451483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.627 [2024-07-14 10:44:52.451514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.627 qpair failed and we were unable to recover it. 00:36:07.627 [2024-07-14 10:44:52.451724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.627 [2024-07-14 10:44:52.451754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.627 qpair failed and we were unable to recover it. 00:36:07.627 [2024-07-14 10:44:52.451972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.627 [2024-07-14 10:44:52.452002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.627 qpair failed and we were unable to recover it. 
00:36:07.627 [2024-07-14 10:44:52.452285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.627 [2024-07-14 10:44:52.452323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.627 qpair failed and we were unable to recover it. 00:36:07.627 [2024-07-14 10:44:52.452542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.627 [2024-07-14 10:44:52.452572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.627 qpair failed and we were unable to recover it. 00:36:07.627 [2024-07-14 10:44:52.452761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.627 [2024-07-14 10:44:52.452791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.627 qpair failed and we were unable to recover it. 00:36:07.627 [2024-07-14 10:44:52.453053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.627 [2024-07-14 10:44:52.453083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.627 qpair failed and we were unable to recover it. 00:36:07.627 [2024-07-14 10:44:52.453385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.627 [2024-07-14 10:44:52.453416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.627 qpair failed and we were unable to recover it. 00:36:07.627 [2024-07-14 10:44:52.453695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.627 [2024-07-14 10:44:52.453725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.627 qpair failed and we were unable to recover it. 00:36:07.627 [2024-07-14 10:44:52.453951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.627 [2024-07-14 10:44:52.453981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.627 qpair failed and we were unable to recover it. 00:36:07.627 [2024-07-14 10:44:52.454263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.627 [2024-07-14 10:44:52.454294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.627 qpair failed and we were unable to recover it. 00:36:07.627 [2024-07-14 10:44:52.454550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.627 [2024-07-14 10:44:52.454580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.627 qpair failed and we were unable to recover it. 00:36:07.627 [2024-07-14 10:44:52.454741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.627 [2024-07-14 10:44:52.454771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.627 qpair failed and we were unable to recover it. 
00:36:07.627 [2024-07-14 10:44:52.454965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.627 [2024-07-14 10:44:52.454995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.627 qpair failed and we were unable to recover it. 00:36:07.627 [2024-07-14 10:44:52.455242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.627 [2024-07-14 10:44:52.455274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.627 qpair failed and we were unable to recover it. 00:36:07.627 [2024-07-14 10:44:52.455413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.627 [2024-07-14 10:44:52.455443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.627 qpair failed and we were unable to recover it. 00:36:07.627 [2024-07-14 10:44:52.455679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.627 [2024-07-14 10:44:52.455709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.627 qpair failed and we were unable to recover it. 00:36:07.627 [2024-07-14 10:44:52.456057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.627 [2024-07-14 10:44:52.456088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.627 qpair failed and we were unable to recover it. 00:36:07.627 [2024-07-14 10:44:52.456342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.627 [2024-07-14 10:44:52.456373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.627 qpair failed and we were unable to recover it. 00:36:07.627 [2024-07-14 10:44:52.456661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.627 [2024-07-14 10:44:52.456691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.627 qpair failed and we were unable to recover it. 00:36:07.627 [2024-07-14 10:44:52.457017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.627 [2024-07-14 10:44:52.457047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.627 qpair failed and we were unable to recover it. 00:36:07.627 [2024-07-14 10:44:52.457376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.627 [2024-07-14 10:44:52.457408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.627 qpair failed and we were unable to recover it. 00:36:07.627 [2024-07-14 10:44:52.457554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.627 [2024-07-14 10:44:52.457584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.627 qpair failed and we were unable to recover it. 
00:36:07.627 [2024-07-14 10:44:52.457712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.627 [2024-07-14 10:44:52.457743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.627 qpair failed and we were unable to recover it. 00:36:07.627 [2024-07-14 10:44:52.458000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.627 [2024-07-14 10:44:52.458031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.627 qpair failed and we were unable to recover it. 00:36:07.627 [2024-07-14 10:44:52.458294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.627 [2024-07-14 10:44:52.458326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.627 qpair failed and we were unable to recover it. 00:36:07.627 [2024-07-14 10:44:52.458545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.627 [2024-07-14 10:44:52.458576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.627 qpair failed and we were unable to recover it. 00:36:07.627 [2024-07-14 10:44:52.458730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.627 [2024-07-14 10:44:52.458760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.627 qpair failed and we were unable to recover it. 00:36:07.627 [2024-07-14 10:44:52.459014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.627 [2024-07-14 10:44:52.459044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.627 qpair failed and we were unable to recover it. 00:36:07.627 [2024-07-14 10:44:52.459321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.627 [2024-07-14 10:44:52.459352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.627 qpair failed and we were unable to recover it. 00:36:07.627 [2024-07-14 10:44:52.459506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.627 [2024-07-14 10:44:52.459537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.627 qpair failed and we were unable to recover it. 00:36:07.627 [2024-07-14 10:44:52.459739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.627 [2024-07-14 10:44:52.459770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.627 qpair failed and we were unable to recover it. 00:36:07.627 [2024-07-14 10:44:52.459917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.627 [2024-07-14 10:44:52.459947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.627 qpair failed and we were unable to recover it. 
00:36:07.627 [2024-07-14 10:44:52.460244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.627 [2024-07-14 10:44:52.460276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.627 qpair failed and we were unable to recover it. 00:36:07.627 [2024-07-14 10:44:52.460433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.627 [2024-07-14 10:44:52.460463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.627 qpair failed and we were unable to recover it. 00:36:07.627 [2024-07-14 10:44:52.460669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.627 [2024-07-14 10:44:52.460699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.627 qpair failed and we were unable to recover it. 00:36:07.627 [2024-07-14 10:44:52.460884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.627 [2024-07-14 10:44:52.460914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.627 qpair failed and we were unable to recover it. 00:36:07.627 [2024-07-14 10:44:52.461122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.627 [2024-07-14 10:44:52.461151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.627 qpair failed and we were unable to recover it. 00:36:07.627 [2024-07-14 10:44:52.461380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.627 [2024-07-14 10:44:52.461412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.627 qpair failed and we were unable to recover it. 00:36:07.627 [2024-07-14 10:44:52.461607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.628 [2024-07-14 10:44:52.461638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.628 qpair failed and we were unable to recover it. 00:36:07.628 [2024-07-14 10:44:52.461797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.628 [2024-07-14 10:44:52.461827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.628 qpair failed and we were unable to recover it. 00:36:07.628 [2024-07-14 10:44:52.462109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.628 [2024-07-14 10:44:52.462139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.628 qpair failed and we were unable to recover it. 00:36:07.628 [2024-07-14 10:44:52.462357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.628 [2024-07-14 10:44:52.462388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.628 qpair failed and we were unable to recover it. 
00:36:07.628 [2024-07-14 10:44:52.462599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.628 [2024-07-14 10:44:52.462635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.628 qpair failed and we were unable to recover it. 00:36:07.628 [2024-07-14 10:44:52.462900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.628 [2024-07-14 10:44:52.462930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.628 qpair failed and we were unable to recover it. 00:36:07.628 [2024-07-14 10:44:52.463131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.628 [2024-07-14 10:44:52.463161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.628 qpair failed and we were unable to recover it. 00:36:07.628 [2024-07-14 10:44:52.463378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.628 [2024-07-14 10:44:52.463409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.628 qpair failed and we were unable to recover it. 00:36:07.628 [2024-07-14 10:44:52.463672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.628 [2024-07-14 10:44:52.463703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.628 qpair failed and we were unable to recover it. 00:36:07.628 [2024-07-14 10:44:52.463919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.628 [2024-07-14 10:44:52.463950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.628 qpair failed and we were unable to recover it. 00:36:07.628 [2024-07-14 10:44:52.464237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.628 [2024-07-14 10:44:52.464268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.628 qpair failed and we were unable to recover it. 00:36:07.628 [2024-07-14 10:44:52.464418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.628 [2024-07-14 10:44:52.464448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.628 qpair failed and we were unable to recover it. 00:36:07.628 [2024-07-14 10:44:52.464588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.628 [2024-07-14 10:44:52.464618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.628 qpair failed and we were unable to recover it. 00:36:07.628 [2024-07-14 10:44:52.464760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.628 [2024-07-14 10:44:52.464790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.628 qpair failed and we were unable to recover it. 
00:36:07.628 [2024-07-14 10:44:52.465078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.628 [2024-07-14 10:44:52.465108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.628 qpair failed and we were unable to recover it. 00:36:07.628 [2024-07-14 10:44:52.465412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.628 [2024-07-14 10:44:52.465443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.628 qpair failed and we were unable to recover it. 00:36:07.628 [2024-07-14 10:44:52.465652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.628 [2024-07-14 10:44:52.465682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.628 qpair failed and we were unable to recover it. 00:36:07.628 [2024-07-14 10:44:52.465904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.628 [2024-07-14 10:44:52.465934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.628 qpair failed and we were unable to recover it. 00:36:07.628 [2024-07-14 10:44:52.466085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.628 [2024-07-14 10:44:52.466115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.628 qpair failed and we were unable to recover it. 00:36:07.628 [2024-07-14 10:44:52.466315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.628 [2024-07-14 10:44:52.466347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.628 qpair failed and we were unable to recover it. 00:36:07.628 [2024-07-14 10:44:52.466604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.628 [2024-07-14 10:44:52.466634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.628 qpair failed and we were unable to recover it. 00:36:07.628 [2024-07-14 10:44:52.466957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.628 [2024-07-14 10:44:52.466988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.628 qpair failed and we were unable to recover it. 00:36:07.628 [2024-07-14 10:44:52.467276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.628 [2024-07-14 10:44:52.467307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.628 qpair failed and we were unable to recover it. 00:36:07.628 [2024-07-14 10:44:52.467498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.628 [2024-07-14 10:44:52.467528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.628 qpair failed and we were unable to recover it. 
00:36:07.628 [2024-07-14 10:44:52.467669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.628 [2024-07-14 10:44:52.467699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.628 qpair failed and we were unable to recover it. 00:36:07.628 [2024-07-14 10:44:52.467859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.628 [2024-07-14 10:44:52.467888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.628 qpair failed and we were unable to recover it. 00:36:07.628 [2024-07-14 10:44:52.468078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.628 [2024-07-14 10:44:52.468109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.628 qpair failed and we were unable to recover it. 00:36:07.628 [2024-07-14 10:44:52.468302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.628 [2024-07-14 10:44:52.468333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.628 qpair failed and we were unable to recover it. 00:36:07.628 [2024-07-14 10:44:52.468539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.628 [2024-07-14 10:44:52.468569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.628 qpair failed and we were unable to recover it. 00:36:07.628 [2024-07-14 10:44:52.468715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.628 [2024-07-14 10:44:52.468745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.628 qpair failed and we were unable to recover it. 00:36:07.628 [2024-07-14 10:44:52.468971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.628 [2024-07-14 10:44:52.469001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.628 qpair failed and we were unable to recover it. 00:36:07.628 [2024-07-14 10:44:52.469310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.628 [2024-07-14 10:44:52.469343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.628 qpair failed and we were unable to recover it. 00:36:07.628 [2024-07-14 10:44:52.469478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.628 [2024-07-14 10:44:52.469508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.628 qpair failed and we were unable to recover it. 00:36:07.628 [2024-07-14 10:44:52.469665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.628 [2024-07-14 10:44:52.469695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.628 qpair failed and we were unable to recover it. 
00:36:07.628 [2024-07-14 10:44:52.469892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.628 [2024-07-14 10:44:52.469922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.628 qpair failed and we were unable to recover it. 00:36:07.628 [2024-07-14 10:44:52.470203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.628 [2024-07-14 10:44:52.470246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.628 qpair failed and we were unable to recover it. 00:36:07.628 [2024-07-14 10:44:52.470438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.628 [2024-07-14 10:44:52.470468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.628 qpair failed and we were unable to recover it. 00:36:07.628 [2024-07-14 10:44:52.470690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.628 [2024-07-14 10:44:52.470720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.628 qpair failed and we were unable to recover it. 00:36:07.628 [2024-07-14 10:44:52.470934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.628 [2024-07-14 10:44:52.470965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.628 qpair failed and we were unable to recover it. 00:36:07.628 [2024-07-14 10:44:52.471236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.628 [2024-07-14 10:44:52.471268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.628 qpair failed and we were unable to recover it. 00:36:07.628 [2024-07-14 10:44:52.471486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.628 [2024-07-14 10:44:52.471517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.628 qpair failed and we were unable to recover it. 00:36:07.629 [2024-07-14 10:44:52.471723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-07-14 10:44:52.471753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 00:36:07.629 [2024-07-14 10:44:52.472069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-07-14 10:44:52.472098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 00:36:07.629 [2024-07-14 10:44:52.472247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-07-14 10:44:52.472278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 
00:36:07.629 [2024-07-14 10:44:52.472502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-07-14 10:44:52.472537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 00:36:07.629 [2024-07-14 10:44:52.472697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-07-14 10:44:52.472728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 00:36:07.629 [2024-07-14 10:44:52.473024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-07-14 10:44:52.473055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 00:36:07.629 [2024-07-14 10:44:52.473277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-07-14 10:44:52.473308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 00:36:07.629 [2024-07-14 10:44:52.473460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-07-14 10:44:52.473489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 00:36:07.629 [2024-07-14 10:44:52.473680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-07-14 10:44:52.473709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 00:36:07.629 [2024-07-14 10:44:52.473988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-07-14 10:44:52.474018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 00:36:07.629 [2024-07-14 10:44:52.474294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-07-14 10:44:52.474325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 00:36:07.629 [2024-07-14 10:44:52.474588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-07-14 10:44:52.474620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 00:36:07.629 [2024-07-14 10:44:52.474822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-07-14 10:44:52.474852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 
00:36:07.629 [2024-07-14 10:44:52.475141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-07-14 10:44:52.475172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 00:36:07.629 [2024-07-14 10:44:52.475445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-07-14 10:44:52.475476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 00:36:07.629 [2024-07-14 10:44:52.475629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-07-14 10:44:52.475659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 00:36:07.629 [2024-07-14 10:44:52.475882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-07-14 10:44:52.475912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 00:36:07.629 [2024-07-14 10:44:52.476203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-07-14 10:44:52.476240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 00:36:07.629 [2024-07-14 10:44:52.476395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-07-14 10:44:52.476425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 00:36:07.629 [2024-07-14 10:44:52.476579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-07-14 10:44:52.476610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 00:36:07.629 [2024-07-14 10:44:52.476822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-07-14 10:44:52.476852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 00:36:07.629 [2024-07-14 10:44:52.477107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-07-14 10:44:52.477138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 00:36:07.629 [2024-07-14 10:44:52.477338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-07-14 10:44:52.477370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 
00:36:07.629 [2024-07-14 10:44:52.477576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-07-14 10:44:52.477606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 00:36:07.629 [2024-07-14 10:44:52.477747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-07-14 10:44:52.477777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 00:36:07.629 [2024-07-14 10:44:52.478106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-07-14 10:44:52.478135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 00:36:07.629 [2024-07-14 10:44:52.478347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-07-14 10:44:52.478379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 00:36:07.629 [2024-07-14 10:44:52.478531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-07-14 10:44:52.478561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 00:36:07.629 [2024-07-14 10:44:52.478770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-07-14 10:44:52.478801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 00:36:07.629 [2024-07-14 10:44:52.479006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-07-14 10:44:52.479035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 00:36:07.629 [2024-07-14 10:44:52.479332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-07-14 10:44:52.479364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 00:36:07.629 [2024-07-14 10:44:52.479532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-07-14 10:44:52.479562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 00:36:07.629 [2024-07-14 10:44:52.479698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-07-14 10:44:52.479728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 
00:36:07.629 [2024-07-14 10:44:52.480013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-07-14 10:44:52.480043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 00:36:07.629 [2024-07-14 10:44:52.480305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-07-14 10:44:52.480337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 00:36:07.629 [2024-07-14 10:44:52.480549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-07-14 10:44:52.480578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 00:36:07.629 [2024-07-14 10:44:52.480793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-07-14 10:44:52.480824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 00:36:07.629 [2024-07-14 10:44:52.481084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-07-14 10:44:52.481114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 00:36:07.629 [2024-07-14 10:44:52.481323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-07-14 10:44:52.481354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 00:36:07.629 [2024-07-14 10:44:52.481516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-07-14 10:44:52.481546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 00:36:07.629 [2024-07-14 10:44:52.481700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-07-14 10:44:52.481729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 00:36:07.630 [2024-07-14 10:44:52.481989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-07-14 10:44:52.482018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 00:36:07.630 [2024-07-14 10:44:52.482223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-07-14 10:44:52.482261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 
00:36:07.630 [2024-07-14 10:44:52.482411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-07-14 10:44:52.482446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 00:36:07.630 [2024-07-14 10:44:52.482645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-07-14 10:44:52.482675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 00:36:07.630 [2024-07-14 10:44:52.482890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-07-14 10:44:52.482920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 00:36:07.630 [2024-07-14 10:44:52.483135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-07-14 10:44:52.483164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 00:36:07.630 [2024-07-14 10:44:52.483435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-07-14 10:44:52.483467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 00:36:07.630 [2024-07-14 10:44:52.483724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-07-14 10:44:52.483754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 00:36:07.630 [2024-07-14 10:44:52.483983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-07-14 10:44:52.484013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 00:36:07.630 [2024-07-14 10:44:52.484299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-07-14 10:44:52.484330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 00:36:07.630 [2024-07-14 10:44:52.484459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-07-14 10:44:52.484489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 00:36:07.630 [2024-07-14 10:44:52.484773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-07-14 10:44:52.484803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 
00:36:07.630 [2024-07-14 10:44:52.485021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-07-14 10:44:52.485051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 00:36:07.630 [2024-07-14 10:44:52.485317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-07-14 10:44:52.485350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 00:36:07.630 [2024-07-14 10:44:52.485560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-07-14 10:44:52.485589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 00:36:07.630 [2024-07-14 10:44:52.485732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-07-14 10:44:52.485762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 00:36:07.630 [2024-07-14 10:44:52.485901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-07-14 10:44:52.485931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 00:36:07.630 [2024-07-14 10:44:52.486125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-07-14 10:44:52.486155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 00:36:07.630 [2024-07-14 10:44:52.486357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-07-14 10:44:52.486389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 00:36:07.630 [2024-07-14 10:44:52.486591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-07-14 10:44:52.486621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 00:36:07.630 [2024-07-14 10:44:52.486827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-07-14 10:44:52.486857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 00:36:07.630 [2024-07-14 10:44:52.487142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-07-14 10:44:52.487173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 
00:36:07.630 [2024-07-14 10:44:52.487380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-07-14 10:44:52.487413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 00:36:07.630 [2024-07-14 10:44:52.487564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-07-14 10:44:52.487594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 00:36:07.630 [2024-07-14 10:44:52.487816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-07-14 10:44:52.487845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 00:36:07.630 [2024-07-14 10:44:52.488152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-07-14 10:44:52.488183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 00:36:07.630 [2024-07-14 10:44:52.488398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-07-14 10:44:52.488428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 00:36:07.630 [2024-07-14 10:44:52.488709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-07-14 10:44:52.488740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 00:36:07.630 [2024-07-14 10:44:52.489000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-07-14 10:44:52.489030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 00:36:07.630 [2024-07-14 10:44:52.489284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-07-14 10:44:52.489317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 00:36:07.630 [2024-07-14 10:44:52.489541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-07-14 10:44:52.489571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 00:36:07.630 [2024-07-14 10:44:52.489704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-07-14 10:44:52.489733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 
00:36:07.630 [2024-07-14 10:44:52.489976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-07-14 10:44:52.490006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 00:36:07.630 [2024-07-14 10:44:52.490248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-07-14 10:44:52.490279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 00:36:07.630 [2024-07-14 10:44:52.490534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-07-14 10:44:52.490564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 00:36:07.630 [2024-07-14 10:44:52.490711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.631 [2024-07-14 10:44:52.490741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.631 qpair failed and we were unable to recover it. 00:36:07.631 [2024-07-14 10:44:52.491011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.631 [2024-07-14 10:44:52.491041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.631 qpair failed and we were unable to recover it. 00:36:07.631 [2024-07-14 10:44:52.491304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.631 [2024-07-14 10:44:52.491335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.631 qpair failed and we were unable to recover it. 00:36:07.631 [2024-07-14 10:44:52.491641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.631 [2024-07-14 10:44:52.491672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.631 qpair failed and we were unable to recover it. 00:36:07.631 [2024-07-14 10:44:52.491869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.631 [2024-07-14 10:44:52.491900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.631 qpair failed and we were unable to recover it. 00:36:07.631 [2024-07-14 10:44:52.492088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.631 [2024-07-14 10:44:52.492118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.631 qpair failed and we were unable to recover it. 00:36:07.631 [2024-07-14 10:44:52.492334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.631 [2024-07-14 10:44:52.492365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.631 qpair failed and we were unable to recover it. 
00:36:07.631 [2024-07-14 10:44:52.492509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.631 [2024-07-14 10:44:52.492543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.631 qpair failed and we were unable to recover it. 00:36:07.631 [2024-07-14 10:44:52.492756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.631 [2024-07-14 10:44:52.492786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.631 qpair failed and we were unable to recover it. 00:36:07.631 [2024-07-14 10:44:52.493091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.631 [2024-07-14 10:44:52.493121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.631 qpair failed and we were unable to recover it. 00:36:07.631 [2024-07-14 10:44:52.493330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.631 [2024-07-14 10:44:52.493362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.631 qpair failed and we were unable to recover it. 00:36:07.631 [2024-07-14 10:44:52.493572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.631 [2024-07-14 10:44:52.493602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.631 qpair failed and we were unable to recover it. 00:36:07.631 [2024-07-14 10:44:52.493807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.631 [2024-07-14 10:44:52.493838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.631 qpair failed and we were unable to recover it. 00:36:07.631 [2024-07-14 10:44:52.494039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.631 [2024-07-14 10:44:52.494069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.631 qpair failed and we were unable to recover it. 00:36:07.631 [2024-07-14 10:44:52.494286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.631 [2024-07-14 10:44:52.494318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.631 qpair failed and we were unable to recover it. 00:36:07.631 [2024-07-14 10:44:52.494507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.631 [2024-07-14 10:44:52.494537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.631 qpair failed and we were unable to recover it. 00:36:07.631 [2024-07-14 10:44:52.494825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.631 [2024-07-14 10:44:52.494855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.631 qpair failed and we were unable to recover it. 
00:36:07.631 [2024-07-14 10:44:52.495117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.631 [2024-07-14 10:44:52.495147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.631 qpair failed and we were unable to recover it. 00:36:07.631 [2024-07-14 10:44:52.495450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.631 [2024-07-14 10:44:52.495481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.631 qpair failed and we were unable to recover it. 00:36:07.631 [2024-07-14 10:44:52.495782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.631 [2024-07-14 10:44:52.495814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.631 qpair failed and we were unable to recover it. 00:36:07.631 [2024-07-14 10:44:52.496034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.631 [2024-07-14 10:44:52.496064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.631 qpair failed and we were unable to recover it. 00:36:07.631 [2024-07-14 10:44:52.496326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.631 [2024-07-14 10:44:52.496357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.631 qpair failed and we were unable to recover it. 00:36:07.631 [2024-07-14 10:44:52.496508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.631 [2024-07-14 10:44:52.496537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.631 qpair failed and we were unable to recover it. 00:36:07.631 [2024-07-14 10:44:52.496817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.631 [2024-07-14 10:44:52.496847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.631 qpair failed and we were unable to recover it. 00:36:07.631 [2024-07-14 10:44:52.497102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.631 [2024-07-14 10:44:52.497132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.631 qpair failed and we were unable to recover it. 00:36:07.631 [2024-07-14 10:44:52.497413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.631 [2024-07-14 10:44:52.497444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.631 qpair failed and we were unable to recover it. 00:36:07.631 [2024-07-14 10:44:52.497651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.631 [2024-07-14 10:44:52.497682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.631 qpair failed and we were unable to recover it. 
00:36:07.631 [2024-07-14 10:44:52.497896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.631 [2024-07-14 10:44:52.497927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.631 qpair failed and we were unable to recover it. 00:36:07.631 [2024-07-14 10:44:52.498132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.631 [2024-07-14 10:44:52.498162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.631 qpair failed and we were unable to recover it. 00:36:07.631 [2024-07-14 10:44:52.498428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.631 [2024-07-14 10:44:52.498459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.631 qpair failed and we were unable to recover it. 00:36:07.631 [2024-07-14 10:44:52.498672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.631 [2024-07-14 10:44:52.498702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.631 qpair failed and we were unable to recover it. 00:36:07.631 [2024-07-14 10:44:52.499014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.631 [2024-07-14 10:44:52.499044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.631 qpair failed and we were unable to recover it. 00:36:07.631 [2024-07-14 10:44:52.499328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.631 [2024-07-14 10:44:52.499359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.631 qpair failed and we were unable to recover it. 00:36:07.631 [2024-07-14 10:44:52.499591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.631 [2024-07-14 10:44:52.499621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.631 qpair failed and we were unable to recover it. 00:36:07.631 [2024-07-14 10:44:52.499842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.631 [2024-07-14 10:44:52.499912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.631 qpair failed and we were unable to recover it. 00:36:07.631 [2024-07-14 10:44:52.500077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.631 [2024-07-14 10:44:52.500112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.631 qpair failed and we were unable to recover it. 00:36:07.631 [2024-07-14 10:44:52.500320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.631 [2024-07-14 10:44:52.500354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.631 qpair failed and we were unable to recover it. 
00:36:07.631 [2024-07-14 10:44:52.500549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.631 [2024-07-14 10:44:52.500580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.631 qpair failed and we were unable to recover it. 00:36:07.631 [2024-07-14 10:44:52.500751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.631 [2024-07-14 10:44:52.500782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.631 qpair failed and we were unable to recover it. 00:36:07.631 [2024-07-14 10:44:52.500929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.631 [2024-07-14 10:44:52.500958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.631 qpair failed and we were unable to recover it. 00:36:07.631 [2024-07-14 10:44:52.501243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.631 [2024-07-14 10:44:52.501275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.631 qpair failed and we were unable to recover it. 00:36:07.631 [2024-07-14 10:44:52.501435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.632 [2024-07-14 10:44:52.501466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.632 qpair failed and we were unable to recover it. 00:36:07.632 [2024-07-14 10:44:52.501730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.632 [2024-07-14 10:44:52.501761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.632 qpair failed and we were unable to recover it. 00:36:07.632 [2024-07-14 10:44:52.502031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.632 [2024-07-14 10:44:52.502062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.632 qpair failed and we were unable to recover it. 00:36:07.632 [2024-07-14 10:44:52.502331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.632 [2024-07-14 10:44:52.502364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.632 qpair failed and we were unable to recover it. 00:36:07.632 [2024-07-14 10:44:52.502509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.632 [2024-07-14 10:44:52.502539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.632 qpair failed and we were unable to recover it. 00:36:07.632 [2024-07-14 10:44:52.502828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.632 [2024-07-14 10:44:52.502860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.632 qpair failed and we were unable to recover it. 
00:36:07.632 [2024-07-14 10:44:52.503158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.632 [2024-07-14 10:44:52.503202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.632 qpair failed and we were unable to recover it. 00:36:07.632 [2024-07-14 10:44:52.503495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.632 [2024-07-14 10:44:52.503525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.632 qpair failed and we were unable to recover it. 00:36:07.632 [2024-07-14 10:44:52.503800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.632 [2024-07-14 10:44:52.503832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.632 qpair failed and we were unable to recover it. 00:36:07.632 [2024-07-14 10:44:52.504095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.632 [2024-07-14 10:44:52.504126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.632 qpair failed and we were unable to recover it. 00:36:07.632 [2024-07-14 10:44:52.504419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.632 [2024-07-14 10:44:52.504451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.632 qpair failed and we were unable to recover it. 00:36:07.632 [2024-07-14 10:44:52.504674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.632 [2024-07-14 10:44:52.504705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.632 qpair failed and we were unable to recover it. 00:36:07.632 [2024-07-14 10:44:52.504995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.632 [2024-07-14 10:44:52.505025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.632 qpair failed and we were unable to recover it. 00:36:07.632 [2024-07-14 10:44:52.505248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.632 [2024-07-14 10:44:52.505280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.632 qpair failed and we were unable to recover it. 00:36:07.632 [2024-07-14 10:44:52.505411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.632 [2024-07-14 10:44:52.505441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.632 qpair failed and we were unable to recover it. 00:36:07.632 [2024-07-14 10:44:52.505703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.632 [2024-07-14 10:44:52.505734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.632 qpair failed and we were unable to recover it. 
00:36:07.632 [2024-07-14 10:44:52.505965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.632 [2024-07-14 10:44:52.505995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.632 qpair failed and we were unable to recover it. 00:36:07.632 [2024-07-14 10:44:52.506189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.632 [2024-07-14 10:44:52.506220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.632 qpair failed and we were unable to recover it. 00:36:07.632 [2024-07-14 10:44:52.506436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.632 [2024-07-14 10:44:52.506467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.632 qpair failed and we were unable to recover it. 00:36:07.632 [2024-07-14 10:44:52.506667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.632 [2024-07-14 10:44:52.506698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.632 qpair failed and we were unable to recover it. 00:36:07.632 [2024-07-14 10:44:52.506920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.632 [2024-07-14 10:44:52.506950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.632 qpair failed and we were unable to recover it. 00:36:07.632 [2024-07-14 10:44:52.507157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.632 [2024-07-14 10:44:52.507188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.632 qpair failed and we were unable to recover it. 00:36:07.632 [2024-07-14 10:44:52.507480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.632 [2024-07-14 10:44:52.507511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.632 qpair failed and we were unable to recover it. 00:36:07.632 [2024-07-14 10:44:52.507732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.632 [2024-07-14 10:44:52.507763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.632 qpair failed and we were unable to recover it. 00:36:07.632 [2024-07-14 10:44:52.507986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.632 [2024-07-14 10:44:52.508016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.632 qpair failed and we were unable to recover it. 00:36:07.632 [2024-07-14 10:44:52.508313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.632 [2024-07-14 10:44:52.508345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.632 qpair failed and we were unable to recover it. 
00:36:07.632 [2024-07-14 10:44:52.508510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.632 [2024-07-14 10:44:52.508540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.632 qpair failed and we were unable to recover it. 00:36:07.632 [2024-07-14 10:44:52.508703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.632 [2024-07-14 10:44:52.508736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.632 qpair failed and we were unable to recover it. 00:36:07.632 [2024-07-14 10:44:52.508999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.632 [2024-07-14 10:44:52.509030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.632 qpair failed and we were unable to recover it. 00:36:07.632 [2024-07-14 10:44:52.509267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.632 [2024-07-14 10:44:52.509299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.632 qpair failed and we were unable to recover it. 00:36:07.632 [2024-07-14 10:44:52.509499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.632 [2024-07-14 10:44:52.509529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.632 qpair failed and we were unable to recover it. 00:36:07.632 [2024-07-14 10:44:52.509670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.632 [2024-07-14 10:44:52.509700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.632 qpair failed and we were unable to recover it. 00:36:07.632 [2024-07-14 10:44:52.510011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.632 [2024-07-14 10:44:52.510043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.632 qpair failed and we were unable to recover it. 00:36:07.632 [2024-07-14 10:44:52.510208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.632 [2024-07-14 10:44:52.510248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.632 qpair failed and we were unable to recover it. 00:36:07.632 [2024-07-14 10:44:52.510440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.632 [2024-07-14 10:44:52.510470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.632 qpair failed and we were unable to recover it. 00:36:07.632 [2024-07-14 10:44:52.510612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.632 [2024-07-14 10:44:52.510643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.632 qpair failed and we were unable to recover it. 
00:36:07.632 [2024-07-14 10:44:52.510797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.632 [2024-07-14 10:44:52.510829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.632 qpair failed and we were unable to recover it. 00:36:07.632 [2024-07-14 10:44:52.511042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.632 [2024-07-14 10:44:52.511074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.632 qpair failed and we were unable to recover it. 00:36:07.632 [2024-07-14 10:44:52.511280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.632 [2024-07-14 10:44:52.511313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.632 qpair failed and we were unable to recover it. 00:36:07.632 [2024-07-14 10:44:52.511474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.632 [2024-07-14 10:44:52.511504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.632 qpair failed and we were unable to recover it. 00:36:07.632 [2024-07-14 10:44:52.511669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.632 [2024-07-14 10:44:52.511699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.632 qpair failed and we were unable to recover it. 00:36:07.633 [2024-07-14 10:44:52.512021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.633 [2024-07-14 10:44:52.512054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.633 qpair failed and we were unable to recover it. 00:36:07.633 [2024-07-14 10:44:52.512265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.633 [2024-07-14 10:44:52.512297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.633 qpair failed and we were unable to recover it. 00:36:07.633 [2024-07-14 10:44:52.512499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.633 [2024-07-14 10:44:52.512531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.633 qpair failed and we were unable to recover it. 00:36:07.633 [2024-07-14 10:44:52.512694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.633 [2024-07-14 10:44:52.512725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.633 qpair failed and we were unable to recover it. 00:36:07.633 [2024-07-14 10:44:52.512872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.633 [2024-07-14 10:44:52.512903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.633 qpair failed and we were unable to recover it. 
00:36:07.633 [2024-07-14 10:44:52.513048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.633 [2024-07-14 10:44:52.513084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.633 qpair failed and we were unable to recover it. 00:36:07.633 [2024-07-14 10:44:52.513323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.633 [2024-07-14 10:44:52.513355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.633 qpair failed and we were unable to recover it. 00:36:07.633 [2024-07-14 10:44:52.513498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.633 [2024-07-14 10:44:52.513529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.633 qpair failed and we were unable to recover it. 00:36:07.633 [2024-07-14 10:44:52.513690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.633 [2024-07-14 10:44:52.513721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.633 qpair failed and we were unable to recover it. 00:36:07.633 [2024-07-14 10:44:52.514001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.633 [2024-07-14 10:44:52.514031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.633 qpair failed and we were unable to recover it. 00:36:07.633 [2024-07-14 10:44:52.514240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.633 [2024-07-14 10:44:52.514271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.633 qpair failed and we were unable to recover it. 00:36:07.633 [2024-07-14 10:44:52.514419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.633 [2024-07-14 10:44:52.514449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.633 qpair failed and we were unable to recover it. 00:36:07.633 [2024-07-14 10:44:52.514727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.633 [2024-07-14 10:44:52.514757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.633 qpair failed and we were unable to recover it. 00:36:07.633 [2024-07-14 10:44:52.514973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.633 [2024-07-14 10:44:52.515004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.633 qpair failed and we were unable to recover it. 00:36:07.633 [2024-07-14 10:44:52.515296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.633 [2024-07-14 10:44:52.515329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.633 qpair failed and we were unable to recover it. 
00:36:07.633 [2024-07-14 10:44:52.515540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.633 [2024-07-14 10:44:52.515570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.633 qpair failed and we were unable to recover it. 00:36:07.633 [2024-07-14 10:44:52.515808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.633 [2024-07-14 10:44:52.515840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.633 qpair failed and we were unable to recover it. 00:36:07.633 [2024-07-14 10:44:52.516099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.633 [2024-07-14 10:44:52.516132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.633 qpair failed and we were unable to recover it. 00:36:07.633 [2024-07-14 10:44:52.516339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.633 [2024-07-14 10:44:52.516371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.633 qpair failed and we were unable to recover it. 00:36:07.633 [2024-07-14 10:44:52.516567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.633 [2024-07-14 10:44:52.516598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.633 qpair failed and we were unable to recover it. 00:36:07.633 [2024-07-14 10:44:52.516911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.633 [2024-07-14 10:44:52.516942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.633 qpair failed and we were unable to recover it. 00:36:07.633 [2024-07-14 10:44:52.517173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.633 [2024-07-14 10:44:52.517204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.633 qpair failed and we were unable to recover it. 00:36:07.633 [2024-07-14 10:44:52.517412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.633 [2024-07-14 10:44:52.517445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.633 qpair failed and we were unable to recover it. 00:36:07.633 [2024-07-14 10:44:52.517599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.633 [2024-07-14 10:44:52.517630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.633 qpair failed and we were unable to recover it. 00:36:07.633 [2024-07-14 10:44:52.517798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.633 [2024-07-14 10:44:52.517829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.633 qpair failed and we were unable to recover it. 
00:36:07.633 [2024-07-14 10:44:52.518058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.633 [2024-07-14 10:44:52.518090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.633 qpair failed and we were unable to recover it. 00:36:07.633 [2024-07-14 10:44:52.518380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.633 [2024-07-14 10:44:52.518413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.633 qpair failed and we were unable to recover it. 00:36:07.633 [2024-07-14 10:44:52.518573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.633 [2024-07-14 10:44:52.518603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.633 qpair failed and we were unable to recover it. 00:36:07.633 [2024-07-14 10:44:52.518733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.633 [2024-07-14 10:44:52.518763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.633 qpair failed and we were unable to recover it. 00:36:07.633 [2024-07-14 10:44:52.519065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.633 [2024-07-14 10:44:52.519096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.633 qpair failed and we were unable to recover it. 00:36:07.633 [2024-07-14 10:44:52.519338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.633 [2024-07-14 10:44:52.519371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.633 qpair failed and we were unable to recover it. 00:36:07.633 [2024-07-14 10:44:52.519515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.633 [2024-07-14 10:44:52.519547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.633 qpair failed and we were unable to recover it. 00:36:07.633 [2024-07-14 10:44:52.519755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.633 [2024-07-14 10:44:52.519798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.633 qpair failed and we were unable to recover it. 00:36:07.633 [2024-07-14 10:44:52.520083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.633 [2024-07-14 10:44:52.520115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.633 qpair failed and we were unable to recover it. 00:36:07.633 [2024-07-14 10:44:52.520332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.633 [2024-07-14 10:44:52.520363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.633 qpair failed and we were unable to recover it. 
00:36:07.633 [2024-07-14 10:44:52.520575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.633 [2024-07-14 10:44:52.520606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.633 qpair failed and we were unable to recover it. 00:36:07.633 [2024-07-14 10:44:52.520882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.633 [2024-07-14 10:44:52.520913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.633 qpair failed and we were unable to recover it. 00:36:07.633 [2024-07-14 10:44:52.521099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.633 [2024-07-14 10:44:52.521129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.633 qpair failed and we were unable to recover it. 00:36:07.633 [2024-07-14 10:44:52.521319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.633 [2024-07-14 10:44:52.521351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.633 qpair failed and we were unable to recover it. 00:36:07.633 [2024-07-14 10:44:52.521614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.633 [2024-07-14 10:44:52.521646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.633 qpair failed and we were unable to recover it. 00:36:07.633 [2024-07-14 10:44:52.521841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.633 [2024-07-14 10:44:52.521872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.633 qpair failed and we were unable to recover it. 00:36:07.634 [2024-07-14 10:44:52.522079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.634 [2024-07-14 10:44:52.522109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.634 qpair failed and we were unable to recover it. 00:36:07.634 [2024-07-14 10:44:52.522318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.634 [2024-07-14 10:44:52.522351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.634 qpair failed and we were unable to recover it. 00:36:07.634 [2024-07-14 10:44:52.522499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.634 [2024-07-14 10:44:52.522529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.634 qpair failed and we were unable to recover it. 00:36:07.634 [2024-07-14 10:44:52.522831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.634 [2024-07-14 10:44:52.522863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.634 qpair failed and we were unable to recover it. 
00:36:07.634 [2024-07-14 10:44:52.523063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.634 [2024-07-14 10:44:52.523093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.634 qpair failed and we were unable to recover it. 00:36:07.634 [2024-07-14 10:44:52.523327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.634 [2024-07-14 10:44:52.523360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.634 qpair failed and we were unable to recover it. 00:36:07.634 [2024-07-14 10:44:52.523485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.634 [2024-07-14 10:44:52.523516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.634 qpair failed and we were unable to recover it. 00:36:07.634 [2024-07-14 10:44:52.523692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.634 [2024-07-14 10:44:52.523724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.634 qpair failed and we were unable to recover it. 00:36:07.634 [2024-07-14 10:44:52.523926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.634 [2024-07-14 10:44:52.523957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.634 qpair failed and we were unable to recover it. 00:36:07.634 [2024-07-14 10:44:52.524217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.634 [2024-07-14 10:44:52.524260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.634 qpair failed and we were unable to recover it. 00:36:07.634 [2024-07-14 10:44:52.524399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.634 [2024-07-14 10:44:52.524431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.634 qpair failed and we were unable to recover it. 00:36:07.634 [2024-07-14 10:44:52.524650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.634 [2024-07-14 10:44:52.524680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.634 qpair failed and we were unable to recover it. 00:36:07.634 [2024-07-14 10:44:52.524956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.634 [2024-07-14 10:44:52.524986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.634 qpair failed and we were unable to recover it. 00:36:07.634 [2024-07-14 10:44:52.525177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.634 [2024-07-14 10:44:52.525208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.634 qpair failed and we were unable to recover it. 
00:36:07.634 [2024-07-14 10:44:52.525369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.634 [2024-07-14 10:44:52.525400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.634 qpair failed and we were unable to recover it. 00:36:07.634 [2024-07-14 10:44:52.525570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.634 [2024-07-14 10:44:52.525600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.634 qpair failed and we were unable to recover it. 00:36:07.634 [2024-07-14 10:44:52.525802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.634 [2024-07-14 10:44:52.525833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.634 qpair failed and we were unable to recover it. 00:36:07.634 [2024-07-14 10:44:52.526043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.634 [2024-07-14 10:44:52.526074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.634 qpair failed and we were unable to recover it. 00:36:07.634 [2024-07-14 10:44:52.526285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.634 [2024-07-14 10:44:52.526317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.634 qpair failed and we were unable to recover it. 00:36:07.634 [2024-07-14 10:44:52.526475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.634 [2024-07-14 10:44:52.526507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.634 qpair failed and we were unable to recover it. 00:36:07.634 [2024-07-14 10:44:52.526716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.634 [2024-07-14 10:44:52.526747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.634 qpair failed and we were unable to recover it. 00:36:07.634 [2024-07-14 10:44:52.526863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.634 [2024-07-14 10:44:52.526893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.634 qpair failed and we were unable to recover it. 00:36:07.634 [2024-07-14 10:44:52.527025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.634 [2024-07-14 10:44:52.527055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.634 qpair failed and we were unable to recover it. 00:36:07.634 [2024-07-14 10:44:52.527265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.634 [2024-07-14 10:44:52.527295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.634 qpair failed and we were unable to recover it. 
00:36:07.634 [2024-07-14 10:44:52.527495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.634 [2024-07-14 10:44:52.527525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.634 qpair failed and we were unable to recover it. 00:36:07.634 [2024-07-14 10:44:52.527764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.634 [2024-07-14 10:44:52.527796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.634 qpair failed and we were unable to recover it. 00:36:07.634 [2024-07-14 10:44:52.527952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.634 [2024-07-14 10:44:52.527983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.634 qpair failed and we were unable to recover it. 00:36:07.634 [2024-07-14 10:44:52.528118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.634 [2024-07-14 10:44:52.528150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.634 qpair failed and we were unable to recover it. 00:36:07.634 [2024-07-14 10:44:52.528304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.634 [2024-07-14 10:44:52.528335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.634 qpair failed and we were unable to recover it. 00:36:07.634 [2024-07-14 10:44:52.528469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.634 [2024-07-14 10:44:52.528499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.634 qpair failed and we were unable to recover it. 00:36:07.634 [2024-07-14 10:44:52.528628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.634 [2024-07-14 10:44:52.528660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.634 qpair failed and we were unable to recover it. 00:36:07.634 [2024-07-14 10:44:52.528803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.634 [2024-07-14 10:44:52.528838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.634 qpair failed and we were unable to recover it. 00:36:07.634 [2024-07-14 10:44:52.528986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.634 [2024-07-14 10:44:52.529016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.634 qpair failed and we were unable to recover it. 00:36:07.634 [2024-07-14 10:44:52.529209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.634 [2024-07-14 10:44:52.529250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.634 qpair failed and we were unable to recover it. 
00:36:07.634 [2024-07-14 10:44:52.529397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.634 [2024-07-14 10:44:52.529428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.634 qpair failed and we were unable to recover it. 00:36:07.634 [2024-07-14 10:44:52.529585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.634 [2024-07-14 10:44:52.529616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.634 qpair failed and we were unable to recover it. 00:36:07.635 [2024-07-14 10:44:52.529741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.635 [2024-07-14 10:44:52.529772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.635 qpair failed and we were unable to recover it. 00:36:07.635 [2024-07-14 10:44:52.529964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.635 [2024-07-14 10:44:52.529995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.635 qpair failed and we were unable to recover it. 00:36:07.635 [2024-07-14 10:44:52.530109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.635 [2024-07-14 10:44:52.530139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.635 qpair failed and we were unable to recover it. 00:36:07.635 [2024-07-14 10:44:52.530421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.635 [2024-07-14 10:44:52.530455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.635 qpair failed and we were unable to recover it. 00:36:07.635 [2024-07-14 10:44:52.530666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.635 [2024-07-14 10:44:52.530696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.635 qpair failed and we were unable to recover it. 00:36:07.635 [2024-07-14 10:44:52.530884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.635 [2024-07-14 10:44:52.530914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.635 qpair failed and we were unable to recover it. 00:36:07.635 [2024-07-14 10:44:52.531191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.635 [2024-07-14 10:44:52.531221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.635 qpair failed and we were unable to recover it. 00:36:07.635 [2024-07-14 10:44:52.531448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.635 [2024-07-14 10:44:52.531478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.635 qpair failed and we were unable to recover it. 
00:36:07.635 [2024-07-14 10:44:52.531665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.635 [2024-07-14 10:44:52.531695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.635 qpair failed and we were unable to recover it. 00:36:07.635 [2024-07-14 10:44:52.531843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.635 [2024-07-14 10:44:52.531874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.635 qpair failed and we were unable to recover it. 00:36:07.635 [2024-07-14 10:44:52.532028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.635 [2024-07-14 10:44:52.532060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.635 qpair failed and we were unable to recover it. 00:36:07.635 [2024-07-14 10:44:52.532313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.635 [2024-07-14 10:44:52.532345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.635 qpair failed and we were unable to recover it. 00:36:07.635 [2024-07-14 10:44:52.532544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.635 [2024-07-14 10:44:52.532574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.635 qpair failed and we were unable to recover it. 00:36:07.635 [2024-07-14 10:44:52.532772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.635 [2024-07-14 10:44:52.532803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.635 qpair failed and we were unable to recover it. 00:36:07.635 [2024-07-14 10:44:52.533010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.635 [2024-07-14 10:44:52.533042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.635 qpair failed and we were unable to recover it. 00:36:07.635 [2024-07-14 10:44:52.533271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.635 [2024-07-14 10:44:52.533304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.635 qpair failed and we were unable to recover it. 00:36:07.635 [2024-07-14 10:44:52.533440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.635 [2024-07-14 10:44:52.533469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.635 qpair failed and we were unable to recover it. 00:36:07.635 [2024-07-14 10:44:52.533673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.635 [2024-07-14 10:44:52.533704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.635 qpair failed and we were unable to recover it. 
00:36:07.635 [2024-07-14 10:44:52.533839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.635 [2024-07-14 10:44:52.533870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.635 qpair failed and we were unable to recover it. 00:36:07.635 [2024-07-14 10:44:52.534014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.635 [2024-07-14 10:44:52.534045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.635 qpair failed and we were unable to recover it. 00:36:07.635 [2024-07-14 10:44:52.534248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.635 [2024-07-14 10:44:52.534279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.635 qpair failed and we were unable to recover it. 00:36:07.635 [2024-07-14 10:44:52.534482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.635 [2024-07-14 10:44:52.534511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.635 qpair failed and we were unable to recover it. 00:36:07.635 [2024-07-14 10:44:52.534655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.635 [2024-07-14 10:44:52.534687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.635 qpair failed and we were unable to recover it. 00:36:07.635 [2024-07-14 10:44:52.534846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.635 [2024-07-14 10:44:52.534877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.635 qpair failed and we were unable to recover it. 00:36:07.635 [2024-07-14 10:44:52.535068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.635 [2024-07-14 10:44:52.535099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.635 qpair failed and we were unable to recover it. 00:36:07.635 [2024-07-14 10:44:52.535237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.635 [2024-07-14 10:44:52.535268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.635 qpair failed and we were unable to recover it. 00:36:07.635 [2024-07-14 10:44:52.535490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.635 [2024-07-14 10:44:52.535521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.635 qpair failed and we were unable to recover it. 00:36:07.635 [2024-07-14 10:44:52.535738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.635 [2024-07-14 10:44:52.535769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.635 qpair failed and we were unable to recover it. 
00:36:07.635 [2024-07-14 10:44:52.535920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.635 [2024-07-14 10:44:52.535952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.635 qpair failed and we were unable to recover it. 00:36:07.635 [2024-07-14 10:44:52.536155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.635 [2024-07-14 10:44:52.536185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.635 qpair failed and we were unable to recover it. 00:36:07.635 [2024-07-14 10:44:52.536319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.635 [2024-07-14 10:44:52.536350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.635 qpair failed and we were unable to recover it. 00:36:07.635 [2024-07-14 10:44:52.536501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.635 [2024-07-14 10:44:52.536531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.635 qpair failed and we were unable to recover it. 00:36:07.635 [2024-07-14 10:44:52.536748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.635 [2024-07-14 10:44:52.536780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.635 qpair failed and we were unable to recover it. 00:36:07.635 [2024-07-14 10:44:52.537011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.635 [2024-07-14 10:44:52.537042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.635 qpair failed and we were unable to recover it. 00:36:07.635 [2024-07-14 10:44:52.537183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.635 [2024-07-14 10:44:52.537214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.635 qpair failed and we were unable to recover it. 00:36:07.635 [2024-07-14 10:44:52.537423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.635 [2024-07-14 10:44:52.537460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.635 qpair failed and we were unable to recover it. 00:36:07.635 [2024-07-14 10:44:52.537582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.635 [2024-07-14 10:44:52.537612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.635 qpair failed and we were unable to recover it. 00:36:07.635 [2024-07-14 10:44:52.537745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.635 [2024-07-14 10:44:52.537775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.635 qpair failed and we were unable to recover it. 
00:36:07.635 [2024-07-14 10:44:52.538040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.635 [2024-07-14 10:44:52.538071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.635 qpair failed and we were unable to recover it. 00:36:07.635 [2024-07-14 10:44:52.538220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.635 [2024-07-14 10:44:52.538290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.635 qpair failed and we were unable to recover it. 00:36:07.635 [2024-07-14 10:44:52.538486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.635 [2024-07-14 10:44:52.538517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.636 qpair failed and we were unable to recover it. 00:36:07.636 [2024-07-14 10:44:52.538738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.636 [2024-07-14 10:44:52.538770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.636 qpair failed and we were unable to recover it. 00:36:07.636 [2024-07-14 10:44:52.538903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.636 [2024-07-14 10:44:52.538933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.636 qpair failed and we were unable to recover it. 00:36:07.636 [2024-07-14 10:44:52.539129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.636 [2024-07-14 10:44:52.539160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.636 qpair failed and we were unable to recover it. 00:36:07.636 [2024-07-14 10:44:52.539347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.636 [2024-07-14 10:44:52.539378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.636 qpair failed and we were unable to recover it. 00:36:07.636 [2024-07-14 10:44:52.539525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.636 [2024-07-14 10:44:52.539558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.636 qpair failed and we were unable to recover it. 00:36:07.636 [2024-07-14 10:44:52.539694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.636 [2024-07-14 10:44:52.539723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.636 qpair failed and we were unable to recover it. 00:36:07.636 [2024-07-14 10:44:52.540019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.636 [2024-07-14 10:44:52.540051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.636 qpair failed and we were unable to recover it. 
00:36:07.636 [2024-07-14 10:44:52.540205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.636 [2024-07-14 10:44:52.540243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.636 qpair failed and we were unable to recover it. 00:36:07.636 [2024-07-14 10:44:52.540440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.636 [2024-07-14 10:44:52.540469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.636 qpair failed and we were unable to recover it. 00:36:07.636 [2024-07-14 10:44:52.540615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.636 [2024-07-14 10:44:52.540645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.636 qpair failed and we were unable to recover it. 00:36:07.636 [2024-07-14 10:44:52.540955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.636 [2024-07-14 10:44:52.540988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.636 qpair failed and we were unable to recover it. 00:36:07.636 [2024-07-14 10:44:52.541106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.636 [2024-07-14 10:44:52.541137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.636 qpair failed and we were unable to recover it. 00:36:07.636 [2024-07-14 10:44:52.541331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.636 [2024-07-14 10:44:52.541362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.636 qpair failed and we were unable to recover it. 00:36:07.636 [2024-07-14 10:44:52.541525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.636 [2024-07-14 10:44:52.541555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.636 qpair failed and we were unable to recover it. 00:36:07.636 [2024-07-14 10:44:52.541754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.636 [2024-07-14 10:44:52.541784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.636 qpair failed and we were unable to recover it. 00:36:07.636 [2024-07-14 10:44:52.541975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.636 [2024-07-14 10:44:52.542005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.636 qpair failed and we were unable to recover it. 00:36:07.636 [2024-07-14 10:44:52.542135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.636 [2024-07-14 10:44:52.542166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.636 qpair failed and we were unable to recover it. 
00:36:07.636 [2024-07-14 10:44:52.542326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.636 [2024-07-14 10:44:52.542360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.636 qpair failed and we were unable to recover it. 00:36:07.636 [2024-07-14 10:44:52.542596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.636 [2024-07-14 10:44:52.542628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.636 qpair failed and we were unable to recover it. 00:36:07.636 [2024-07-14 10:44:52.542820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.636 [2024-07-14 10:44:52.542850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.636 qpair failed and we were unable to recover it. 00:36:07.636 [2024-07-14 10:44:52.543165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.636 [2024-07-14 10:44:52.543253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.636 qpair failed and we were unable to recover it. 00:36:07.636 [2024-07-14 10:44:52.543556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.636 [2024-07-14 10:44:52.543591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.636 qpair failed and we were unable to recover it. 00:36:07.636 [2024-07-14 10:44:52.543882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.636 [2024-07-14 10:44:52.543914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.636 qpair failed and we were unable to recover it. 00:36:07.636 [2024-07-14 10:44:52.544060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.636 [2024-07-14 10:44:52.544091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.636 qpair failed and we were unable to recover it. 00:36:07.636 [2024-07-14 10:44:52.544238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.636 [2024-07-14 10:44:52.544270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.636 qpair failed and we were unable to recover it. 00:36:07.636 [2024-07-14 10:44:52.544422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.636 [2024-07-14 10:44:52.544454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.636 qpair failed and we were unable to recover it. 00:36:07.636 [2024-07-14 10:44:52.544594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.636 [2024-07-14 10:44:52.544625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.636 qpair failed and we were unable to recover it. 
00:36:07.636 [2024-07-14 10:44:52.544917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.636 [2024-07-14 10:44:52.544947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.636 qpair failed and we were unable to recover it. 00:36:07.636 [2024-07-14 10:44:52.545165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.636 [2024-07-14 10:44:52.545196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.636 qpair failed and we were unable to recover it. 00:36:07.636 [2024-07-14 10:44:52.545319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.636 [2024-07-14 10:44:52.545352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.636 qpair failed and we were unable to recover it. 00:36:07.636 [2024-07-14 10:44:52.545552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.636 [2024-07-14 10:44:52.545584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.636 qpair failed and we were unable to recover it. 00:36:07.636 [2024-07-14 10:44:52.545728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.636 [2024-07-14 10:44:52.545758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.636 qpair failed and we were unable to recover it. 00:36:07.636 [2024-07-14 10:44:52.545881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.636 [2024-07-14 10:44:52.545912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.636 qpair failed and we were unable to recover it. 00:36:07.636 [2024-07-14 10:44:52.546114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.636 [2024-07-14 10:44:52.546145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.636 qpair failed and we were unable to recover it. 00:36:07.636 [2024-07-14 10:44:52.546413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.636 [2024-07-14 10:44:52.546453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.636 qpair failed and we were unable to recover it. 00:36:07.636 [2024-07-14 10:44:52.546739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.636 [2024-07-14 10:44:52.546770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.636 qpair failed and we were unable to recover it. 00:36:07.636 [2024-07-14 10:44:52.546973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.636 [2024-07-14 10:44:52.547005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.636 qpair failed and we were unable to recover it. 
00:36:07.636 [2024-07-14 10:44:52.547137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.636 [2024-07-14 10:44:52.547167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.636 qpair failed and we were unable to recover it. 00:36:07.636 [2024-07-14 10:44:52.547303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.636 [2024-07-14 10:44:52.547337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.636 qpair failed and we were unable to recover it. 00:36:07.636 [2024-07-14 10:44:52.547486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.636 [2024-07-14 10:44:52.547517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.636 qpair failed and we were unable to recover it. 00:36:07.636 [2024-07-14 10:44:52.547653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.636 [2024-07-14 10:44:52.547684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.636 qpair failed and we were unable to recover it. 00:36:07.637 [2024-07-14 10:44:52.547837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.637 [2024-07-14 10:44:52.547868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.637 qpair failed and we were unable to recover it. 00:36:07.637 [2024-07-14 10:44:52.548145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.637 [2024-07-14 10:44:52.548176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.637 qpair failed and we were unable to recover it. 00:36:07.637 [2024-07-14 10:44:52.548331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.637 [2024-07-14 10:44:52.548363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.637 qpair failed and we were unable to recover it. 00:36:07.637 [2024-07-14 10:44:52.548512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.637 [2024-07-14 10:44:52.548542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.637 qpair failed and we were unable to recover it. 00:36:07.637 [2024-07-14 10:44:52.548764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.637 [2024-07-14 10:44:52.548794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.637 qpair failed and we were unable to recover it. 00:36:07.637 [2024-07-14 10:44:52.548934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.637 [2024-07-14 10:44:52.548965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.637 qpair failed and we were unable to recover it. 
00:36:07.637 [2024-07-14 10:44:52.549159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.637 [2024-07-14 10:44:52.549189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.637 qpair failed and we were unable to recover it. 00:36:07.637 [2024-07-14 10:44:52.549328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.637 [2024-07-14 10:44:52.549363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.637 qpair failed and we were unable to recover it. 00:36:07.637 [2024-07-14 10:44:52.549494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.637 [2024-07-14 10:44:52.549523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.637 qpair failed and we were unable to recover it. 00:36:07.637 [2024-07-14 10:44:52.549720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.637 [2024-07-14 10:44:52.549750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.637 qpair failed and we were unable to recover it. 00:36:07.637 [2024-07-14 10:44:52.549893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.637 [2024-07-14 10:44:52.549922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.637 qpair failed and we were unable to recover it. 00:36:07.637 [2024-07-14 10:44:52.550116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.637 [2024-07-14 10:44:52.550147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.637 qpair failed and we were unable to recover it. 00:36:07.637 [2024-07-14 10:44:52.550269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.637 [2024-07-14 10:44:52.550300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.637 qpair failed and we were unable to recover it. 00:36:07.637 [2024-07-14 10:44:52.550510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.637 [2024-07-14 10:44:52.550539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.637 qpair failed and we were unable to recover it. 00:36:07.637 [2024-07-14 10:44:52.550679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.637 [2024-07-14 10:44:52.550710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.637 qpair failed and we were unable to recover it. 00:36:07.637 [2024-07-14 10:44:52.550898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.637 [2024-07-14 10:44:52.550928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.637 qpair failed and we were unable to recover it. 
00:36:07.637 [2024-07-14 10:44:52.551064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.637 [2024-07-14 10:44:52.551094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.637 qpair failed and we were unable to recover it. 00:36:07.637 [2024-07-14 10:44:52.551286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.637 [2024-07-14 10:44:52.551317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.637 qpair failed and we were unable to recover it. 00:36:07.637 [2024-07-14 10:44:52.551513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.637 [2024-07-14 10:44:52.551544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.637 qpair failed and we were unable to recover it. 00:36:07.637 [2024-07-14 10:44:52.551663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.637 [2024-07-14 10:44:52.551694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.637 qpair failed and we were unable to recover it. 00:36:07.637 [2024-07-14 10:44:52.551900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.637 [2024-07-14 10:44:52.551937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.637 qpair failed and we were unable to recover it. 00:36:07.637 [2024-07-14 10:44:52.552089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.637 [2024-07-14 10:44:52.552118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.637 qpair failed and we were unable to recover it. 00:36:07.637 [2024-07-14 10:44:52.552265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.637 [2024-07-14 10:44:52.552297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.637 qpair failed and we were unable to recover it. 00:36:07.637 [2024-07-14 10:44:52.552444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.637 [2024-07-14 10:44:52.552475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.637 qpair failed and we were unable to recover it. 00:36:07.637 [2024-07-14 10:44:52.552671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.637 [2024-07-14 10:44:52.552700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.637 qpair failed and we were unable to recover it. 00:36:07.637 [2024-07-14 10:44:52.552904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.637 [2024-07-14 10:44:52.552934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.637 qpair failed and we were unable to recover it. 
00:36:07.637 [2024-07-14 10:44:52.553080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.637 [2024-07-14 10:44:52.553109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.637 qpair failed and we were unable to recover it. 00:36:07.637 [2024-07-14 10:44:52.553293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.637 [2024-07-14 10:44:52.553325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.637 qpair failed and we were unable to recover it. 00:36:07.637 [2024-07-14 10:44:52.553474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.637 [2024-07-14 10:44:52.553505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.637 qpair failed and we were unable to recover it. 00:36:07.637 [2024-07-14 10:44:52.553642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.637 [2024-07-14 10:44:52.553671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.637 qpair failed and we were unable to recover it. 00:36:07.637 [2024-07-14 10:44:52.553795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.637 [2024-07-14 10:44:52.553826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.637 qpair failed and we were unable to recover it. 00:36:07.637 [2024-07-14 10:44:52.554012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.637 [2024-07-14 10:44:52.554043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.637 qpair failed and we were unable to recover it. 00:36:07.637 [2024-07-14 10:44:52.554167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.637 [2024-07-14 10:44:52.554196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.637 qpair failed and we were unable to recover it. 00:36:07.637 [2024-07-14 10:44:52.554400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.637 [2024-07-14 10:44:52.554430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.637 qpair failed and we were unable to recover it. 00:36:07.637 [2024-07-14 10:44:52.554631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.637 [2024-07-14 10:44:52.554662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.637 qpair failed and we were unable to recover it. 00:36:07.637 [2024-07-14 10:44:52.554864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.637 [2024-07-14 10:44:52.554894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.637 qpair failed and we were unable to recover it. 
00:36:07.637 [2024-07-14 10:44:52.555013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.637 [2024-07-14 10:44:52.555044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.637 qpair failed and we were unable to recover it. 00:36:07.637 [2024-07-14 10:44:52.555255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.637 [2024-07-14 10:44:52.555286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.637 qpair failed and we were unable to recover it. 00:36:07.637 [2024-07-14 10:44:52.555484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.637 [2024-07-14 10:44:52.555513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.637 qpair failed and we were unable to recover it. 00:36:07.637 [2024-07-14 10:44:52.555703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.637 [2024-07-14 10:44:52.555733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.637 qpair failed and we were unable to recover it. 00:36:07.637 [2024-07-14 10:44:52.555863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.637 [2024-07-14 10:44:52.555894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.637 qpair failed and we were unable to recover it. 00:36:07.638 [2024-07-14 10:44:52.556085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.638 [2024-07-14 10:44:52.556116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.638 qpair failed and we were unable to recover it. 00:36:07.638 [2024-07-14 10:44:52.556345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.638 [2024-07-14 10:44:52.556377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.638 qpair failed and we were unable to recover it. 00:36:07.638 [2024-07-14 10:44:52.556525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.638 [2024-07-14 10:44:52.556555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.638 qpair failed and we were unable to recover it. 00:36:07.638 [2024-07-14 10:44:52.556682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.638 [2024-07-14 10:44:52.556712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.638 qpair failed and we were unable to recover it. 00:36:07.638 [2024-07-14 10:44:52.556904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.638 [2024-07-14 10:44:52.556935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.638 qpair failed and we were unable to recover it. 
00:36:07.638 [2024-07-14 10:44:52.557114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.638 [2024-07-14 10:44:52.557144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:07.638 qpair failed and we were unable to recover it. 00:36:07.638 [2024-07-14 10:44:52.557422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.638 [2024-07-14 10:44:52.557458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.638 qpair failed and we were unable to recover it. 00:36:07.638 [2024-07-14 10:44:52.557660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.638 [2024-07-14 10:44:52.557692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.638 qpair failed and we were unable to recover it. 00:36:07.638 [2024-07-14 10:44:52.557984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.638 [2024-07-14 10:44:52.558016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.638 qpair failed and we were unable to recover it. 00:36:07.638 [2024-07-14 10:44:52.558271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.638 [2024-07-14 10:44:52.558302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.638 qpair failed and we were unable to recover it. 00:36:07.638 [2024-07-14 10:44:52.558557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.638 [2024-07-14 10:44:52.558589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.638 qpair failed and we were unable to recover it. 00:36:07.638 [2024-07-14 10:44:52.558793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.638 [2024-07-14 10:44:52.558823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.638 qpair failed and we were unable to recover it. 00:36:07.638 [2024-07-14 10:44:52.559013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.638 [2024-07-14 10:44:52.559043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.638 qpair failed and we were unable to recover it. 00:36:07.638 [2024-07-14 10:44:52.559297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.638 [2024-07-14 10:44:52.559329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.638 qpair failed and we were unable to recover it. 00:36:07.638 [2024-07-14 10:44:52.559530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.638 [2024-07-14 10:44:52.559561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.638 qpair failed and we were unable to recover it. 
00:36:07.638 [2024-07-14 10:44:52.559840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.638 [2024-07-14 10:44:52.559871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.638 qpair failed and we were unable to recover it. 00:36:07.638 [2024-07-14 10:44:52.560131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.638 [2024-07-14 10:44:52.560163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.638 qpair failed and we were unable to recover it. 00:36:07.638 [2024-07-14 10:44:52.560364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.638 [2024-07-14 10:44:52.560396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.638 qpair failed and we were unable to recover it. 00:36:07.638 [2024-07-14 10:44:52.560537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.638 [2024-07-14 10:44:52.560568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.638 qpair failed and we were unable to recover it. 00:36:07.638 [2024-07-14 10:44:52.560756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.638 [2024-07-14 10:44:52.560787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.638 qpair failed and we were unable to recover it. 00:36:07.638 [2024-07-14 10:44:52.561063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.638 [2024-07-14 10:44:52.561094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.638 qpair failed and we were unable to recover it. 00:36:07.638 [2024-07-14 10:44:52.561339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.638 [2024-07-14 10:44:52.561371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.638 qpair failed and we were unable to recover it. 00:36:07.638 [2024-07-14 10:44:52.561591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.638 [2024-07-14 10:44:52.561622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.638 qpair failed and we were unable to recover it. 00:36:07.638 [2024-07-14 10:44:52.561777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.638 [2024-07-14 10:44:52.561807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.638 qpair failed and we were unable to recover it. 00:36:07.638 [2024-07-14 10:44:52.562036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.638 [2024-07-14 10:44:52.562068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.638 qpair failed and we were unable to recover it. 
00:36:07.638 [2024-07-14 10:44:52.562330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.638 [2024-07-14 10:44:52.562362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.638 qpair failed and we were unable to recover it. 00:36:07.638 [2024-07-14 10:44:52.562619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.638 [2024-07-14 10:44:52.562650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.638 qpair failed and we were unable to recover it. 00:36:07.638 [2024-07-14 10:44:52.562841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.638 [2024-07-14 10:44:52.562872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.638 qpair failed and we were unable to recover it. 00:36:07.638 [2024-07-14 10:44:52.563031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.638 [2024-07-14 10:44:52.563062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.638 qpair failed and we were unable to recover it. 00:36:07.638 [2024-07-14 10:44:52.563262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.638 [2024-07-14 10:44:52.563292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.638 qpair failed and we were unable to recover it. 00:36:07.638 [2024-07-14 10:44:52.563437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.638 [2024-07-14 10:44:52.563475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.638 qpair failed and we were unable to recover it. 00:36:07.638 [2024-07-14 10:44:52.563701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.638 [2024-07-14 10:44:52.563731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.638 qpair failed and we were unable to recover it. 00:36:07.638 [2024-07-14 10:44:52.563989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.638 [2024-07-14 10:44:52.564021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.638 qpair failed and we were unable to recover it. 00:36:07.638 [2024-07-14 10:44:52.564301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.638 [2024-07-14 10:44:52.564345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.638 qpair failed and we were unable to recover it. 00:36:07.638 [2024-07-14 10:44:52.564500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.638 [2024-07-14 10:44:52.564530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.638 qpair failed and we were unable to recover it. 
00:36:07.638 [2024-07-14 10:44:52.564660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.638 [2024-07-14 10:44:52.564691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.638 qpair failed and we were unable to recover it. 00:36:07.638 [2024-07-14 10:44:52.564842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.639 [2024-07-14 10:44:52.564873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.639 qpair failed and we were unable to recover it. 00:36:07.639 [2024-07-14 10:44:52.565109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.639 [2024-07-14 10:44:52.565140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.639 qpair failed and we were unable to recover it. 00:36:07.639 [2024-07-14 10:44:52.565440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.639 [2024-07-14 10:44:52.565472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.639 qpair failed and we were unable to recover it. 00:36:07.639 [2024-07-14 10:44:52.565753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.639 [2024-07-14 10:44:52.565784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.639 qpair failed and we were unable to recover it. 00:36:07.639 [2024-07-14 10:44:52.566014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.639 [2024-07-14 10:44:52.566045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.639 qpair failed and we were unable to recover it. 00:36:07.639 [2024-07-14 10:44:52.566197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.639 [2024-07-14 10:44:52.566238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.639 qpair failed and we were unable to recover it. 00:36:07.639 [2024-07-14 10:44:52.566359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.639 [2024-07-14 10:44:52.566391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.639 qpair failed and we were unable to recover it. 00:36:07.639 [2024-07-14 10:44:52.566551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.639 [2024-07-14 10:44:52.566583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.639 qpair failed and we were unable to recover it. 00:36:07.639 [2024-07-14 10:44:52.566796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.639 [2024-07-14 10:44:52.566827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.639 qpair failed and we were unable to recover it. 
00:36:07.639 [2024-07-14 10:44:52.567042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.639 [2024-07-14 10:44:52.567073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.639 qpair failed and we were unable to recover it. 00:36:07.639 [2024-07-14 10:44:52.567380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.639 [2024-07-14 10:44:52.567412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.639 qpair failed and we were unable to recover it. 00:36:07.639 [2024-07-14 10:44:52.567552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.639 [2024-07-14 10:44:52.567583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.639 qpair failed and we were unable to recover it. 00:36:07.639 [2024-07-14 10:44:52.567914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.639 [2024-07-14 10:44:52.567944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.639 qpair failed and we were unable to recover it. 00:36:07.639 [2024-07-14 10:44:52.568161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.639 [2024-07-14 10:44:52.568192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.639 qpair failed and we were unable to recover it. 00:36:07.639 [2024-07-14 10:44:52.568419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.639 [2024-07-14 10:44:52.568451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.639 qpair failed and we were unable to recover it. 00:36:07.639 [2024-07-14 10:44:52.568614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.639 [2024-07-14 10:44:52.568645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.639 qpair failed and we were unable to recover it. 00:36:07.639 [2024-07-14 10:44:52.568775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.639 [2024-07-14 10:44:52.568805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.639 qpair failed and we were unable to recover it. 00:36:07.639 [2024-07-14 10:44:52.569067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.639 [2024-07-14 10:44:52.569097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.639 qpair failed and we were unable to recover it. 00:36:07.639 [2024-07-14 10:44:52.569338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.639 [2024-07-14 10:44:52.569370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.639 qpair failed and we were unable to recover it. 
00:36:07.639 [2024-07-14 10:44:52.569576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.639 [2024-07-14 10:44:52.569607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.639 qpair failed and we were unable to recover it. 00:36:07.639 [2024-07-14 10:44:52.569757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.639 [2024-07-14 10:44:52.569787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.639 qpair failed and we were unable to recover it. 00:36:07.639 [2024-07-14 10:44:52.570003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.639 [2024-07-14 10:44:52.570034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.639 qpair failed and we were unable to recover it. 00:36:07.639 [2024-07-14 10:44:52.570351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.639 [2024-07-14 10:44:52.570384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.639 qpair failed and we were unable to recover it. 00:36:07.639 [2024-07-14 10:44:52.570519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.639 [2024-07-14 10:44:52.570549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.639 qpair failed and we were unable to recover it. 00:36:07.639 [2024-07-14 10:44:52.570866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.639 [2024-07-14 10:44:52.570902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.639 qpair failed and we were unable to recover it. 00:36:07.639 [2024-07-14 10:44:52.571102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.639 [2024-07-14 10:44:52.571132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.639 qpair failed and we were unable to recover it. 00:36:07.916 [2024-07-14 10:44:52.571406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.916 [2024-07-14 10:44:52.571440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.916 qpair failed and we were unable to recover it. 00:36:07.916 [2024-07-14 10:44:52.571683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.916 [2024-07-14 10:44:52.571721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.916 qpair failed and we were unable to recover it. 00:36:07.916 [2024-07-14 10:44:52.571955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.916 [2024-07-14 10:44:52.571986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.916 qpair failed and we were unable to recover it. 
00:36:07.916 [2024-07-14 10:44:52.572197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.916 [2024-07-14 10:44:52.572237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.916 qpair failed and we were unable to recover it. 00:36:07.916 [2024-07-14 10:44:52.572477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.916 [2024-07-14 10:44:52.572508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.916 qpair failed and we were unable to recover it. 00:36:07.916 [2024-07-14 10:44:52.572820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.916 [2024-07-14 10:44:52.572851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.916 qpair failed and we were unable to recover it. 00:36:07.916 [2024-07-14 10:44:52.573120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.916 [2024-07-14 10:44:52.573151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.916 qpair failed and we were unable to recover it. 00:36:07.916 [2024-07-14 10:44:52.573307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.916 [2024-07-14 10:44:52.573338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.916 qpair failed and we were unable to recover it. 00:36:07.916 [2024-07-14 10:44:52.573502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.916 [2024-07-14 10:44:52.573533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.916 qpair failed and we were unable to recover it. 00:36:07.916 [2024-07-14 10:44:52.573676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.916 [2024-07-14 10:44:52.573707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.916 qpair failed and we were unable to recover it. 00:36:07.916 [2024-07-14 10:44:52.573935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.916 [2024-07-14 10:44:52.573967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.916 qpair failed and we were unable to recover it. 00:36:07.916 [2024-07-14 10:44:52.574151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.916 [2024-07-14 10:44:52.574182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.916 qpair failed and we were unable to recover it. 00:36:07.916 [2024-07-14 10:44:52.574328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.916 [2024-07-14 10:44:52.574360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.916 qpair failed and we were unable to recover it. 
00:36:07.916 [2024-07-14 10:44:52.574568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.916 [2024-07-14 10:44:52.574600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.916 qpair failed and we were unable to recover it. 00:36:07.916 [2024-07-14 10:44:52.574813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.916 [2024-07-14 10:44:52.574844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.916 qpair failed and we were unable to recover it. 00:36:07.916 [2024-07-14 10:44:52.575120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.916 [2024-07-14 10:44:52.575150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.916 qpair failed and we were unable to recover it. 00:36:07.916 [2024-07-14 10:44:52.575442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.916 [2024-07-14 10:44:52.575475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.916 qpair failed and we were unable to recover it. 00:36:07.916 [2024-07-14 10:44:52.575685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.916 [2024-07-14 10:44:52.575716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.916 qpair failed and we were unable to recover it. 00:36:07.916 [2024-07-14 10:44:52.576060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.916 [2024-07-14 10:44:52.576091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.916 qpair failed and we were unable to recover it. 00:36:07.916 [2024-07-14 10:44:52.576344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.916 [2024-07-14 10:44:52.576375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.916 qpair failed and we were unable to recover it. 00:36:07.916 [2024-07-14 10:44:52.576537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.916 [2024-07-14 10:44:52.576568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.916 qpair failed and we were unable to recover it. 00:36:07.916 [2024-07-14 10:44:52.576686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.916 [2024-07-14 10:44:52.576717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.916 qpair failed and we were unable to recover it. 00:36:07.917 [2024-07-14 10:44:52.576919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.917 [2024-07-14 10:44:52.576951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.917 qpair failed and we were unable to recover it. 
00:36:07.917 [2024-07-14 10:44:52.577075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.917 [2024-07-14 10:44:52.577105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.917 qpair failed and we were unable to recover it. 00:36:07.917 [2024-07-14 10:44:52.577291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.917 [2024-07-14 10:44:52.577324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.917 qpair failed and we were unable to recover it. 00:36:07.917 [2024-07-14 10:44:52.577486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.917 [2024-07-14 10:44:52.577523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.917 qpair failed and we were unable to recover it. 00:36:07.917 [2024-07-14 10:44:52.577733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.917 [2024-07-14 10:44:52.577764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.917 qpair failed and we were unable to recover it. 00:36:07.917 [2024-07-14 10:44:52.578033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.917 [2024-07-14 10:44:52.578064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.917 qpair failed and we were unable to recover it. 00:36:07.917 [2024-07-14 10:44:52.578266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.917 [2024-07-14 10:44:52.578297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.917 qpair failed and we were unable to recover it. 00:36:07.917 [2024-07-14 10:44:52.578447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.917 [2024-07-14 10:44:52.578478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.917 qpair failed and we were unable to recover it. 00:36:07.917 [2024-07-14 10:44:52.578713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.917 [2024-07-14 10:44:52.578744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.917 qpair failed and we were unable to recover it. 00:36:07.917 [2024-07-14 10:44:52.578899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.917 [2024-07-14 10:44:52.578929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.917 qpair failed and we were unable to recover it. 00:36:07.917 [2024-07-14 10:44:52.579205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.917 [2024-07-14 10:44:52.579247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.917 qpair failed and we were unable to recover it. 
00:36:07.917 [2024-07-14 10:44:52.579392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.917 [2024-07-14 10:44:52.579423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.917 qpair failed and we were unable to recover it. 00:36:07.917 [2024-07-14 10:44:52.579628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.917 [2024-07-14 10:44:52.579660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.917 qpair failed and we were unable to recover it. 00:36:07.917 [2024-07-14 10:44:52.579846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.917 [2024-07-14 10:44:52.579876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.917 qpair failed and we were unable to recover it. 00:36:07.917 [2024-07-14 10:44:52.580206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.917 [2024-07-14 10:44:52.580247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.917 qpair failed and we were unable to recover it. 00:36:07.917 [2024-07-14 10:44:52.580401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.917 [2024-07-14 10:44:52.580432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.917 qpair failed and we were unable to recover it. 00:36:07.917 [2024-07-14 10:44:52.580594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.917 [2024-07-14 10:44:52.580625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.917 qpair failed and we were unable to recover it. 00:36:07.917 [2024-07-14 10:44:52.580844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.917 [2024-07-14 10:44:52.580875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.917 qpair failed and we were unable to recover it. 00:36:07.917 [2024-07-14 10:44:52.581082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.917 [2024-07-14 10:44:52.581112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.917 qpair failed and we were unable to recover it. 00:36:07.917 [2024-07-14 10:44:52.581257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.917 [2024-07-14 10:44:52.581290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.917 qpair failed and we were unable to recover it. 00:36:07.917 [2024-07-14 10:44:52.581551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.917 [2024-07-14 10:44:52.581583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.917 qpair failed and we were unable to recover it. 
00:36:07.917 [2024-07-14 10:44:52.581777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.917 [2024-07-14 10:44:52.581808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.917 qpair failed and we were unable to recover it. 00:36:07.917 [2024-07-14 10:44:52.582119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.917 [2024-07-14 10:44:52.582150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.917 qpair failed and we were unable to recover it. 00:36:07.917 [2024-07-14 10:44:52.582366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.917 [2024-07-14 10:44:52.582398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.917 qpair failed and we were unable to recover it. 00:36:07.917 [2024-07-14 10:44:52.582547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.917 [2024-07-14 10:44:52.582578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.917 qpair failed and we were unable to recover it. 00:36:07.917 [2024-07-14 10:44:52.582814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.917 [2024-07-14 10:44:52.582845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.917 qpair failed and we were unable to recover it. 00:36:07.917 [2024-07-14 10:44:52.583051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.917 [2024-07-14 10:44:52.583081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.917 qpair failed and we were unable to recover it. 00:36:07.917 [2024-07-14 10:44:52.583313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.917 [2024-07-14 10:44:52.583344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.917 qpair failed and we were unable to recover it. 00:36:07.917 [2024-07-14 10:44:52.583559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.917 [2024-07-14 10:44:52.583590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.917 qpair failed and we were unable to recover it. 00:36:07.917 [2024-07-14 10:44:52.583802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.917 [2024-07-14 10:44:52.583833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.917 qpair failed and we were unable to recover it. 00:36:07.917 [2024-07-14 10:44:52.584054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.917 [2024-07-14 10:44:52.584085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.917 qpair failed and we were unable to recover it. 
00:36:07.917 [2024-07-14 10:44:52.584380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.917 [2024-07-14 10:44:52.584413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.917 qpair failed and we were unable to recover it. 00:36:07.917 [2024-07-14 10:44:52.584557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.917 [2024-07-14 10:44:52.584589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.917 qpair failed and we were unable to recover it. 00:36:07.917 [2024-07-14 10:44:52.584824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.917 [2024-07-14 10:44:52.584854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.917 qpair failed and we were unable to recover it. 00:36:07.917 [2024-07-14 10:44:52.585061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.917 [2024-07-14 10:44:52.585091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.917 qpair failed and we were unable to recover it. 00:36:07.917 [2024-07-14 10:44:52.585234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.917 [2024-07-14 10:44:52.585265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.917 qpair failed and we were unable to recover it. 00:36:07.917 [2024-07-14 10:44:52.585479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.917 [2024-07-14 10:44:52.585510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.917 qpair failed and we were unable to recover it. 00:36:07.917 [2024-07-14 10:44:52.585790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.917 [2024-07-14 10:44:52.585820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.917 qpair failed and we were unable to recover it. 00:36:07.917 [2024-07-14 10:44:52.586077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.917 [2024-07-14 10:44:52.586109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.917 qpair failed and we were unable to recover it. 00:36:07.917 [2024-07-14 10:44:52.586396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.917 [2024-07-14 10:44:52.586428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.917 qpair failed and we were unable to recover it. 00:36:07.917 [2024-07-14 10:44:52.586662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.917 [2024-07-14 10:44:52.586694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.917 qpair failed and we were unable to recover it. 
00:36:07.918 [2024-07-14 10:44:52.586926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.918 [2024-07-14 10:44:52.586956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.918 qpair failed and we were unable to recover it. 00:36:07.918 [2024-07-14 10:44:52.587154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.918 [2024-07-14 10:44:52.587184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.918 qpair failed and we were unable to recover it. 00:36:07.918 [2024-07-14 10:44:52.587421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.918 [2024-07-14 10:44:52.587453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.918 qpair failed and we were unable to recover it. 00:36:07.918 [2024-07-14 10:44:52.587625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.918 [2024-07-14 10:44:52.587657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.918 qpair failed and we were unable to recover it. 00:36:07.918 [2024-07-14 10:44:52.587961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.918 [2024-07-14 10:44:52.587991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.918 qpair failed and we were unable to recover it. 00:36:07.918 [2024-07-14 10:44:52.588176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.918 [2024-07-14 10:44:52.588207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.918 qpair failed and we were unable to recover it. 00:36:07.918 [2024-07-14 10:44:52.588437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.918 [2024-07-14 10:44:52.588469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.918 qpair failed and we were unable to recover it. 00:36:07.918 [2024-07-14 10:44:52.588636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.918 [2024-07-14 10:44:52.588667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.918 qpair failed and we were unable to recover it. 00:36:07.918 [2024-07-14 10:44:52.588795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.918 [2024-07-14 10:44:52.588826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.918 qpair failed and we were unable to recover it. 00:36:07.918 [2024-07-14 10:44:52.589028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.918 [2024-07-14 10:44:52.589059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.918 qpair failed and we were unable to recover it. 
00:36:07.918 [2024-07-14 10:44:52.589285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.918 [2024-07-14 10:44:52.589317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.918 qpair failed and we were unable to recover it. 00:36:07.918 [2024-07-14 10:44:52.589466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.918 [2024-07-14 10:44:52.589497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.918 qpair failed and we were unable to recover it. 00:36:07.918 [2024-07-14 10:44:52.589754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.918 [2024-07-14 10:44:52.589784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.918 qpair failed and we were unable to recover it. 00:36:07.918 [2024-07-14 10:44:52.589919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.918 [2024-07-14 10:44:52.589950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.918 qpair failed and we were unable to recover it. 00:36:07.918 [2024-07-14 10:44:52.590223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.918 [2024-07-14 10:44:52.590267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.918 qpair failed and we were unable to recover it. 00:36:07.918 [2024-07-14 10:44:52.590482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.918 [2024-07-14 10:44:52.590513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.918 qpair failed and we were unable to recover it. 00:36:07.918 [2024-07-14 10:44:52.590724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.918 [2024-07-14 10:44:52.590755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.918 qpair failed and we were unable to recover it. 00:36:07.918 [2024-07-14 10:44:52.591078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.918 [2024-07-14 10:44:52.591108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.918 qpair failed and we were unable to recover it. 00:36:07.918 [2024-07-14 10:44:52.591301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.918 [2024-07-14 10:44:52.591333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.918 qpair failed and we were unable to recover it. 00:36:07.918 [2024-07-14 10:44:52.591550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.918 [2024-07-14 10:44:52.591581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.918 qpair failed and we were unable to recover it. 
00:36:07.918 [2024-07-14 10:44:52.591795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.918 [2024-07-14 10:44:52.591826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.918 qpair failed and we were unable to recover it. 00:36:07.918 [2024-07-14 10:44:52.591984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.918 [2024-07-14 10:44:52.592016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.918 qpair failed and we were unable to recover it. 00:36:07.918 [2024-07-14 10:44:52.592170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.918 [2024-07-14 10:44:52.592200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.918 qpair failed and we were unable to recover it. 00:36:07.918 [2024-07-14 10:44:52.592517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.918 [2024-07-14 10:44:52.592549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.918 qpair failed and we were unable to recover it. 00:36:07.918 [2024-07-14 10:44:52.592760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.918 [2024-07-14 10:44:52.592792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.918 qpair failed and we were unable to recover it. 00:36:07.918 [2024-07-14 10:44:52.592974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.918 [2024-07-14 10:44:52.593005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.918 qpair failed and we were unable to recover it. 00:36:07.918 [2024-07-14 10:44:52.593204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.918 [2024-07-14 10:44:52.593244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.918 qpair failed and we were unable to recover it. 00:36:07.918 [2024-07-14 10:44:52.593461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.918 [2024-07-14 10:44:52.593494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.918 qpair failed and we were unable to recover it. 00:36:07.918 [2024-07-14 10:44:52.593653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.918 [2024-07-14 10:44:52.593686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.918 qpair failed and we were unable to recover it. 00:36:07.918 [2024-07-14 10:44:52.593929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.918 [2024-07-14 10:44:52.593960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.918 qpair failed and we were unable to recover it. 
00:36:07.918 [2024-07-14 10:44:52.594244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.918 [2024-07-14 10:44:52.594281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.918 qpair failed and we were unable to recover it. 00:36:07.918 [2024-07-14 10:44:52.594440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.918 [2024-07-14 10:44:52.594471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.918 qpair failed and we were unable to recover it. 00:36:07.918 [2024-07-14 10:44:52.594630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.918 [2024-07-14 10:44:52.594661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.918 qpair failed and we were unable to recover it. 00:36:07.918 [2024-07-14 10:44:52.594852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.918 [2024-07-14 10:44:52.594883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.918 qpair failed and we were unable to recover it. 00:36:07.918 [2024-07-14 10:44:52.595103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.918 [2024-07-14 10:44:52.595134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.918 qpair failed and we were unable to recover it. 00:36:07.918 [2024-07-14 10:44:52.595345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.918 [2024-07-14 10:44:52.595377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.918 qpair failed and we were unable to recover it. 00:36:07.918 [2024-07-14 10:44:52.595540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.918 [2024-07-14 10:44:52.595570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.918 qpair failed and we were unable to recover it. 00:36:07.918 [2024-07-14 10:44:52.595728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.918 [2024-07-14 10:44:52.595758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.918 qpair failed and we were unable to recover it. 00:36:07.918 [2024-07-14 10:44:52.596033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.918 [2024-07-14 10:44:52.596063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.918 qpair failed and we were unable to recover it. 00:36:07.918 [2024-07-14 10:44:52.596276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.918 [2024-07-14 10:44:52.596309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.918 qpair failed and we were unable to recover it. 
00:36:07.918 [2024-07-14 10:44:52.596518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.918 [2024-07-14 10:44:52.596549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.918 qpair failed and we were unable to recover it. 00:36:07.918 [2024-07-14 10:44:52.596674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-07-14 10:44:52.596705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 00:36:07.919 [2024-07-14 10:44:52.596943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-07-14 10:44:52.596973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 00:36:07.919 [2024-07-14 10:44:52.597277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-07-14 10:44:52.597309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 00:36:07.919 [2024-07-14 10:44:52.597586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-07-14 10:44:52.597617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 00:36:07.919 [2024-07-14 10:44:52.597777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-07-14 10:44:52.597807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 00:36:07.919 [2024-07-14 10:44:52.597995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-07-14 10:44:52.598026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 00:36:07.919 [2024-07-14 10:44:52.598221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-07-14 10:44:52.598275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 00:36:07.919 [2024-07-14 10:44:52.598432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-07-14 10:44:52.598462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 00:36:07.919 [2024-07-14 10:44:52.598663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-07-14 10:44:52.598694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 
00:36:07.919 [2024-07-14 10:44:52.598843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-07-14 10:44:52.598874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 00:36:07.919 [2024-07-14 10:44:52.599070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-07-14 10:44:52.599100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 00:36:07.919 [2024-07-14 10:44:52.599335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-07-14 10:44:52.599367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 00:36:07.919 [2024-07-14 10:44:52.599488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-07-14 10:44:52.599518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 00:36:07.919 [2024-07-14 10:44:52.599650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-07-14 10:44:52.599681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 00:36:07.919 [2024-07-14 10:44:52.599994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-07-14 10:44:52.600025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 00:36:07.919 [2024-07-14 10:44:52.600209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-07-14 10:44:52.600250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 00:36:07.919 [2024-07-14 10:44:52.600467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-07-14 10:44:52.600503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 00:36:07.919 [2024-07-14 10:44:52.600714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-07-14 10:44:52.600745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 00:36:07.919 [2024-07-14 10:44:52.600967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-07-14 10:44:52.600998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 
00:36:07.919 [2024-07-14 10:44:52.601281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-07-14 10:44:52.601313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 00:36:07.919 [2024-07-14 10:44:52.601478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-07-14 10:44:52.601508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 00:36:07.919 [2024-07-14 10:44:52.601719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-07-14 10:44:52.601749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 00:36:07.919 [2024-07-14 10:44:52.601974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-07-14 10:44:52.602005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 00:36:07.919 [2024-07-14 10:44:52.602272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-07-14 10:44:52.602303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 00:36:07.919 [2024-07-14 10:44:52.602509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-07-14 10:44:52.602539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 00:36:07.919 [2024-07-14 10:44:52.602659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-07-14 10:44:52.602690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 00:36:07.919 [2024-07-14 10:44:52.602910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-07-14 10:44:52.602942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 00:36:07.919 [2024-07-14 10:44:52.603151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-07-14 10:44:52.603182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 00:36:07.919 [2024-07-14 10:44:52.603378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-07-14 10:44:52.603410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 
00:36:07.919 [2024-07-14 10:44:52.603623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-07-14 10:44:52.603654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 00:36:07.919 [2024-07-14 10:44:52.603959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-07-14 10:44:52.603991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 00:36:07.919 [2024-07-14 10:44:52.604283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-07-14 10:44:52.604315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 00:36:07.919 [2024-07-14 10:44:52.604479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-07-14 10:44:52.604513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 00:36:07.919 [2024-07-14 10:44:52.604775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-07-14 10:44:52.604805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 00:36:07.919 [2024-07-14 10:44:52.605113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-07-14 10:44:52.605144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 00:36:07.919 [2024-07-14 10:44:52.605425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-07-14 10:44:52.605458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 00:36:07.919 [2024-07-14 10:44:52.605676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-07-14 10:44:52.605706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 00:36:07.919 [2024-07-14 10:44:52.605927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-07-14 10:44:52.605957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 00:36:07.919 [2024-07-14 10:44:52.606147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-07-14 10:44:52.606177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 
00:36:07.919 [2024-07-14 10:44:52.606382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-07-14 10:44:52.606413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 00:36:07.919 [2024-07-14 10:44:52.606552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-07-14 10:44:52.606583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 00:36:07.919 [2024-07-14 10:44:52.606814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.920 [2024-07-14 10:44:52.606845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.920 qpair failed and we were unable to recover it. 00:36:07.920 [2024-07-14 10:44:52.607058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.920 [2024-07-14 10:44:52.607089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.920 qpair failed and we were unable to recover it. 00:36:07.920 [2024-07-14 10:44:52.607342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.920 [2024-07-14 10:44:52.607374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.920 qpair failed and we were unable to recover it. 00:36:07.920 [2024-07-14 10:44:52.607577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.920 [2024-07-14 10:44:52.607608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.920 qpair failed and we were unable to recover it. 00:36:07.920 [2024-07-14 10:44:52.607934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.920 [2024-07-14 10:44:52.607965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.920 qpair failed and we were unable to recover it. 00:36:07.920 [2024-07-14 10:44:52.608265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.920 [2024-07-14 10:44:52.608298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.920 qpair failed and we were unable to recover it. 00:36:07.920 [2024-07-14 10:44:52.608530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.920 [2024-07-14 10:44:52.608560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.920 qpair failed and we were unable to recover it. 00:36:07.920 [2024-07-14 10:44:52.608722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.920 [2024-07-14 10:44:52.608753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.920 qpair failed and we were unable to recover it. 
00:36:07.920 [2024-07-14 10:44:52.609022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.920 [2024-07-14 10:44:52.609053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.920 qpair failed and we were unable to recover it. 00:36:07.920 [2024-07-14 10:44:52.609332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.920 [2024-07-14 10:44:52.609364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.920 qpair failed and we were unable to recover it. 00:36:07.920 [2024-07-14 10:44:52.609624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.920 [2024-07-14 10:44:52.609655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.920 qpair failed and we were unable to recover it. 00:36:07.920 [2024-07-14 10:44:52.609785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.920 [2024-07-14 10:44:52.609815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.920 qpair failed and we were unable to recover it. 00:36:07.920 [2024-07-14 10:44:52.610111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.920 [2024-07-14 10:44:52.610142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.920 qpair failed and we were unable to recover it. 00:36:07.920 [2024-07-14 10:44:52.610437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.920 [2024-07-14 10:44:52.610469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.920 qpair failed and we were unable to recover it. 00:36:07.920 [2024-07-14 10:44:52.610749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.920 [2024-07-14 10:44:52.610779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.920 qpair failed and we were unable to recover it. 00:36:07.920 [2024-07-14 10:44:52.610996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.920 [2024-07-14 10:44:52.611027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.920 qpair failed and we were unable to recover it. 00:36:07.920 [2024-07-14 10:44:52.611243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.920 [2024-07-14 10:44:52.611276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.920 qpair failed and we were unable to recover it. 00:36:07.920 [2024-07-14 10:44:52.611474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.920 [2024-07-14 10:44:52.611505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.920 qpair failed and we were unable to recover it. 
00:36:07.920 [2024-07-14 10:44:52.611741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.920 [2024-07-14 10:44:52.611770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.920 qpair failed and we were unable to recover it. 00:36:07.920 [2024-07-14 10:44:52.612076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.920 [2024-07-14 10:44:52.612106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.920 qpair failed and we were unable to recover it. 00:36:07.920 [2024-07-14 10:44:52.612366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.920 [2024-07-14 10:44:52.612398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.920 qpair failed and we were unable to recover it. 00:36:07.920 [2024-07-14 10:44:52.612721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.920 [2024-07-14 10:44:52.612752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.920 qpair failed and we were unable to recover it. 00:36:07.920 [2024-07-14 10:44:52.613018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.920 [2024-07-14 10:44:52.613048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.920 qpair failed and we were unable to recover it. 00:36:07.920 [2024-07-14 10:44:52.613365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.920 [2024-07-14 10:44:52.613396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.920 qpair failed and we were unable to recover it. 00:36:07.920 [2024-07-14 10:44:52.613662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.920 [2024-07-14 10:44:52.613693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.920 qpair failed and we were unable to recover it. 00:36:07.920 [2024-07-14 10:44:52.613855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.920 [2024-07-14 10:44:52.613886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.920 qpair failed and we were unable to recover it. 00:36:07.920 [2024-07-14 10:44:52.614084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.920 [2024-07-14 10:44:52.614115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.920 qpair failed and we were unable to recover it. 00:36:07.920 [2024-07-14 10:44:52.614393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.920 [2024-07-14 10:44:52.614426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.920 qpair failed and we were unable to recover it. 
00:36:07.920 [2024-07-14 10:44:52.614705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.920 [2024-07-14 10:44:52.614735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.920 qpair failed and we were unable to recover it. 00:36:07.920 [2024-07-14 10:44:52.615073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.920 [2024-07-14 10:44:52.615104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.920 qpair failed and we were unable to recover it. 00:36:07.920 [2024-07-14 10:44:52.615448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.920 [2024-07-14 10:44:52.615480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.920 qpair failed and we were unable to recover it. 00:36:07.920 [2024-07-14 10:44:52.615760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.920 [2024-07-14 10:44:52.615791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.920 qpair failed and we were unable to recover it. 00:36:07.920 [2024-07-14 10:44:52.616086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.920 [2024-07-14 10:44:52.616117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.920 qpair failed and we were unable to recover it. 00:36:07.920 [2024-07-14 10:44:52.616409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.920 [2024-07-14 10:44:52.616441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.920 qpair failed and we were unable to recover it. 00:36:07.920 [2024-07-14 10:44:52.616725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.920 [2024-07-14 10:44:52.616756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 00:36:07.921 [2024-07-14 10:44:52.617034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.921 [2024-07-14 10:44:52.617064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 00:36:07.921 [2024-07-14 10:44:52.617266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.921 [2024-07-14 10:44:52.617298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 00:36:07.921 [2024-07-14 10:44:52.617515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.921 [2024-07-14 10:44:52.617545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 
00:36:07.921 [2024-07-14 10:44:52.617823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.921 [2024-07-14 10:44:52.617854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 00:36:07.921 [2024-07-14 10:44:52.618042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.921 [2024-07-14 10:44:52.618072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 00:36:07.921 [2024-07-14 10:44:52.618282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.921 [2024-07-14 10:44:52.618314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 00:36:07.921 [2024-07-14 10:44:52.618473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.921 [2024-07-14 10:44:52.618503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 00:36:07.921 [2024-07-14 10:44:52.618722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.921 [2024-07-14 10:44:52.618753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 00:36:07.921 [2024-07-14 10:44:52.618950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.921 [2024-07-14 10:44:52.618985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 00:36:07.921 [2024-07-14 10:44:52.619174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.921 [2024-07-14 10:44:52.619205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 00:36:07.921 [2024-07-14 10:44:52.619428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.921 [2024-07-14 10:44:52.619459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 00:36:07.921 [2024-07-14 10:44:52.619737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.921 [2024-07-14 10:44:52.619768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 00:36:07.921 [2024-07-14 10:44:52.619973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.921 [2024-07-14 10:44:52.620004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 
00:36:07.921 [2024-07-14 10:44:52.620250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.921 [2024-07-14 10:44:52.620282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 00:36:07.921 [2024-07-14 10:44:52.620495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.921 [2024-07-14 10:44:52.620526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 00:36:07.921 [2024-07-14 10:44:52.620683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.921 [2024-07-14 10:44:52.620713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 00:36:07.921 [2024-07-14 10:44:52.620863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.921 [2024-07-14 10:44:52.620894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 00:36:07.921 [2024-07-14 10:44:52.621102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.921 [2024-07-14 10:44:52.621133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 00:36:07.921 [2024-07-14 10:44:52.621360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.921 [2024-07-14 10:44:52.621391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 00:36:07.921 [2024-07-14 10:44:52.621650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.921 [2024-07-14 10:44:52.621681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 00:36:07.921 [2024-07-14 10:44:52.622046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.921 [2024-07-14 10:44:52.622077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 00:36:07.921 [2024-07-14 10:44:52.622289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.921 [2024-07-14 10:44:52.622322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 00:36:07.921 [2024-07-14 10:44:52.622611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.921 [2024-07-14 10:44:52.622641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 
00:36:07.921 [2024-07-14 10:44:52.622849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.921 [2024-07-14 10:44:52.622879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 00:36:07.921 [2024-07-14 10:44:52.623065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.921 [2024-07-14 10:44:52.623096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 00:36:07.921 [2024-07-14 10:44:52.623361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.921 [2024-07-14 10:44:52.623392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 00:36:07.921 [2024-07-14 10:44:52.623580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.921 [2024-07-14 10:44:52.623611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 00:36:07.921 [2024-07-14 10:44:52.623809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.921 [2024-07-14 10:44:52.623839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 00:36:07.921 [2024-07-14 10:44:52.624119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.921 [2024-07-14 10:44:52.624149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 00:36:07.921 [2024-07-14 10:44:52.624385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.921 [2024-07-14 10:44:52.624417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 00:36:07.921 [2024-07-14 10:44:52.624634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.921 [2024-07-14 10:44:52.624665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 00:36:07.921 [2024-07-14 10:44:52.624868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.921 [2024-07-14 10:44:52.624898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 00:36:07.921 [2024-07-14 10:44:52.625175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.921 [2024-07-14 10:44:52.625206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 
00:36:07.921 [2024-07-14 10:44:52.625415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.921 [2024-07-14 10:44:52.625447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 00:36:07.921 [2024-07-14 10:44:52.625710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.921 [2024-07-14 10:44:52.625741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 00:36:07.921 [2024-07-14 10:44:52.625996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.921 [2024-07-14 10:44:52.626032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 00:36:07.921 [2024-07-14 10:44:52.626293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.921 [2024-07-14 10:44:52.626325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 00:36:07.921 [2024-07-14 10:44:52.626583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.921 [2024-07-14 10:44:52.626614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 00:36:07.921 [2024-07-14 10:44:52.626747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.921 [2024-07-14 10:44:52.626777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 00:36:07.921 [2024-07-14 10:44:52.627000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.921 [2024-07-14 10:44:52.627030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 00:36:07.921 [2024-07-14 10:44:52.627359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.921 [2024-07-14 10:44:52.627391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 00:36:07.922 [2024-07-14 10:44:52.627605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.922 [2024-07-14 10:44:52.627635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.922 qpair failed and we were unable to recover it. 00:36:07.922 [2024-07-14 10:44:52.627830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.922 [2024-07-14 10:44:52.627861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.922 qpair failed and we were unable to recover it. 
00:36:07.922 [2024-07-14 10:44:52.628121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.922 [2024-07-14 10:44:52.628151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.922 qpair failed and we were unable to recover it. 00:36:07.922 [2024-07-14 10:44:52.628438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.922 [2024-07-14 10:44:52.628469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.922 qpair failed and we were unable to recover it. 00:36:07.922 [2024-07-14 10:44:52.628754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.922 [2024-07-14 10:44:52.628785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.922 qpair failed and we were unable to recover it. 00:36:07.922 [2024-07-14 10:44:52.629078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.922 [2024-07-14 10:44:52.629108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.922 qpair failed and we were unable to recover it. 00:36:07.922 [2024-07-14 10:44:52.629312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.922 [2024-07-14 10:44:52.629343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.922 qpair failed and we were unable to recover it. 00:36:07.922 [2024-07-14 10:44:52.629493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.922 [2024-07-14 10:44:52.629524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.922 qpair failed and we were unable to recover it. 00:36:07.922 [2024-07-14 10:44:52.629737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.922 [2024-07-14 10:44:52.629768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.922 qpair failed and we were unable to recover it. 00:36:07.922 [2024-07-14 10:44:52.630079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.922 [2024-07-14 10:44:52.630110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.922 qpair failed and we were unable to recover it. 00:36:07.922 [2024-07-14 10:44:52.630323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.922 [2024-07-14 10:44:52.630355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.922 qpair failed and we were unable to recover it. 00:36:07.922 [2024-07-14 10:44:52.630564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.922 [2024-07-14 10:44:52.630595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.922 qpair failed and we were unable to recover it. 
00:36:07.922 [2024-07-14 10:44:52.630726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.922 [2024-07-14 10:44:52.630756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.922 qpair failed and we were unable to recover it. 00:36:07.922 [2024-07-14 10:44:52.631047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.922 [2024-07-14 10:44:52.631078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.922 qpair failed and we were unable to recover it. 00:36:07.922 [2024-07-14 10:44:52.631286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.922 [2024-07-14 10:44:52.631317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.922 qpair failed and we were unable to recover it. 00:36:07.922 [2024-07-14 10:44:52.631606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.922 [2024-07-14 10:44:52.631636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.922 qpair failed and we were unable to recover it. 00:36:07.922 [2024-07-14 10:44:52.631793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.922 [2024-07-14 10:44:52.631824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.922 qpair failed and we were unable to recover it. 00:36:07.922 [2024-07-14 10:44:52.632112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.922 [2024-07-14 10:44:52.632143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.922 qpair failed and we were unable to recover it. 00:36:07.922 [2024-07-14 10:44:52.632357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.922 [2024-07-14 10:44:52.632389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.922 qpair failed and we were unable to recover it. 00:36:07.922 [2024-07-14 10:44:52.632673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.922 [2024-07-14 10:44:52.632704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.922 qpair failed and we were unable to recover it. 00:36:07.922 [2024-07-14 10:44:52.632998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.922 [2024-07-14 10:44:52.633029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.922 qpair failed and we were unable to recover it. 00:36:07.922 [2024-07-14 10:44:52.633316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.922 [2024-07-14 10:44:52.633353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.922 qpair failed and we were unable to recover it. 
00:36:07.922 [2024-07-14 10:44:52.633642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.922 [2024-07-14 10:44:52.633672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.922 qpair failed and we were unable to recover it. 00:36:07.922 [2024-07-14 10:44:52.633970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.922 [2024-07-14 10:44:52.634000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.922 qpair failed and we were unable to recover it. 00:36:07.922 [2024-07-14 10:44:52.634198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.922 [2024-07-14 10:44:52.634236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.922 qpair failed and we were unable to recover it. 00:36:07.922 [2024-07-14 10:44:52.634502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.922 [2024-07-14 10:44:52.634533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.922 qpair failed and we were unable to recover it. 00:36:07.922 [2024-07-14 10:44:52.634827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.922 [2024-07-14 10:44:52.634856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.922 qpair failed and we were unable to recover it. 00:36:07.922 [2024-07-14 10:44:52.635139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.922 [2024-07-14 10:44:52.635169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.922 qpair failed and we were unable to recover it. 00:36:07.922 [2024-07-14 10:44:52.635384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.922 [2024-07-14 10:44:52.635417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.922 qpair failed and we were unable to recover it. 00:36:07.922 [2024-07-14 10:44:52.635706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.922 [2024-07-14 10:44:52.635737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.922 qpair failed and we were unable to recover it. 00:36:07.922 [2024-07-14 10:44:52.636023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.922 [2024-07-14 10:44:52.636053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.922 qpair failed and we were unable to recover it. 00:36:07.922 [2024-07-14 10:44:52.636339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.922 [2024-07-14 10:44:52.636371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.922 qpair failed and we were unable to recover it. 
00:36:07.922 [2024-07-14 10:44:52.636667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.922 [2024-07-14 10:44:52.636697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.922 qpair failed and we were unable to recover it. 00:36:07.922 [2024-07-14 10:44:52.636994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.922 [2024-07-14 10:44:52.637024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.922 qpair failed and we were unable to recover it. 00:36:07.922 [2024-07-14 10:44:52.637311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.922 [2024-07-14 10:44:52.637342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.922 qpair failed and we were unable to recover it. 00:36:07.922 [2024-07-14 10:44:52.637638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.922 [2024-07-14 10:44:52.637669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.922 qpair failed and we were unable to recover it. 00:36:07.922 [2024-07-14 10:44:52.637884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.922 [2024-07-14 10:44:52.637914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.922 qpair failed and we were unable to recover it. 00:36:07.922 [2024-07-14 10:44:52.638108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.922 [2024-07-14 10:44:52.638138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.922 qpair failed and we were unable to recover it. 00:36:07.922 [2024-07-14 10:44:52.638396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.922 [2024-07-14 10:44:52.638428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.922 qpair failed and we were unable to recover it. 00:36:07.922 [2024-07-14 10:44:52.638639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.922 [2024-07-14 10:44:52.638671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.922 qpair failed and we were unable to recover it. 00:36:07.922 [2024-07-14 10:44:52.638860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.922 [2024-07-14 10:44:52.638890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.922 qpair failed and we were unable to recover it. 00:36:07.923 [2024-07-14 10:44:52.639083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.923 [2024-07-14 10:44:52.639114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.923 qpair failed and we were unable to recover it. 
00:36:07.923 [2024-07-14 10:44:52.639251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.923 [2024-07-14 10:44:52.639283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.923 qpair failed and we were unable to recover it. 00:36:07.923 [2024-07-14 10:44:52.639486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.923 [2024-07-14 10:44:52.639516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.923 qpair failed and we were unable to recover it. 00:36:07.923 [2024-07-14 10:44:52.639723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.923 [2024-07-14 10:44:52.639753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.923 qpair failed and we were unable to recover it. 00:36:07.923 [2024-07-14 10:44:52.640035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.923 [2024-07-14 10:44:52.640065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.923 qpair failed and we were unable to recover it. 00:36:07.923 [2024-07-14 10:44:52.640371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.923 [2024-07-14 10:44:52.640403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.923 qpair failed and we were unable to recover it. 00:36:07.923 [2024-07-14 10:44:52.640706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.923 [2024-07-14 10:44:52.640737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.923 qpair failed and we were unable to recover it. 00:36:07.923 [2024-07-14 10:44:52.641032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.923 [2024-07-14 10:44:52.641062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.923 qpair failed and we were unable to recover it. 00:36:07.923 [2024-07-14 10:44:52.641349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.923 [2024-07-14 10:44:52.641381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.923 qpair failed and we were unable to recover it. 00:36:07.923 [2024-07-14 10:44:52.641671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.923 [2024-07-14 10:44:52.641703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.923 qpair failed and we were unable to recover it. 00:36:07.923 [2024-07-14 10:44:52.641938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.923 [2024-07-14 10:44:52.641968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.923 qpair failed and we were unable to recover it. 
00:36:07.923 [2024-07-14 10:44:52.642243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.923 [2024-07-14 10:44:52.642278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.923 qpair failed and we were unable to recover it. 00:36:07.923 [2024-07-14 10:44:52.642572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.923 [2024-07-14 10:44:52.642603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.923 qpair failed and we were unable to recover it. 00:36:07.923 [2024-07-14 10:44:52.642906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.923 [2024-07-14 10:44:52.642936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.923 qpair failed and we were unable to recover it. 00:36:07.923 [2024-07-14 10:44:52.643123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.923 [2024-07-14 10:44:52.643153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.923 qpair failed and we were unable to recover it. 00:36:07.923 [2024-07-14 10:44:52.643369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.923 [2024-07-14 10:44:52.643400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.923 qpair failed and we were unable to recover it. 00:36:07.923 [2024-07-14 10:44:52.643613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.923 [2024-07-14 10:44:52.643644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.923 qpair failed and we were unable to recover it. 00:36:07.923 [2024-07-14 10:44:52.643924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.923 [2024-07-14 10:44:52.643954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.923 qpair failed and we were unable to recover it. 00:36:07.923 [2024-07-14 10:44:52.644111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.923 [2024-07-14 10:44:52.644141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.923 qpair failed and we were unable to recover it. 00:36:07.923 [2024-07-14 10:44:52.644426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.923 [2024-07-14 10:44:52.644458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.923 qpair failed and we were unable to recover it. 00:36:07.923 [2024-07-14 10:44:52.644726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.923 [2024-07-14 10:44:52.644755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.923 qpair failed and we were unable to recover it. 
00:36:07.923 [2024-07-14 10:44:52.644998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.923 [2024-07-14 10:44:52.645029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.923 qpair failed and we were unable to recover it. 00:36:07.923 [2024-07-14 10:44:52.645286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.923 [2024-07-14 10:44:52.645318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.923 qpair failed and we were unable to recover it. 00:36:07.923 [2024-07-14 10:44:52.645619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.923 [2024-07-14 10:44:52.645650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.923 qpair failed and we were unable to recover it. 00:36:07.923 [2024-07-14 10:44:52.645925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.923 [2024-07-14 10:44:52.645956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.923 qpair failed and we were unable to recover it. 00:36:07.923 [2024-07-14 10:44:52.646103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.923 [2024-07-14 10:44:52.646133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.923 qpair failed and we were unable to recover it. 00:36:07.923 [2024-07-14 10:44:52.646405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.923 [2024-07-14 10:44:52.646437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.923 qpair failed and we were unable to recover it. 00:36:07.923 [2024-07-14 10:44:52.646693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.923 [2024-07-14 10:44:52.646724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.923 qpair failed and we were unable to recover it. 00:36:07.923 [2024-07-14 10:44:52.647037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.923 [2024-07-14 10:44:52.647068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.923 qpair failed and we were unable to recover it. 00:36:07.923 [2024-07-14 10:44:52.647298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.923 [2024-07-14 10:44:52.647330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.923 qpair failed and we were unable to recover it. 00:36:07.923 [2024-07-14 10:44:52.647588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.923 [2024-07-14 10:44:52.647619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.923 qpair failed and we were unable to recover it. 
00:36:07.923 [2024-07-14 10:44:52.647809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.923 [2024-07-14 10:44:52.647839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.923 qpair failed and we were unable to recover it. 00:36:07.923 [2024-07-14 10:44:52.648051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.923 [2024-07-14 10:44:52.648082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.923 qpair failed and we were unable to recover it. 00:36:07.923 [2024-07-14 10:44:52.648339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.923 [2024-07-14 10:44:52.648371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.923 qpair failed and we were unable to recover it. 00:36:07.923 [2024-07-14 10:44:52.648633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.923 [2024-07-14 10:44:52.648664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.923 qpair failed and we were unable to recover it. 00:36:07.923 [2024-07-14 10:44:52.648976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.923 [2024-07-14 10:44:52.649006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.923 qpair failed and we were unable to recover it. 00:36:07.923 [2024-07-14 10:44:52.649259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.923 [2024-07-14 10:44:52.649292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.923 qpair failed and we were unable to recover it. 00:36:07.923 [2024-07-14 10:44:52.649583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.923 [2024-07-14 10:44:52.649613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.923 qpair failed and we were unable to recover it. 00:36:07.923 [2024-07-14 10:44:52.649750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.923 [2024-07-14 10:44:52.649779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.923 qpair failed and we were unable to recover it. 00:36:07.923 [2024-07-14 10:44:52.649964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.923 [2024-07-14 10:44:52.649995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.923 qpair failed and we were unable to recover it. 00:36:07.923 [2024-07-14 10:44:52.650287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.923 [2024-07-14 10:44:52.650318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.923 qpair failed and we were unable to recover it. 
00:36:07.923 [2024-07-14 10:44:52.650515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.924 [2024-07-14 10:44:52.650546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.924 qpair failed and we were unable to recover it. 00:36:07.924 [2024-07-14 10:44:52.650754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.924 [2024-07-14 10:44:52.650784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.924 qpair failed and we were unable to recover it. 00:36:07.924 [2024-07-14 10:44:52.650989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.924 [2024-07-14 10:44:52.651020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.924 qpair failed and we were unable to recover it. 00:36:07.924 [2024-07-14 10:44:52.651342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.924 [2024-07-14 10:44:52.651373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.924 qpair failed and we were unable to recover it. 00:36:07.924 [2024-07-14 10:44:52.651653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.924 [2024-07-14 10:44:52.651683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.924 qpair failed and we were unable to recover it. 00:36:07.924 [2024-07-14 10:44:52.651941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.924 [2024-07-14 10:44:52.651971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.924 qpair failed and we were unable to recover it. 00:36:07.924 [2024-07-14 10:44:52.652255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.924 [2024-07-14 10:44:52.652286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.924 qpair failed and we were unable to recover it. 00:36:07.924 [2024-07-14 10:44:52.652546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.924 [2024-07-14 10:44:52.652582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.924 qpair failed and we were unable to recover it. 00:36:07.924 [2024-07-14 10:44:52.652881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.924 [2024-07-14 10:44:52.652912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.924 qpair failed and we were unable to recover it. 00:36:07.924 [2024-07-14 10:44:52.653120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.924 [2024-07-14 10:44:52.653151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.924 qpair failed and we were unable to recover it. 
00:36:07.924 [2024-07-14 10:44:52.653337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.924 [2024-07-14 10:44:52.653369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.924 qpair failed and we were unable to recover it. 00:36:07.924 [2024-07-14 10:44:52.653653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.924 [2024-07-14 10:44:52.653683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.924 qpair failed and we were unable to recover it. 00:36:07.924 [2024-07-14 10:44:52.653949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.924 [2024-07-14 10:44:52.653980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.924 qpair failed and we were unable to recover it. 00:36:07.924 [2024-07-14 10:44:52.654118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.924 [2024-07-14 10:44:52.654148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.924 qpair failed and we were unable to recover it. 00:36:07.924 [2024-07-14 10:44:52.654423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.924 [2024-07-14 10:44:52.654454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.924 qpair failed and we were unable to recover it. 00:36:07.924 [2024-07-14 10:44:52.654609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.924 [2024-07-14 10:44:52.654640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.924 qpair failed and we were unable to recover it. 00:36:07.924 [2024-07-14 10:44:52.654856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.924 [2024-07-14 10:44:52.654886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.924 qpair failed and we were unable to recover it. 00:36:07.924 [2024-07-14 10:44:52.655192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.924 [2024-07-14 10:44:52.655223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.924 qpair failed and we were unable to recover it. 00:36:07.924 [2024-07-14 10:44:52.655514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.924 [2024-07-14 10:44:52.655544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.924 qpair failed and we were unable to recover it. 00:36:07.924 [2024-07-14 10:44:52.655837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.924 [2024-07-14 10:44:52.655868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.924 qpair failed and we were unable to recover it. 
00:36:07.924 [2024-07-14 10:44:52.656103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.924 [2024-07-14 10:44:52.656135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.924 qpair failed and we were unable to recover it. 00:36:07.924 [2024-07-14 10:44:52.656448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.924 [2024-07-14 10:44:52.656480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.924 qpair failed and we were unable to recover it. 00:36:07.924 [2024-07-14 10:44:52.656743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.924 [2024-07-14 10:44:52.656775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.924 qpair failed and we were unable to recover it. 00:36:07.924 [2024-07-14 10:44:52.656982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.924 [2024-07-14 10:44:52.657012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.924 qpair failed and we were unable to recover it. 00:36:07.924 [2024-07-14 10:44:52.657293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.924 [2024-07-14 10:44:52.657325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.924 qpair failed and we were unable to recover it. 00:36:07.924 [2024-07-14 10:44:52.657533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.924 [2024-07-14 10:44:52.657563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.924 qpair failed and we were unable to recover it. 00:36:07.924 [2024-07-14 10:44:52.657847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.924 [2024-07-14 10:44:52.657878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.924 qpair failed and we were unable to recover it. 00:36:07.924 [2024-07-14 10:44:52.658164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.924 [2024-07-14 10:44:52.658194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.924 qpair failed and we were unable to recover it. 00:36:07.924 [2024-07-14 10:44:52.658419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.924 [2024-07-14 10:44:52.658450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.924 qpair failed and we were unable to recover it. 00:36:07.924 [2024-07-14 10:44:52.658710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.924 [2024-07-14 10:44:52.658741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.924 qpair failed and we were unable to recover it. 
00:36:07.924 [2024-07-14 10:44:52.659046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.924 [2024-07-14 10:44:52.659076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.924 qpair failed and we were unable to recover it. 00:36:07.924 [2024-07-14 10:44:52.659348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.924 [2024-07-14 10:44:52.659380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.924 qpair failed and we were unable to recover it. 00:36:07.924 [2024-07-14 10:44:52.659634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.924 [2024-07-14 10:44:52.659664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.924 qpair failed and we were unable to recover it. 00:36:07.924 [2024-07-14 10:44:52.659950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.924 [2024-07-14 10:44:52.659981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.924 qpair failed and we were unable to recover it. 00:36:07.924 [2024-07-14 10:44:52.660275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.924 [2024-07-14 10:44:52.660312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.924 qpair failed and we were unable to recover it. 00:36:07.924 [2024-07-14 10:44:52.660525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.924 [2024-07-14 10:44:52.660554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.924 qpair failed and we were unable to recover it. 00:36:07.924 [2024-07-14 10:44:52.660814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.924 [2024-07-14 10:44:52.660845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.924 qpair failed and we were unable to recover it. 00:36:07.924 [2024-07-14 10:44:52.661044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.924 [2024-07-14 10:44:52.661074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.924 qpair failed and we were unable to recover it. 00:36:07.924 [2024-07-14 10:44:52.661263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.924 [2024-07-14 10:44:52.661295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.924 qpair failed and we were unable to recover it. 00:36:07.924 [2024-07-14 10:44:52.661578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.925 [2024-07-14 10:44:52.661608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.925 qpair failed and we were unable to recover it. 
00:36:07.925 [2024-07-14 10:44:52.661724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.925 [2024-07-14 10:44:52.661754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.925 qpair failed and we were unable to recover it. 00:36:07.925 [2024-07-14 10:44:52.661979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.925 [2024-07-14 10:44:52.662009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.925 qpair failed and we were unable to recover it. 00:36:07.925 [2024-07-14 10:44:52.662290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.925 [2024-07-14 10:44:52.662321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.925 qpair failed and we were unable to recover it. 00:36:07.925 [2024-07-14 10:44:52.662555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.925 [2024-07-14 10:44:52.662586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.925 qpair failed and we were unable to recover it. 00:36:07.925 [2024-07-14 10:44:52.662918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.925 [2024-07-14 10:44:52.662949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.925 qpair failed and we were unable to recover it. 00:36:07.925 [2024-07-14 10:44:52.663236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.925 [2024-07-14 10:44:52.663268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.925 qpair failed and we were unable to recover it. 00:36:07.925 [2024-07-14 10:44:52.663528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.925 [2024-07-14 10:44:52.663561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.925 qpair failed and we were unable to recover it. 00:36:07.925 [2024-07-14 10:44:52.663804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.925 [2024-07-14 10:44:52.663834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.925 qpair failed and we were unable to recover it. 00:36:07.925 [2024-07-14 10:44:52.664155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.925 [2024-07-14 10:44:52.664187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.925 qpair failed and we were unable to recover it. 00:36:07.925 [2024-07-14 10:44:52.664410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.925 [2024-07-14 10:44:52.664442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.925 qpair failed and we were unable to recover it. 
00:36:07.925 [2024-07-14 10:44:52.664583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.925 [2024-07-14 10:44:52.664613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.925 qpair failed and we were unable to recover it. 00:36:07.925 [2024-07-14 10:44:52.664907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.925 [2024-07-14 10:44:52.664937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.925 qpair failed and we were unable to recover it. 00:36:07.925 [2024-07-14 10:44:52.665145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.925 [2024-07-14 10:44:52.665176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.925 qpair failed and we were unable to recover it. 00:36:07.925 [2024-07-14 10:44:52.665462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.925 [2024-07-14 10:44:52.665494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.925 qpair failed and we were unable to recover it. 00:36:07.925 [2024-07-14 10:44:52.665783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.925 [2024-07-14 10:44:52.665814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.925 qpair failed and we were unable to recover it. 00:36:07.925 [2024-07-14 10:44:52.666010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.925 [2024-07-14 10:44:52.666040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.925 qpair failed and we were unable to recover it. 00:36:07.925 [2024-07-14 10:44:52.666314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.925 [2024-07-14 10:44:52.666346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.925 qpair failed and we were unable to recover it. 00:36:07.925 [2024-07-14 10:44:52.666642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.925 [2024-07-14 10:44:52.666673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.925 qpair failed and we were unable to recover it. 00:36:07.925 [2024-07-14 10:44:52.666900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.925 [2024-07-14 10:44:52.666931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.925 qpair failed and we were unable to recover it. 00:36:07.925 [2024-07-14 10:44:52.667245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.925 [2024-07-14 10:44:52.667277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.925 qpair failed and we were unable to recover it. 
00:36:07.925 [2024-07-14 10:44:52.667539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.925 [2024-07-14 10:44:52.667570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.925 qpair failed and we were unable to recover it. 00:36:07.925 [2024-07-14 10:44:52.667797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.925 [2024-07-14 10:44:52.667828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.925 qpair failed and we were unable to recover it. 00:36:07.925 [2024-07-14 10:44:52.668142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.925 [2024-07-14 10:44:52.668172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.925 qpair failed and we were unable to recover it. 00:36:07.925 [2024-07-14 10:44:52.668466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.925 [2024-07-14 10:44:52.668498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.925 qpair failed and we were unable to recover it. 00:36:07.925 [2024-07-14 10:44:52.668781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.925 [2024-07-14 10:44:52.668812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.925 qpair failed and we were unable to recover it. 00:36:07.925 [2024-07-14 10:44:52.669104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.925 [2024-07-14 10:44:52.669134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.925 qpair failed and we were unable to recover it. 00:36:07.925 [2024-07-14 10:44:52.669391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.925 [2024-07-14 10:44:52.669423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.925 qpair failed and we were unable to recover it. 00:36:07.925 [2024-07-14 10:44:52.669654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.925 [2024-07-14 10:44:52.669684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.925 qpair failed and we were unable to recover it. 00:36:07.925 [2024-07-14 10:44:52.669924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.925 [2024-07-14 10:44:52.669955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.925 qpair failed and we were unable to recover it. 00:36:07.925 [2024-07-14 10:44:52.670146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.925 [2024-07-14 10:44:52.670177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.925 qpair failed and we were unable to recover it. 
00:36:07.925 [2024-07-14 10:44:52.670376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.925 [2024-07-14 10:44:52.670408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.925 qpair failed and we were unable to recover it. 00:36:07.925 [2024-07-14 10:44:52.670684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.925 [2024-07-14 10:44:52.670715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.925 qpair failed and we were unable to recover it. 00:36:07.925 [2024-07-14 10:44:52.670990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.925 [2024-07-14 10:44:52.671020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.925 qpair failed and we were unable to recover it. 00:36:07.925 [2024-07-14 10:44:52.671248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.925 [2024-07-14 10:44:52.671281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.925 qpair failed and we were unable to recover it. 00:36:07.925 [2024-07-14 10:44:52.671472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.925 [2024-07-14 10:44:52.671503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.925 qpair failed and we were unable to recover it. 00:36:07.925 [2024-07-14 10:44:52.671749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.925 [2024-07-14 10:44:52.671780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.925 qpair failed and we were unable to recover it. 00:36:07.925 [2024-07-14 10:44:52.672001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.925 [2024-07-14 10:44:52.672031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.926 qpair failed and we were unable to recover it. 00:36:07.926 [2024-07-14 10:44:52.672289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.926 [2024-07-14 10:44:52.672321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.926 qpair failed and we were unable to recover it. 00:36:07.926 [2024-07-14 10:44:52.672584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.926 [2024-07-14 10:44:52.672614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.926 qpair failed and we were unable to recover it. 00:36:07.926 [2024-07-14 10:44:52.672877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.926 [2024-07-14 10:44:52.672907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.926 qpair failed and we were unable to recover it. 
00:36:07.926 [2024-07-14 10:44:52.673161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.926 [2024-07-14 10:44:52.673191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.926 qpair failed and we were unable to recover it. 00:36:07.926 [2024-07-14 10:44:52.673509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.926 [2024-07-14 10:44:52.673541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.926 qpair failed and we were unable to recover it. 00:36:07.926 [2024-07-14 10:44:52.673748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.926 [2024-07-14 10:44:52.673778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.926 qpair failed and we were unable to recover it. 00:36:07.926 [2024-07-14 10:44:52.674064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.926 [2024-07-14 10:44:52.674095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.926 qpair failed and we were unable to recover it. 00:36:07.926 [2024-07-14 10:44:52.674399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.926 [2024-07-14 10:44:52.674430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.926 qpair failed and we were unable to recover it. 00:36:07.926 [2024-07-14 10:44:52.674708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.926 [2024-07-14 10:44:52.674739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.926 qpair failed and we were unable to recover it. 00:36:07.926 [2024-07-14 10:44:52.674939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.926 [2024-07-14 10:44:52.674969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.926 qpair failed and we were unable to recover it. 00:36:07.926 [2024-07-14 10:44:52.675160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.926 [2024-07-14 10:44:52.675190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.926 qpair failed and we were unable to recover it. 00:36:07.926 [2024-07-14 10:44:52.675389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.926 [2024-07-14 10:44:52.675420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.926 qpair failed and we were unable to recover it. 00:36:07.926 [2024-07-14 10:44:52.675713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.926 [2024-07-14 10:44:52.675743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.926 qpair failed and we were unable to recover it. 
00:36:07.926 [2024-07-14 10:44:52.676002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.926 [2024-07-14 10:44:52.676033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.926 qpair failed and we were unable to recover it. 00:36:07.926 [2024-07-14 10:44:52.676282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.926 [2024-07-14 10:44:52.676314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.926 qpair failed and we were unable to recover it. 00:36:07.926 [2024-07-14 10:44:52.676503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.926 [2024-07-14 10:44:52.676534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.926 qpair failed and we were unable to recover it. 00:36:07.926 [2024-07-14 10:44:52.676842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.926 [2024-07-14 10:44:52.676873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.926 qpair failed and we were unable to recover it. 00:36:07.926 [2024-07-14 10:44:52.677145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.926 [2024-07-14 10:44:52.677176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.926 qpair failed and we were unable to recover it. 00:36:07.926 [2024-07-14 10:44:52.677473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.926 [2024-07-14 10:44:52.677505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.926 qpair failed and we were unable to recover it. 00:36:07.926 [2024-07-14 10:44:52.677786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.926 [2024-07-14 10:44:52.677817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.926 qpair failed and we were unable to recover it. 00:36:07.926 [2024-07-14 10:44:52.678105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.926 [2024-07-14 10:44:52.678136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.926 qpair failed and we were unable to recover it. 00:36:07.926 [2024-07-14 10:44:52.678422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.926 [2024-07-14 10:44:52.678454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.926 qpair failed and we were unable to recover it. 00:36:07.926 [2024-07-14 10:44:52.678603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.926 [2024-07-14 10:44:52.678633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.926 qpair failed and we were unable to recover it. 
00:36:07.926 [2024-07-14 10:44:52.678907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.926 [2024-07-14 10:44:52.678937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.926 qpair failed and we were unable to recover it. 00:36:07.926 [2024-07-14 10:44:52.679135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.926 [2024-07-14 10:44:52.679165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.926 qpair failed and we were unable to recover it. 00:36:07.926 [2024-07-14 10:44:52.679450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.926 [2024-07-14 10:44:52.679487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.926 qpair failed and we were unable to recover it. 00:36:07.926 [2024-07-14 10:44:52.679779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.926 [2024-07-14 10:44:52.679810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.926 qpair failed and we were unable to recover it. 00:36:07.926 [2024-07-14 10:44:52.680089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.926 [2024-07-14 10:44:52.680119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.926 qpair failed and we were unable to recover it. 00:36:07.926 [2024-07-14 10:44:52.680322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.926 [2024-07-14 10:44:52.680354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.926 qpair failed and we were unable to recover it. 00:36:07.926 [2024-07-14 10:44:52.680613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.926 [2024-07-14 10:44:52.680644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.926 qpair failed and we were unable to recover it. 00:36:07.926 [2024-07-14 10:44:52.680958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.926 [2024-07-14 10:44:52.680989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.926 qpair failed and we were unable to recover it. 00:36:07.926 [2024-07-14 10:44:52.681274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.926 [2024-07-14 10:44:52.681306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.926 qpair failed and we were unable to recover it. 00:36:07.926 [2024-07-14 10:44:52.681592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.926 [2024-07-14 10:44:52.681623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.926 qpair failed and we were unable to recover it. 
00:36:07.926 [2024-07-14 10:44:52.681912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.926 [2024-07-14 10:44:52.681943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.926 qpair failed and we were unable to recover it. 00:36:07.926 [2024-07-14 10:44:52.682236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.926 [2024-07-14 10:44:52.682267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.926 qpair failed and we were unable to recover it. 00:36:07.926 [2024-07-14 10:44:52.682488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.926 [2024-07-14 10:44:52.682518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.926 qpair failed and we were unable to recover it. 00:36:07.926 [2024-07-14 10:44:52.682804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.926 [2024-07-14 10:44:52.682834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.926 qpair failed and we were unable to recover it. 00:36:07.926 [2024-07-14 10:44:52.683128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.926 [2024-07-14 10:44:52.683159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.926 qpair failed and we were unable to recover it. 00:36:07.926 [2024-07-14 10:44:52.683374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.926 [2024-07-14 10:44:52.683406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.926 qpair failed and we were unable to recover it. 00:36:07.926 [2024-07-14 10:44:52.683694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.926 [2024-07-14 10:44:52.683724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.926 qpair failed and we were unable to recover it. 00:36:07.926 [2024-07-14 10:44:52.683931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.926 [2024-07-14 10:44:52.683961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.926 qpair failed and we were unable to recover it. 00:36:07.926 [2024-07-14 10:44:52.684200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.927 [2024-07-14 10:44:52.684254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.927 qpair failed and we were unable to recover it. 00:36:07.927 [2024-07-14 10:44:52.684549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.927 [2024-07-14 10:44:52.684580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.927 qpair failed and we were unable to recover it. 
00:36:07.927 [2024-07-14 10:44:52.684890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.927 [2024-07-14 10:44:52.684920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.927 qpair failed and we were unable to recover it. 00:36:07.927 [2024-07-14 10:44:52.685192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.927 [2024-07-14 10:44:52.685223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.927 qpair failed and we were unable to recover it. 00:36:07.927 [2024-07-14 10:44:52.685525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.927 [2024-07-14 10:44:52.685556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.927 qpair failed and we were unable to recover it. 00:36:07.927 [2024-07-14 10:44:52.685831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.927 [2024-07-14 10:44:52.685862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.927 qpair failed and we were unable to recover it. 00:36:07.927 [2024-07-14 10:44:52.686095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.927 [2024-07-14 10:44:52.686126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.927 qpair failed and we were unable to recover it. 00:36:07.927 [2024-07-14 10:44:52.686391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.927 [2024-07-14 10:44:52.686423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.927 qpair failed and we were unable to recover it. 00:36:07.927 [2024-07-14 10:44:52.686654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.927 [2024-07-14 10:44:52.686685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.927 qpair failed and we were unable to recover it. 00:36:07.927 [2024-07-14 10:44:52.686899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.927 [2024-07-14 10:44:52.686929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.927 qpair failed and we were unable to recover it. 00:36:07.927 [2024-07-14 10:44:52.687134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.927 [2024-07-14 10:44:52.687165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.927 qpair failed and we were unable to recover it. 00:36:07.927 [2024-07-14 10:44:52.687440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.927 [2024-07-14 10:44:52.687478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.927 qpair failed and we were unable to recover it. 
00:36:07.927 [2024-07-14 10:44:52.687683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.927 [2024-07-14 10:44:52.687714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.927 qpair failed and we were unable to recover it. 00:36:07.927 [2024-07-14 10:44:52.688023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.927 [2024-07-14 10:44:52.688054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.927 qpair failed and we were unable to recover it. 00:36:07.927 [2024-07-14 10:44:52.688333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.927 [2024-07-14 10:44:52.688365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.927 qpair failed and we were unable to recover it. 00:36:07.927 [2024-07-14 10:44:52.688571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.927 [2024-07-14 10:44:52.688602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.927 qpair failed and we were unable to recover it. 00:36:07.927 [2024-07-14 10:44:52.688807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.927 [2024-07-14 10:44:52.688837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.927 qpair failed and we were unable to recover it. 00:36:07.927 [2024-07-14 10:44:52.689122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.927 [2024-07-14 10:44:52.689153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.927 qpair failed and we were unable to recover it. 00:36:07.927 [2024-07-14 10:44:52.689353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.927 [2024-07-14 10:44:52.689385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.927 qpair failed and we were unable to recover it. 00:36:07.927 [2024-07-14 10:44:52.689517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.927 [2024-07-14 10:44:52.689548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.927 qpair failed and we were unable to recover it. 00:36:07.927 [2024-07-14 10:44:52.689832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.927 [2024-07-14 10:44:52.689862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.927 qpair failed and we were unable to recover it. 00:36:07.927 [2024-07-14 10:44:52.690096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.927 [2024-07-14 10:44:52.690126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.927 qpair failed and we were unable to recover it. 
00:36:07.927 [2024-07-14 10:44:52.690352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.927 [2024-07-14 10:44:52.690384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.927 qpair failed and we were unable to recover it. 00:36:07.927 [2024-07-14 10:44:52.690666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.927 [2024-07-14 10:44:52.690697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.927 qpair failed and we were unable to recover it. 00:36:07.927 [2024-07-14 10:44:52.690910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.927 [2024-07-14 10:44:52.690940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.927 qpair failed and we were unable to recover it. 00:36:07.927 [2024-07-14 10:44:52.691241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.927 [2024-07-14 10:44:52.691274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.927 qpair failed and we were unable to recover it. 00:36:07.927 [2024-07-14 10:44:52.691469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.927 [2024-07-14 10:44:52.691499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.927 qpair failed and we were unable to recover it. 00:36:07.927 [2024-07-14 10:44:52.691754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.927 [2024-07-14 10:44:52.691784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.927 qpair failed and we were unable to recover it. 00:36:07.927 [2024-07-14 10:44:52.691974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.927 [2024-07-14 10:44:52.692005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.927 qpair failed and we were unable to recover it. 00:36:07.927 [2024-07-14 10:44:52.692290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.927 [2024-07-14 10:44:52.692321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.927 qpair failed and we were unable to recover it. 00:36:07.927 [2024-07-14 10:44:52.692519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.927 [2024-07-14 10:44:52.692551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.927 qpair failed and we were unable to recover it. 00:36:07.927 [2024-07-14 10:44:52.692858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.927 [2024-07-14 10:44:52.692888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.927 qpair failed and we were unable to recover it. 
00:36:07.927 [2024-07-14 10:44:52.693077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.927 [2024-07-14 10:44:52.693107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.927 qpair failed and we were unable to recover it. 00:36:07.927 [2024-07-14 10:44:52.693314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.927 [2024-07-14 10:44:52.693346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.927 qpair failed and we were unable to recover it. 00:36:07.927 [2024-07-14 10:44:52.693621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.927 [2024-07-14 10:44:52.693652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.927 qpair failed and we were unable to recover it. 00:36:07.927 [2024-07-14 10:44:52.693865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.927 [2024-07-14 10:44:52.693896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.927 qpair failed and we were unable to recover it. 00:36:07.927 [2024-07-14 10:44:52.694205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.927 [2024-07-14 10:44:52.694247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.927 qpair failed and we were unable to recover it. 00:36:07.927 [2024-07-14 10:44:52.694484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.927 [2024-07-14 10:44:52.694514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.927 qpair failed and we were unable to recover it. 00:36:07.927 [2024-07-14 10:44:52.694835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.927 [2024-07-14 10:44:52.694871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.927 qpair failed and we were unable to recover it. 00:36:07.927 [2024-07-14 10:44:52.695090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.927 [2024-07-14 10:44:52.695120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.927 qpair failed and we were unable to recover it. 00:36:07.927 [2024-07-14 10:44:52.695409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.927 [2024-07-14 10:44:52.695441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.927 qpair failed and we were unable to recover it. 00:36:07.927 [2024-07-14 10:44:52.695728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-07-14 10:44:52.695759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 
00:36:07.928 [2024-07-14 10:44:52.696046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-07-14 10:44:52.696076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 00:36:07.928 [2024-07-14 10:44:52.696274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-07-14 10:44:52.696306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 00:36:07.928 [2024-07-14 10:44:52.696559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-07-14 10:44:52.696589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 00:36:07.928 [2024-07-14 10:44:52.696795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-07-14 10:44:52.696827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 00:36:07.928 [2024-07-14 10:44:52.697033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-07-14 10:44:52.697063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 00:36:07.928 [2024-07-14 10:44:52.697346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-07-14 10:44:52.697378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 00:36:07.928 [2024-07-14 10:44:52.697585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-07-14 10:44:52.697616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 00:36:07.928 [2024-07-14 10:44:52.697878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-07-14 10:44:52.697908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 00:36:07.928 [2024-07-14 10:44:52.698045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-07-14 10:44:52.698075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 00:36:07.928 [2024-07-14 10:44:52.698206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-07-14 10:44:52.698255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 
00:36:07.928 [2024-07-14 10:44:52.698544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-07-14 10:44:52.698575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 00:36:07.928 [2024-07-14 10:44:52.698769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-07-14 10:44:52.698799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 00:36:07.928 [2024-07-14 10:44:52.699069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-07-14 10:44:52.699100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 00:36:07.928 [2024-07-14 10:44:52.699405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-07-14 10:44:52.699437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 00:36:07.928 [2024-07-14 10:44:52.699713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-07-14 10:44:52.699743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 00:36:07.928 [2024-07-14 10:44:52.699982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-07-14 10:44:52.700013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 00:36:07.928 [2024-07-14 10:44:52.700269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-07-14 10:44:52.700300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 00:36:07.928 [2024-07-14 10:44:52.700509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-07-14 10:44:52.700540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 00:36:07.928 [2024-07-14 10:44:52.700843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-07-14 10:44:52.700873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 00:36:07.928 [2024-07-14 10:44:52.701160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-07-14 10:44:52.701191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 
00:36:07.928 [2024-07-14 10:44:52.701419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-07-14 10:44:52.701451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 00:36:07.928 [2024-07-14 10:44:52.701751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-07-14 10:44:52.701782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 00:36:07.928 [2024-07-14 10:44:52.701944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-07-14 10:44:52.701974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 00:36:07.928 [2024-07-14 10:44:52.702246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-07-14 10:44:52.702278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 00:36:07.928 [2024-07-14 10:44:52.702488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-07-14 10:44:52.702518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 00:36:07.928 [2024-07-14 10:44:52.702785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-07-14 10:44:52.702815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 00:36:07.928 [2024-07-14 10:44:52.703100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-07-14 10:44:52.703131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 00:36:07.928 [2024-07-14 10:44:52.703424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-07-14 10:44:52.703457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 00:36:07.928 [2024-07-14 10:44:52.703671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-07-14 10:44:52.703702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 00:36:07.928 [2024-07-14 10:44:52.703974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-07-14 10:44:52.704004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 
00:36:07.928 [2024-07-14 10:44:52.704269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-07-14 10:44:52.704302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 00:36:07.928 [2024-07-14 10:44:52.704512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-07-14 10:44:52.704542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 00:36:07.928 [2024-07-14 10:44:52.704742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-07-14 10:44:52.704773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 00:36:07.928 [2024-07-14 10:44:52.704958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-07-14 10:44:52.704988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 00:36:07.928 [2024-07-14 10:44:52.705179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-07-14 10:44:52.705208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 00:36:07.928 [2024-07-14 10:44:52.705443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-07-14 10:44:52.705475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 00:36:07.928 [2024-07-14 10:44:52.705680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-07-14 10:44:52.705710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 00:36:07.928 [2024-07-14 10:44:52.705910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-07-14 10:44:52.705941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 00:36:07.928 [2024-07-14 10:44:52.706126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-07-14 10:44:52.706157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 00:36:07.928 [2024-07-14 10:44:52.706423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-07-14 10:44:52.706455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 
00:36:07.928 [2024-07-14 10:44:52.706653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-07-14 10:44:52.706684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 00:36:07.929 [2024-07-14 10:44:52.706968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.929 [2024-07-14 10:44:52.706998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.929 qpair failed and we were unable to recover it. 00:36:07.929 [2024-07-14 10:44:52.707237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.929 [2024-07-14 10:44:52.707268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.929 qpair failed and we were unable to recover it. 00:36:07.929 [2024-07-14 10:44:52.707544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.929 [2024-07-14 10:44:52.707575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.929 qpair failed and we were unable to recover it. 00:36:07.929 [2024-07-14 10:44:52.707853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.929 [2024-07-14 10:44:52.707883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.929 qpair failed and we were unable to recover it. 00:36:07.929 [2024-07-14 10:44:52.708174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.929 [2024-07-14 10:44:52.708204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.929 qpair failed and we were unable to recover it. 00:36:07.929 [2024-07-14 10:44:52.708496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.929 [2024-07-14 10:44:52.708527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.929 qpair failed and we were unable to recover it. 00:36:07.929 [2024-07-14 10:44:52.708652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.929 [2024-07-14 10:44:52.708682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.929 qpair failed and we were unable to recover it. 00:36:07.929 [2024-07-14 10:44:52.708941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.929 [2024-07-14 10:44:52.708972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.929 qpair failed and we were unable to recover it. 00:36:07.929 [2024-07-14 10:44:52.709238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.929 [2024-07-14 10:44:52.709270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.929 qpair failed and we were unable to recover it. 
00:36:07.929 [2024-07-14 10:44:52.709528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.929 [2024-07-14 10:44:52.709559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.929 qpair failed and we were unable to recover it. 00:36:07.929 [2024-07-14 10:44:52.709840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.929 [2024-07-14 10:44:52.709871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.929 qpair failed and we were unable to recover it. 00:36:07.929 [2024-07-14 10:44:52.710060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.929 [2024-07-14 10:44:52.710090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.929 qpair failed and we were unable to recover it. 00:36:07.929 [2024-07-14 10:44:52.710352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.929 [2024-07-14 10:44:52.710383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.929 qpair failed and we were unable to recover it. 00:36:07.929 [2024-07-14 10:44:52.710639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.929 [2024-07-14 10:44:52.710671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.929 qpair failed and we were unable to recover it. 00:36:07.929 [2024-07-14 10:44:52.710896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.929 [2024-07-14 10:44:52.710928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.929 qpair failed and we were unable to recover it. 00:36:07.929 [2024-07-14 10:44:52.711183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.929 [2024-07-14 10:44:52.711213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.929 qpair failed and we were unable to recover it. 00:36:07.929 [2024-07-14 10:44:52.711530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.929 [2024-07-14 10:44:52.711561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.929 qpair failed and we were unable to recover it. 00:36:07.929 [2024-07-14 10:44:52.711780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.929 [2024-07-14 10:44:52.711811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.929 qpair failed and we were unable to recover it. 00:36:07.929 [2024-07-14 10:44:52.712083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.929 [2024-07-14 10:44:52.712114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.929 qpair failed and we were unable to recover it. 
00:36:07.929 [2024-07-14 10:44:52.712408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.929 [2024-07-14 10:44:52.712439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.929 qpair failed and we were unable to recover it. 00:36:07.929 [2024-07-14 10:44:52.712722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.929 [2024-07-14 10:44:52.712753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.929 qpair failed and we were unable to recover it. 00:36:07.929 [2024-07-14 10:44:52.712946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.929 [2024-07-14 10:44:52.712976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.929 qpair failed and we were unable to recover it. 00:36:07.929 [2024-07-14 10:44:52.713188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.929 [2024-07-14 10:44:52.713218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.929 qpair failed and we were unable to recover it. 00:36:07.929 [2024-07-14 10:44:52.713483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.929 [2024-07-14 10:44:52.713518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.929 qpair failed and we were unable to recover it. 00:36:07.929 [2024-07-14 10:44:52.713730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.929 [2024-07-14 10:44:52.713760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.929 qpair failed and we were unable to recover it. 00:36:07.929 [2024-07-14 10:44:52.714028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.929 [2024-07-14 10:44:52.714058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.929 qpair failed and we were unable to recover it. 00:36:07.929 [2024-07-14 10:44:52.714302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.929 [2024-07-14 10:44:52.714334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.929 qpair failed and we were unable to recover it. 00:36:07.929 [2024-07-14 10:44:52.714588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.929 [2024-07-14 10:44:52.714619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.929 qpair failed and we were unable to recover it. 00:36:07.929 [2024-07-14 10:44:52.714926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.929 [2024-07-14 10:44:52.714957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.929 qpair failed and we were unable to recover it. 
00:36:07.929 [2024-07-14 10:44:52.715269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.929 [2024-07-14 10:44:52.715301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420
00:36:07.929 qpair failed and we were unable to recover it.
[... the same connect() failed / sock connection error / qpair failed sequence repeats back-to-back for tqpair=0x1b1fb60, timestamps 10:44:52.715 through 10:44:52.764, every attempt targeting addr=10.0.0.2, port=4420 with errno = 111 ...]
00:36:07.934 [2024-07-14 10:44:52.764457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.934 [2024-07-14 10:44:52.764511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420
00:36:07.934 qpair failed and we were unable to recover it.
[... the sequence then continues identically for tqpair=0x7fbe7c000b90, timestamps 10:44:52.764 through 10:44:52.775, still against addr=10.0.0.2, port=4420 with errno = 111 ...]
00:36:07.935 [2024-07-14 10:44:52.775631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.935 [2024-07-14 10:44:52.775661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420
00:36:07.935 qpair failed and we were unable to recover it.
00:36:07.935 [2024-07-14 10:44:52.775949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.935 [2024-07-14 10:44:52.775980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.935 qpair failed and we were unable to recover it. 00:36:07.935 [2024-07-14 10:44:52.776270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.935 [2024-07-14 10:44:52.776302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.935 qpair failed and we were unable to recover it. 00:36:07.935 [2024-07-14 10:44:52.776557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.935 [2024-07-14 10:44:52.776588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.935 qpair failed and we were unable to recover it. 00:36:07.935 [2024-07-14 10:44:52.776794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.935 [2024-07-14 10:44:52.776825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.935 qpair failed and we were unable to recover it. 00:36:07.935 [2024-07-14 10:44:52.777106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.935 [2024-07-14 10:44:52.777136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.935 qpair failed and we were unable to recover it. 00:36:07.935 [2024-07-14 10:44:52.777402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.935 [2024-07-14 10:44:52.777434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.935 qpair failed and we were unable to recover it. 00:36:07.935 [2024-07-14 10:44:52.777663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.935 [2024-07-14 10:44:52.777693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.935 qpair failed and we were unable to recover it. 00:36:07.935 [2024-07-14 10:44:52.777880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.935 [2024-07-14 10:44:52.777911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.935 qpair failed and we were unable to recover it. 00:36:07.935 [2024-07-14 10:44:52.778188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.935 [2024-07-14 10:44:52.778219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.935 qpair failed and we were unable to recover it. 00:36:07.935 [2024-07-14 10:44:52.778544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.935 [2024-07-14 10:44:52.778575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.935 qpair failed and we were unable to recover it. 
00:36:07.935 [2024-07-14 10:44:52.778851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.935 [2024-07-14 10:44:52.778881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.935 qpair failed and we were unable to recover it. 00:36:07.935 [2024-07-14 10:44:52.779072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.935 [2024-07-14 10:44:52.779103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.935 qpair failed and we were unable to recover it. 00:36:07.935 [2024-07-14 10:44:52.779386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.935 [2024-07-14 10:44:52.779417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.935 qpair failed and we were unable to recover it. 00:36:07.935 [2024-07-14 10:44:52.779648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.935 [2024-07-14 10:44:52.779678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.935 qpair failed and we were unable to recover it. 00:36:07.935 [2024-07-14 10:44:52.779958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.935 [2024-07-14 10:44:52.779989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.935 qpair failed and we were unable to recover it. 00:36:07.935 [2024-07-14 10:44:52.780127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.935 [2024-07-14 10:44:52.780157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.935 qpair failed and we were unable to recover it. 00:36:07.935 [2024-07-14 10:44:52.780440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.935 [2024-07-14 10:44:52.780472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.935 qpair failed and we were unable to recover it. 00:36:07.935 [2024-07-14 10:44:52.780759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.935 [2024-07-14 10:44:52.780790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.935 qpair failed and we were unable to recover it. 00:36:07.935 [2024-07-14 10:44:52.781072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.935 [2024-07-14 10:44:52.781102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.935 qpair failed and we were unable to recover it. 00:36:07.935 [2024-07-14 10:44:52.781398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.935 [2024-07-14 10:44:52.781430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.935 qpair failed and we were unable to recover it. 
00:36:07.935 [2024-07-14 10:44:52.781715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.935 [2024-07-14 10:44:52.781745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.935 qpair failed and we were unable to recover it. 00:36:07.935 [2024-07-14 10:44:52.782033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.935 [2024-07-14 10:44:52.782064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.935 qpair failed and we were unable to recover it. 00:36:07.935 [2024-07-14 10:44:52.782271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.935 [2024-07-14 10:44:52.782303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.935 qpair failed and we were unable to recover it. 00:36:07.935 [2024-07-14 10:44:52.782558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.935 [2024-07-14 10:44:52.782588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.935 qpair failed and we were unable to recover it. 00:36:07.935 [2024-07-14 10:44:52.782860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.935 [2024-07-14 10:44:52.782890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.935 qpair failed and we were unable to recover it. 00:36:07.935 [2024-07-14 10:44:52.783104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.935 [2024-07-14 10:44:52.783134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.935 qpair failed and we were unable to recover it. 00:36:07.935 [2024-07-14 10:44:52.783372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.935 [2024-07-14 10:44:52.783404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.935 qpair failed and we were unable to recover it. 00:36:07.935 [2024-07-14 10:44:52.783542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.935 [2024-07-14 10:44:52.783577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.935 qpair failed and we were unable to recover it. 00:36:07.935 [2024-07-14 10:44:52.783769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.935 [2024-07-14 10:44:52.783800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.935 qpair failed and we were unable to recover it. 00:36:07.935 [2024-07-14 10:44:52.784058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.935 [2024-07-14 10:44:52.784089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.935 qpair failed and we were unable to recover it. 
00:36:07.935 [2024-07-14 10:44:52.784293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.935 [2024-07-14 10:44:52.784324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.935 qpair failed and we were unable to recover it. 00:36:07.935 [2024-07-14 10:44:52.784600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.935 [2024-07-14 10:44:52.784631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.935 qpair failed and we were unable to recover it. 00:36:07.935 [2024-07-14 10:44:52.784933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.935 [2024-07-14 10:44:52.784963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.935 qpair failed and we were unable to recover it. 00:36:07.935 [2024-07-14 10:44:52.785263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.935 [2024-07-14 10:44:52.785295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.935 qpair failed and we were unable to recover it. 00:36:07.935 [2024-07-14 10:44:52.785501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.935 [2024-07-14 10:44:52.785532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.935 qpair failed and we were unable to recover it. 00:36:07.935 [2024-07-14 10:44:52.785789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.935 [2024-07-14 10:44:52.785819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.935 qpair failed and we were unable to recover it. 00:36:07.935 [2024-07-14 10:44:52.786125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.935 [2024-07-14 10:44:52.786156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.935 qpair failed and we were unable to recover it. 00:36:07.935 [2024-07-14 10:44:52.786431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.935 [2024-07-14 10:44:52.786463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.935 qpair failed and we were unable to recover it. 00:36:07.936 [2024-07-14 10:44:52.786650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.936 [2024-07-14 10:44:52.786680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.936 qpair failed and we were unable to recover it. 00:36:07.936 [2024-07-14 10:44:52.786935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.936 [2024-07-14 10:44:52.786965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.936 qpair failed and we were unable to recover it. 
00:36:07.936 [2024-07-14 10:44:52.787250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.936 [2024-07-14 10:44:52.787281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.936 qpair failed and we were unable to recover it. 00:36:07.936 [2024-07-14 10:44:52.787596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.936 [2024-07-14 10:44:52.787627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.936 qpair failed and we were unable to recover it. 00:36:07.936 [2024-07-14 10:44:52.787889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.936 [2024-07-14 10:44:52.787919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.936 qpair failed and we were unable to recover it. 00:36:07.936 [2024-07-14 10:44:52.788223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.936 [2024-07-14 10:44:52.788263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.936 qpair failed and we were unable to recover it. 00:36:07.936 [2024-07-14 10:44:52.788531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.936 [2024-07-14 10:44:52.788561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.936 qpair failed and we were unable to recover it. 00:36:07.936 [2024-07-14 10:44:52.788864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.936 [2024-07-14 10:44:52.788895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.936 qpair failed and we were unable to recover it. 00:36:07.936 [2024-07-14 10:44:52.789090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.936 [2024-07-14 10:44:52.789120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.936 qpair failed and we were unable to recover it. 00:36:07.936 [2024-07-14 10:44:52.789317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.936 [2024-07-14 10:44:52.789349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.936 qpair failed and we were unable to recover it. 00:36:07.936 [2024-07-14 10:44:52.789549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.936 [2024-07-14 10:44:52.789579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.936 qpair failed and we were unable to recover it. 00:36:07.936 [2024-07-14 10:44:52.789812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.936 [2024-07-14 10:44:52.789843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.936 qpair failed and we were unable to recover it. 
00:36:07.936 [2024-07-14 10:44:52.790042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.936 [2024-07-14 10:44:52.790071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.936 qpair failed and we were unable to recover it. 00:36:07.936 [2024-07-14 10:44:52.790350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.936 [2024-07-14 10:44:52.790382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.936 qpair failed and we were unable to recover it. 00:36:07.936 [2024-07-14 10:44:52.790666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.936 [2024-07-14 10:44:52.790696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.936 qpair failed and we were unable to recover it. 00:36:07.936 [2024-07-14 10:44:52.790986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.936 [2024-07-14 10:44:52.791017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.936 qpair failed and we were unable to recover it. 00:36:07.936 [2024-07-14 10:44:52.791305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.936 [2024-07-14 10:44:52.791336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.936 qpair failed and we were unable to recover it. 00:36:07.936 [2024-07-14 10:44:52.791539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.936 [2024-07-14 10:44:52.791569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.936 qpair failed and we were unable to recover it. 00:36:07.936 [2024-07-14 10:44:52.791851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.936 [2024-07-14 10:44:52.791881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.936 qpair failed and we were unable to recover it. 00:36:07.936 [2024-07-14 10:44:52.792086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.936 [2024-07-14 10:44:52.792117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.936 qpair failed and we were unable to recover it. 00:36:07.936 [2024-07-14 10:44:52.792350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.936 [2024-07-14 10:44:52.792381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.936 qpair failed and we were unable to recover it. 00:36:07.936 [2024-07-14 10:44:52.792655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.936 [2024-07-14 10:44:52.792685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.936 qpair failed and we were unable to recover it. 
00:36:07.936 [2024-07-14 10:44:52.792886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.936 [2024-07-14 10:44:52.792917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.936 qpair failed and we were unable to recover it. 00:36:07.936 [2024-07-14 10:44:52.793205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.936 [2024-07-14 10:44:52.793260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.936 qpair failed and we were unable to recover it. 00:36:07.936 [2024-07-14 10:44:52.793529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.936 [2024-07-14 10:44:52.793560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.936 qpair failed and we were unable to recover it. 00:36:07.936 [2024-07-14 10:44:52.793840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.936 [2024-07-14 10:44:52.793871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.936 qpair failed and we were unable to recover it. 00:36:07.936 [2024-07-14 10:44:52.794062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.936 [2024-07-14 10:44:52.794091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.936 qpair failed and we were unable to recover it. 00:36:07.936 [2024-07-14 10:44:52.794297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.936 [2024-07-14 10:44:52.794329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.936 qpair failed and we were unable to recover it. 00:36:07.936 [2024-07-14 10:44:52.794605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.936 [2024-07-14 10:44:52.794636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.936 qpair failed and we were unable to recover it. 00:36:07.936 [2024-07-14 10:44:52.794852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.936 [2024-07-14 10:44:52.794887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.936 qpair failed and we were unable to recover it. 00:36:07.936 [2024-07-14 10:44:52.795148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.936 [2024-07-14 10:44:52.795179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.936 qpair failed and we were unable to recover it. 00:36:07.936 [2024-07-14 10:44:52.795412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.936 [2024-07-14 10:44:52.795444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.936 qpair failed and we were unable to recover it. 
00:36:07.936 [2024-07-14 10:44:52.795694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.936 [2024-07-14 10:44:52.795724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.936 qpair failed and we were unable to recover it. 00:36:07.936 [2024-07-14 10:44:52.795978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.936 [2024-07-14 10:44:52.796007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.936 qpair failed and we were unable to recover it. 00:36:07.936 [2024-07-14 10:44:52.796327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.936 [2024-07-14 10:44:52.796359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.936 qpair failed and we were unable to recover it. 00:36:07.936 [2024-07-14 10:44:52.796569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.936 [2024-07-14 10:44:52.796600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.936 qpair failed and we were unable to recover it. 00:36:07.936 [2024-07-14 10:44:52.796803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.936 [2024-07-14 10:44:52.796834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.936 qpair failed and we were unable to recover it. 00:36:07.936 [2024-07-14 10:44:52.797149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.936 [2024-07-14 10:44:52.797179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.936 qpair failed and we were unable to recover it. 00:36:07.936 [2024-07-14 10:44:52.797387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.936 [2024-07-14 10:44:52.797418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.936 qpair failed and we were unable to recover it. 00:36:07.936 [2024-07-14 10:44:52.797622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.936 [2024-07-14 10:44:52.797653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.936 qpair failed and we were unable to recover it. 00:36:07.936 [2024-07-14 10:44:52.797910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.936 [2024-07-14 10:44:52.797939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.937 qpair failed and we were unable to recover it. 00:36:07.937 [2024-07-14 10:44:52.798249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.937 [2024-07-14 10:44:52.798281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.937 qpair failed and we were unable to recover it. 
00:36:07.937 [2024-07-14 10:44:52.798556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.937 [2024-07-14 10:44:52.798587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.937 qpair failed and we were unable to recover it. 00:36:07.937 [2024-07-14 10:44:52.798901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.937 [2024-07-14 10:44:52.798932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.937 qpair failed and we were unable to recover it. 00:36:07.937 [2024-07-14 10:44:52.799201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.937 [2024-07-14 10:44:52.799249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.937 qpair failed and we were unable to recover it. 00:36:07.937 [2024-07-14 10:44:52.799472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.937 [2024-07-14 10:44:52.799502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.937 qpair failed and we were unable to recover it. 00:36:07.937 [2024-07-14 10:44:52.799699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.937 [2024-07-14 10:44:52.799729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.937 qpair failed and we were unable to recover it. 00:36:07.937 [2024-07-14 10:44:52.799990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.937 [2024-07-14 10:44:52.800022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.937 qpair failed and we were unable to recover it. 00:36:07.937 [2024-07-14 10:44:52.800214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.937 [2024-07-14 10:44:52.800252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.937 qpair failed and we were unable to recover it. 00:36:07.937 [2024-07-14 10:44:52.800514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.937 [2024-07-14 10:44:52.800545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.937 qpair failed and we were unable to recover it. 00:36:07.937 [2024-07-14 10:44:52.800820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.937 [2024-07-14 10:44:52.800850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.937 qpair failed and we were unable to recover it. 00:36:07.937 [2024-07-14 10:44:52.801084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.937 [2024-07-14 10:44:52.801114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.937 qpair failed and we were unable to recover it. 
00:36:07.937 [2024-07-14 10:44:52.801395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.937 [2024-07-14 10:44:52.801426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.937 qpair failed and we were unable to recover it. 00:36:07.937 [2024-07-14 10:44:52.801637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.937 [2024-07-14 10:44:52.801666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.937 qpair failed and we were unable to recover it. 00:36:07.937 [2024-07-14 10:44:52.801854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.937 [2024-07-14 10:44:52.801884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.937 qpair failed and we were unable to recover it. 00:36:07.937 [2024-07-14 10:44:52.802075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.937 [2024-07-14 10:44:52.802105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.937 qpair failed and we were unable to recover it. 00:36:07.937 [2024-07-14 10:44:52.802411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.937 [2024-07-14 10:44:52.802443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.937 qpair failed and we were unable to recover it. 00:36:07.937 [2024-07-14 10:44:52.802647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.937 [2024-07-14 10:44:52.802677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.937 qpair failed and we were unable to recover it. 00:36:07.937 [2024-07-14 10:44:52.802954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.937 [2024-07-14 10:44:52.802984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.937 qpair failed and we were unable to recover it. 00:36:07.937 [2024-07-14 10:44:52.803191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.937 [2024-07-14 10:44:52.803222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.937 qpair failed and we were unable to recover it. 00:36:07.937 [2024-07-14 10:44:52.803515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.937 [2024-07-14 10:44:52.803545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.937 qpair failed and we were unable to recover it. 00:36:07.937 [2024-07-14 10:44:52.803866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.937 [2024-07-14 10:44:52.803896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.937 qpair failed and we were unable to recover it. 
00:36:07.937 [2024-07-14 10:44:52.804158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.937 [2024-07-14 10:44:52.804188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.937 qpair failed and we were unable to recover it. 00:36:07.937 [2024-07-14 10:44:52.804411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.937 [2024-07-14 10:44:52.804442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.937 qpair failed and we were unable to recover it. 00:36:07.937 [2024-07-14 10:44:52.804700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.937 [2024-07-14 10:44:52.804730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.937 qpair failed and we were unable to recover it. 00:36:07.937 [2024-07-14 10:44:52.805035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.937 [2024-07-14 10:44:52.805065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.937 qpair failed and we were unable to recover it. 00:36:07.937 [2024-07-14 10:44:52.805343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.937 [2024-07-14 10:44:52.805375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.937 qpair failed and we were unable to recover it. 00:36:07.937 [2024-07-14 10:44:52.805638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.937 [2024-07-14 10:44:52.805668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.937 qpair failed and we were unable to recover it. 00:36:07.937 [2024-07-14 10:44:52.805980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.937 [2024-07-14 10:44:52.806010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.937 qpair failed and we were unable to recover it. 00:36:07.937 [2024-07-14 10:44:52.806212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.937 [2024-07-14 10:44:52.806259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.937 qpair failed and we were unable to recover it. 00:36:07.937 [2024-07-14 10:44:52.806545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.937 [2024-07-14 10:44:52.806575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.937 qpair failed and we were unable to recover it. 00:36:07.937 [2024-07-14 10:44:52.806850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.937 [2024-07-14 10:44:52.806880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.937 qpair failed and we were unable to recover it. 
00:36:07.937 [2024-07-14 10:44:52.807085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.937 [2024-07-14 10:44:52.807115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.937 qpair failed and we were unable to recover it. 00:36:07.937 [2024-07-14 10:44:52.807392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.937 [2024-07-14 10:44:52.807425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.937 qpair failed and we were unable to recover it. 00:36:07.937 [2024-07-14 10:44:52.807662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.937 [2024-07-14 10:44:52.807692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.937 qpair failed and we were unable to recover it. 00:36:07.937 [2024-07-14 10:44:52.807891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.937 [2024-07-14 10:44:52.807922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.937 qpair failed and we were unable to recover it. 00:36:07.937 [2024-07-14 10:44:52.808181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.938 [2024-07-14 10:44:52.808211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.938 qpair failed and we were unable to recover it. 00:36:07.938 [2024-07-14 10:44:52.808506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.938 [2024-07-14 10:44:52.808537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.938 qpair failed and we were unable to recover it. 00:36:07.938 [2024-07-14 10:44:52.808824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.938 [2024-07-14 10:44:52.808854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.938 qpair failed and we were unable to recover it. 00:36:07.938 [2024-07-14 10:44:52.809065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.938 [2024-07-14 10:44:52.809095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.938 qpair failed and we were unable to recover it. 00:36:07.938 [2024-07-14 10:44:52.809353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.938 [2024-07-14 10:44:52.809385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.938 qpair failed and we were unable to recover it. 00:36:07.938 [2024-07-14 10:44:52.809619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.938 [2024-07-14 10:44:52.809649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.938 qpair failed and we were unable to recover it. 
00:36:07.938 [2024-07-14 10:44:52.809903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.938 [2024-07-14 10:44:52.809934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.938 qpair failed and we were unable to recover it. 00:36:07.938 [2024-07-14 10:44:52.810245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.938 [2024-07-14 10:44:52.810277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.938 qpair failed and we were unable to recover it. 00:36:07.938 [2024-07-14 10:44:52.810575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.938 [2024-07-14 10:44:52.810605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.938 qpair failed and we were unable to recover it. 00:36:07.938 [2024-07-14 10:44:52.810889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.938 [2024-07-14 10:44:52.810920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.938 qpair failed and we were unable to recover it. 00:36:07.938 [2024-07-14 10:44:52.811207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.938 [2024-07-14 10:44:52.811248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.938 qpair failed and we were unable to recover it. 00:36:07.938 [2024-07-14 10:44:52.811511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.938 [2024-07-14 10:44:52.811542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.938 qpair failed and we were unable to recover it. 00:36:07.938 [2024-07-14 10:44:52.811814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.938 [2024-07-14 10:44:52.811844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.938 qpair failed and we were unable to recover it. 00:36:07.938 [2024-07-14 10:44:52.812103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.938 [2024-07-14 10:44:52.812134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.938 qpair failed and we were unable to recover it. 00:36:07.938 [2024-07-14 10:44:52.812439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.938 [2024-07-14 10:44:52.812472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.938 qpair failed and we were unable to recover it. 00:36:07.938 [2024-07-14 10:44:52.812745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.938 [2024-07-14 10:44:52.812775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.938 qpair failed and we were unable to recover it. 
00:36:07.938 [2024-07-14 10:44:52.813080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.938 [2024-07-14 10:44:52.813110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.938 qpair failed and we were unable to recover it. 00:36:07.938 [2024-07-14 10:44:52.813385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.938 [2024-07-14 10:44:52.813417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.938 qpair failed and we were unable to recover it. 00:36:07.938 [2024-07-14 10:44:52.813709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.938 [2024-07-14 10:44:52.813739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.938 qpair failed and we were unable to recover it. 00:36:07.938 [2024-07-14 10:44:52.813934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.938 [2024-07-14 10:44:52.813965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.938 qpair failed and we were unable to recover it. 00:36:07.938 [2024-07-14 10:44:52.814275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.938 [2024-07-14 10:44:52.814307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.938 qpair failed and we were unable to recover it. 00:36:07.938 [2024-07-14 10:44:52.814612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.938 [2024-07-14 10:44:52.814642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.938 qpair failed and we were unable to recover it. 00:36:07.938 [2024-07-14 10:44:52.814919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.938 [2024-07-14 10:44:52.814949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.938 qpair failed and we were unable to recover it. 00:36:07.938 [2024-07-14 10:44:52.815247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.938 [2024-07-14 10:44:52.815278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.938 qpair failed and we were unable to recover it. 00:36:07.938 [2024-07-14 10:44:52.815560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.938 [2024-07-14 10:44:52.815590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.938 qpair failed and we were unable to recover it. 00:36:07.938 [2024-07-14 10:44:52.815778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.938 [2024-07-14 10:44:52.815807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.938 qpair failed and we were unable to recover it. 
00:36:07.938 [2024-07-14 10:44:52.816067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.938 [2024-07-14 10:44:52.816096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.938 qpair failed and we were unable to recover it. 00:36:07.938 [2024-07-14 10:44:52.816388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.938 [2024-07-14 10:44:52.816420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.938 qpair failed and we were unable to recover it. 00:36:07.938 [2024-07-14 10:44:52.816703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.938 [2024-07-14 10:44:52.816734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.938 qpair failed and we were unable to recover it. 00:36:07.938 [2024-07-14 10:44:52.817020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.938 [2024-07-14 10:44:52.817050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.938 qpair failed and we were unable to recover it. 00:36:07.938 [2024-07-14 10:44:52.817344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.938 [2024-07-14 10:44:52.817375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.938 qpair failed and we were unable to recover it. 00:36:07.938 [2024-07-14 10:44:52.817499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.938 [2024-07-14 10:44:52.817530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.938 qpair failed and we were unable to recover it. 00:36:07.938 [2024-07-14 10:44:52.817784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.938 [2024-07-14 10:44:52.817814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.938 qpair failed and we were unable to recover it. 00:36:07.938 [2024-07-14 10:44:52.818014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.938 [2024-07-14 10:44:52.818054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.938 qpair failed and we were unable to recover it. 00:36:07.938 [2024-07-14 10:44:52.818248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.938 [2024-07-14 10:44:52.818278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.938 qpair failed and we were unable to recover it. 00:36:07.938 [2024-07-14 10:44:52.818414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.938 [2024-07-14 10:44:52.818445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.938 qpair failed and we were unable to recover it. 
00:36:07.938 [2024-07-14 10:44:52.818704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.938 [2024-07-14 10:44:52.818734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.938 qpair failed and we were unable to recover it. 00:36:07.938 [2024-07-14 10:44:52.818993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.938 [2024-07-14 10:44:52.819023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.938 qpair failed and we were unable to recover it. 00:36:07.938 [2024-07-14 10:44:52.819335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.938 [2024-07-14 10:44:52.819367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.938 qpair failed and we were unable to recover it. 00:36:07.938 [2024-07-14 10:44:52.819574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.938 [2024-07-14 10:44:52.819606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.938 qpair failed and we were unable to recover it. 00:36:07.938 [2024-07-14 10:44:52.819810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.938 [2024-07-14 10:44:52.819840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.938 qpair failed and we were unable to recover it. 00:36:07.938 [2024-07-14 10:44:52.820043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.939 [2024-07-14 10:44:52.820073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.939 qpair failed and we were unable to recover it. 00:36:07.939 [2024-07-14 10:44:52.820331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.939 [2024-07-14 10:44:52.820362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.939 qpair failed and we were unable to recover it. 00:36:07.939 [2024-07-14 10:44:52.820633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.939 [2024-07-14 10:44:52.820664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.939 qpair failed and we were unable to recover it. 00:36:07.939 [2024-07-14 10:44:52.820876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.939 [2024-07-14 10:44:52.820907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.939 qpair failed and we were unable to recover it. 00:36:07.939 [2024-07-14 10:44:52.821189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.939 [2024-07-14 10:44:52.821219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.939 qpair failed and we were unable to recover it. 
00:36:07.939 [2024-07-14 10:44:52.821518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.939 [2024-07-14 10:44:52.821549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.939 qpair failed and we were unable to recover it. 00:36:07.939 [2024-07-14 10:44:52.821831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.939 [2024-07-14 10:44:52.821862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.939 qpair failed and we were unable to recover it. 00:36:07.939 [2024-07-14 10:44:52.822157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.939 [2024-07-14 10:44:52.822187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.939 qpair failed and we were unable to recover it. 00:36:07.939 [2024-07-14 10:44:52.822473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.939 [2024-07-14 10:44:52.822504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.939 qpair failed and we were unable to recover it. 00:36:07.939 [2024-07-14 10:44:52.822742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.939 [2024-07-14 10:44:52.822772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.939 qpair failed and we were unable to recover it. 00:36:07.939 [2024-07-14 10:44:52.822982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.939 [2024-07-14 10:44:52.823013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.939 qpair failed and we were unable to recover it. 00:36:07.939 [2024-07-14 10:44:52.823219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.939 [2024-07-14 10:44:52.823258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.939 qpair failed and we were unable to recover it. 00:36:07.939 [2024-07-14 10:44:52.823461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.939 [2024-07-14 10:44:52.823491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.939 qpair failed and we were unable to recover it. 00:36:07.939 [2024-07-14 10:44:52.823771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.939 [2024-07-14 10:44:52.823802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.939 qpair failed and we were unable to recover it. 00:36:07.939 [2024-07-14 10:44:52.824023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.939 [2024-07-14 10:44:52.824053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.939 qpair failed and we were unable to recover it. 
00:36:07.939 [2024-07-14 10:44:52.824247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.939 [2024-07-14 10:44:52.824278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.939 qpair failed and we were unable to recover it. 00:36:07.939 [2024-07-14 10:44:52.824503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.939 [2024-07-14 10:44:52.824534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.939 qpair failed and we were unable to recover it. 00:36:07.939 [2024-07-14 10:44:52.824815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.939 [2024-07-14 10:44:52.824845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.939 qpair failed and we were unable to recover it. 00:36:07.939 [2024-07-14 10:44:52.825109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.939 [2024-07-14 10:44:52.825140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.939 qpair failed and we were unable to recover it. 00:36:07.939 [2024-07-14 10:44:52.825448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.939 [2024-07-14 10:44:52.825480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.939 qpair failed and we were unable to recover it. 00:36:07.939 [2024-07-14 10:44:52.825750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.939 [2024-07-14 10:44:52.825781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.939 qpair failed and we were unable to recover it. 00:36:07.939 [2024-07-14 10:44:52.826080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.939 [2024-07-14 10:44:52.826110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.939 qpair failed and we were unable to recover it. 00:36:07.939 [2024-07-14 10:44:52.826397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.939 [2024-07-14 10:44:52.826429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.939 qpair failed and we were unable to recover it. 00:36:07.939 [2024-07-14 10:44:52.826716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.939 [2024-07-14 10:44:52.826746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.939 qpair failed and we were unable to recover it. 00:36:07.939 [2024-07-14 10:44:52.827034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.939 [2024-07-14 10:44:52.827065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.939 qpair failed and we were unable to recover it. 
00:36:07.939 [2024-07-14 10:44:52.827355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.939 [2024-07-14 10:44:52.827387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.939 qpair failed and we were unable to recover it. 00:36:07.939 [2024-07-14 10:44:52.827672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.939 [2024-07-14 10:44:52.827703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.939 qpair failed and we were unable to recover it. 00:36:07.939 [2024-07-14 10:44:52.827996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.939 [2024-07-14 10:44:52.828027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.939 qpair failed and we were unable to recover it. 00:36:07.939 [2024-07-14 10:44:52.828285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.939 [2024-07-14 10:44:52.828318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.939 qpair failed and we were unable to recover it. 00:36:07.939 [2024-07-14 10:44:52.828525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.939 [2024-07-14 10:44:52.828556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.939 qpair failed and we were unable to recover it. 00:36:07.939 [2024-07-14 10:44:52.828698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.939 [2024-07-14 10:44:52.828729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.939 qpair failed and we were unable to recover it. 00:36:07.939 [2024-07-14 10:44:52.829010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.939 [2024-07-14 10:44:52.829040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.939 qpair failed and we were unable to recover it. 00:36:07.939 [2024-07-14 10:44:52.829263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.939 [2024-07-14 10:44:52.829299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.939 qpair failed and we were unable to recover it. 00:36:07.939 [2024-07-14 10:44:52.829560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.939 [2024-07-14 10:44:52.829591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.939 qpair failed and we were unable to recover it. 00:36:07.939 [2024-07-14 10:44:52.829797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.939 [2024-07-14 10:44:52.829828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.939 qpair failed and we were unable to recover it. 
00:36:07.939 [2024-07-14 10:44:52.830102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.939 [2024-07-14 10:44:52.830132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.939 qpair failed and we were unable to recover it. 00:36:07.939 [2024-07-14 10:44:52.830269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.939 [2024-07-14 10:44:52.830300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.939 qpair failed and we were unable to recover it. 00:36:07.939 [2024-07-14 10:44:52.830488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.939 [2024-07-14 10:44:52.830518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.939 qpair failed and we were unable to recover it. 00:36:07.939 [2024-07-14 10:44:52.830718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.939 [2024-07-14 10:44:52.830749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.939 qpair failed and we were unable to recover it. 00:36:07.939 [2024-07-14 10:44:52.831019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.939 [2024-07-14 10:44:52.831049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.939 qpair failed and we were unable to recover it. 00:36:07.939 [2024-07-14 10:44:52.831357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.939 [2024-07-14 10:44:52.831388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.939 qpair failed and we were unable to recover it. 00:36:07.939 [2024-07-14 10:44:52.831675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.940 [2024-07-14 10:44:52.831705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.940 qpair failed and we were unable to recover it. 00:36:07.940 [2024-07-14 10:44:52.831911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.940 [2024-07-14 10:44:52.831942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.940 qpair failed and we were unable to recover it. 00:36:07.940 [2024-07-14 10:44:52.832162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.940 [2024-07-14 10:44:52.832192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.940 qpair failed and we were unable to recover it. 00:36:07.940 [2024-07-14 10:44:52.832482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.940 [2024-07-14 10:44:52.832513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.940 qpair failed and we were unable to recover it. 
00:36:07.940 [2024-07-14 10:44:52.832799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.940 [2024-07-14 10:44:52.832830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.940 qpair failed and we were unable to recover it. 00:36:07.940 [2024-07-14 10:44:52.833122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.940 [2024-07-14 10:44:52.833154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.940 qpair failed and we were unable to recover it. 00:36:07.940 [2024-07-14 10:44:52.833410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.940 [2024-07-14 10:44:52.833441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.940 qpair failed and we were unable to recover it. 00:36:07.940 [2024-07-14 10:44:52.833746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.940 [2024-07-14 10:44:52.833777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.940 qpair failed and we were unable to recover it. 00:36:07.940 [2024-07-14 10:44:52.834050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.940 [2024-07-14 10:44:52.834080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.940 qpair failed and we were unable to recover it. 00:36:07.940 [2024-07-14 10:44:52.834342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.940 [2024-07-14 10:44:52.834373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.940 qpair failed and we were unable to recover it. 00:36:07.940 [2024-07-14 10:44:52.834673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.940 [2024-07-14 10:44:52.834703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.940 qpair failed and we were unable to recover it. 00:36:07.940 [2024-07-14 10:44:52.834986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.940 [2024-07-14 10:44:52.835017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.940 qpair failed and we were unable to recover it. 00:36:07.940 [2024-07-14 10:44:52.835222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.940 [2024-07-14 10:44:52.835261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.940 qpair failed and we were unable to recover it. 00:36:07.940 [2024-07-14 10:44:52.835524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.940 [2024-07-14 10:44:52.835554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.940 qpair failed and we were unable to recover it. 
00:36:07.940 [2024-07-14 10:44:52.835742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.940 [2024-07-14 10:44:52.835773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.940 qpair failed and we were unable to recover it. 00:36:07.940 [2024-07-14 10:44:52.836054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.940 [2024-07-14 10:44:52.836084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.940 qpair failed and we were unable to recover it. 00:36:07.940 [2024-07-14 10:44:52.836361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.940 [2024-07-14 10:44:52.836393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.940 qpair failed and we were unable to recover it. 00:36:07.940 [2024-07-14 10:44:52.836684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.940 [2024-07-14 10:44:52.836715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.940 qpair failed and we were unable to recover it. 00:36:07.940 [2024-07-14 10:44:52.837002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.940 [2024-07-14 10:44:52.837033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.940 qpair failed and we were unable to recover it. 00:36:07.940 [2024-07-14 10:44:52.837316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.940 [2024-07-14 10:44:52.837347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.940 qpair failed and we were unable to recover it. 00:36:07.940 [2024-07-14 10:44:52.837557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.940 [2024-07-14 10:44:52.837588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.940 qpair failed and we were unable to recover it. 00:36:07.940 [2024-07-14 10:44:52.837802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.940 [2024-07-14 10:44:52.837833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.940 qpair failed and we were unable to recover it. 00:36:07.940 [2024-07-14 10:44:52.838017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.940 [2024-07-14 10:44:52.838047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.940 qpair failed and we were unable to recover it. 00:36:07.940 [2024-07-14 10:44:52.838331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.940 [2024-07-14 10:44:52.838363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.940 qpair failed and we were unable to recover it. 
00:36:07.940 [2024-07-14 10:44:52.838657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.940 [2024-07-14 10:44:52.838687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.940 qpair failed and we were unable to recover it. 00:36:07.940 [2024-07-14 10:44:52.838971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.940 [2024-07-14 10:44:52.839002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.940 qpair failed and we were unable to recover it. 00:36:07.940 [2024-07-14 10:44:52.839188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.940 [2024-07-14 10:44:52.839218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.940 qpair failed and we were unable to recover it. 00:36:07.940 [2024-07-14 10:44:52.839494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.940 [2024-07-14 10:44:52.839524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.940 qpair failed and we were unable to recover it. 00:36:07.940 [2024-07-14 10:44:52.839816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.940 [2024-07-14 10:44:52.839846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.940 qpair failed and we were unable to recover it. 00:36:07.940 [2024-07-14 10:44:52.840080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.940 [2024-07-14 10:44:52.840112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.940 qpair failed and we were unable to recover it. 00:36:07.940 [2024-07-14 10:44:52.840302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.940 [2024-07-14 10:44:52.840333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.940 qpair failed and we were unable to recover it. 00:36:07.940 [2024-07-14 10:44:52.840542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.940 [2024-07-14 10:44:52.840578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.940 qpair failed and we were unable to recover it. 00:36:07.940 [2024-07-14 10:44:52.840835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.940 [2024-07-14 10:44:52.840865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.940 qpair failed and we were unable to recover it. 00:36:07.940 [2024-07-14 10:44:52.840994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.940 [2024-07-14 10:44:52.841024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.940 qpair failed and we were unable to recover it. 
00:36:07.940 [2024-07-14 10:44:52.841309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.940 [2024-07-14 10:44:52.841340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.940 qpair failed and we were unable to recover it. 00:36:07.940 [2024-07-14 10:44:52.841601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.940 [2024-07-14 10:44:52.841632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.940 qpair failed and we were unable to recover it. 00:36:07.940 [2024-07-14 10:44:52.841889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.940 [2024-07-14 10:44:52.841920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.940 qpair failed and we were unable to recover it. 00:36:07.940 [2024-07-14 10:44:52.842236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.940 [2024-07-14 10:44:52.842269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.940 qpair failed and we were unable to recover it. 00:36:07.940 [2024-07-14 10:44:52.842529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.940 [2024-07-14 10:44:52.842559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.940 qpair failed and we were unable to recover it. 00:36:07.940 [2024-07-14 10:44:52.842792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.940 [2024-07-14 10:44:52.842822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.940 qpair failed and we were unable to recover it. 00:36:07.940 [2024-07-14 10:44:52.843031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.940 [2024-07-14 10:44:52.843061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.940 qpair failed and we were unable to recover it. 00:36:07.940 [2024-07-14 10:44:52.843256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.941 [2024-07-14 10:44:52.843289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.941 qpair failed and we were unable to recover it. 00:36:07.941 [2024-07-14 10:44:52.843569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.941 [2024-07-14 10:44:52.843598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.941 qpair failed and we were unable to recover it. 00:36:07.941 [2024-07-14 10:44:52.843822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.941 [2024-07-14 10:44:52.843852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.941 qpair failed and we were unable to recover it. 
00:36:07.941 [2024-07-14 10:44:52.844154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.941 [2024-07-14 10:44:52.844184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.941 qpair failed and we were unable to recover it. 00:36:07.941 [2024-07-14 10:44:52.844463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.941 [2024-07-14 10:44:52.844494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.941 qpair failed and we were unable to recover it. 00:36:07.941 [2024-07-14 10:44:52.844781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.941 [2024-07-14 10:44:52.844811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.941 qpair failed and we were unable to recover it. 00:36:07.941 [2024-07-14 10:44:52.845104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.941 [2024-07-14 10:44:52.845134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.941 qpair failed and we were unable to recover it. 00:36:07.941 [2024-07-14 10:44:52.845321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.941 [2024-07-14 10:44:52.845352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.941 qpair failed and we were unable to recover it. 00:36:07.941 [2024-07-14 10:44:52.845567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.941 [2024-07-14 10:44:52.845597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.941 qpair failed and we were unable to recover it. 00:36:07.941 [2024-07-14 10:44:52.845746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.941 [2024-07-14 10:44:52.845777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.941 qpair failed and we were unable to recover it. 00:36:07.941 [2024-07-14 10:44:52.845961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.941 [2024-07-14 10:44:52.845991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.941 qpair failed and we were unable to recover it. 00:36:07.941 [2024-07-14 10:44:52.846291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.941 [2024-07-14 10:44:52.846322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.941 qpair failed and we were unable to recover it. 00:36:07.941 [2024-07-14 10:44:52.846629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.941 [2024-07-14 10:44:52.846660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.941 qpair failed and we were unable to recover it. 
00:36:07.941 [2024-07-14 10:44:52.846864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.941 [2024-07-14 10:44:52.846894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.941 qpair failed and we were unable to recover it. 00:36:07.941 [2024-07-14 10:44:52.847102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.941 [2024-07-14 10:44:52.847132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.941 qpair failed and we were unable to recover it. 00:36:07.941 [2024-07-14 10:44:52.847413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.941 [2024-07-14 10:44:52.847445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.941 qpair failed and we were unable to recover it. 00:36:07.941 [2024-07-14 10:44:52.847656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.941 [2024-07-14 10:44:52.847686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:07.941 qpair failed and we were unable to recover it. 00:36:07.941 [2024-07-14 10:44:52.847743] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b2db60 (9): Bad file descriptor 00:36:07.941 [2024-07-14 10:44:52.848105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.941 [2024-07-14 10:44:52.848154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.941 qpair failed and we were unable to recover it. 00:36:07.941 [2024-07-14 10:44:52.848392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.941 [2024-07-14 10:44:52.848428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.941 qpair failed and we were unable to recover it. 00:36:07.941 [2024-07-14 10:44:52.848716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.941 [2024-07-14 10:44:52.848748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.941 qpair failed and we were unable to recover it. 00:36:07.941 [2024-07-14 10:44:52.848983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.941 [2024-07-14 10:44:52.849014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.941 qpair failed and we were unable to recover it. 00:36:07.941 [2024-07-14 10:44:52.849305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.941 [2024-07-14 10:44:52.849338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.941 qpair failed and we were unable to recover it. 
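Note on the one entry above that differs from the surrounding connect() errors: "Failed to flush tqpair ... (9): Bad file descriptor" reports errno 9, EBADF on Linux, which a socket call returns once the underlying file descriptor has already been closed or invalidated (here, presumably while the failed qpair was being torn down). The following is a minimal, hypothetical C sketch, not SPDK code, showing only where an errno of 9 comes from; the program and its behavior are an assumption for illustration.

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* Create a TCP socket, then close it so the descriptor becomes invalid. */
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    close(fd);

    char byte = 0;
    if (send(fd, &byte, 1, 0) < 0) {
        /* Any further use of the closed fd fails with errno 9 (EBADF),
         * the same value printed as "(9): Bad file descriptor" above. */
        fprintf(stderr, "send failed, errno = %d (%s)\n",
                errno, strerror(errno));
    }
    return 0;
}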
00:36:07.941 [2024-07-14 10:44:52.849606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.941 [2024-07-14 10:44:52.849638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.941 qpair failed and we were unable to recover it. 00:36:07.941 [2024-07-14 10:44:52.849845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.941 [2024-07-14 10:44:52.849876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.941 qpair failed and we were unable to recover it. 00:36:07.941 [2024-07-14 10:44:52.850159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.941 [2024-07-14 10:44:52.850190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.941 qpair failed and we were unable to recover it. 00:36:07.941 [2024-07-14 10:44:52.850488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.941 [2024-07-14 10:44:52.850521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.941 qpair failed and we were unable to recover it. 00:36:07.941 [2024-07-14 10:44:52.850798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.941 [2024-07-14 10:44:52.850828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.941 qpair failed and we were unable to recover it. 00:36:07.941 [2024-07-14 10:44:52.851059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.941 [2024-07-14 10:44:52.851090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.941 qpair failed and we were unable to recover it. 00:36:07.941 [2024-07-14 10:44:52.851387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.941 [2024-07-14 10:44:52.851419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.941 qpair failed and we were unable to recover it. 00:36:07.941 [2024-07-14 10:44:52.851628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.941 [2024-07-14 10:44:52.851660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.941 qpair failed and we were unable to recover it. 00:36:07.941 [2024-07-14 10:44:52.851805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.941 [2024-07-14 10:44:52.851837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.941 qpair failed and we were unable to recover it. 00:36:07.941 [2024-07-14 10:44:52.852122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.941 [2024-07-14 10:44:52.852153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.941 qpair failed and we were unable to recover it. 
00:36:07.941 [2024-07-14 10:44:52.852417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.941 [2024-07-14 10:44:52.852448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.941 qpair failed and we were unable to recover it. 00:36:07.941 [2024-07-14 10:44:52.852751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.941 [2024-07-14 10:44:52.852783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.941 qpair failed and we were unable to recover it. 00:36:07.941 [2024-07-14 10:44:52.853087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.941 [2024-07-14 10:44:52.853117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.941 qpair failed and we were unable to recover it. 00:36:07.941 [2024-07-14 10:44:52.853394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.941 [2024-07-14 10:44:52.853426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.941 qpair failed and we were unable to recover it. 00:36:07.941 [2024-07-14 10:44:52.853697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.941 [2024-07-14 10:44:52.853729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.941 qpair failed and we were unable to recover it. 00:36:07.941 [2024-07-14 10:44:52.853950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.941 [2024-07-14 10:44:52.853981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.941 qpair failed and we were unable to recover it. 00:36:07.941 [2024-07-14 10:44:52.854244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.941 [2024-07-14 10:44:52.854276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.941 qpair failed and we were unable to recover it. 00:36:07.941 [2024-07-14 10:44:52.854580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.941 [2024-07-14 10:44:52.854611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.941 qpair failed and we were unable to recover it. 00:36:07.941 [2024-07-14 10:44:52.854880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.941 [2024-07-14 10:44:52.854911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.941 qpair failed and we were unable to recover it. 00:36:07.942 [2024-07-14 10:44:52.855138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.942 [2024-07-14 10:44:52.855169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.942 qpair failed and we were unable to recover it. 
00:36:07.942 [2024-07-14 10:44:52.855390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.942 [2024-07-14 10:44:52.855422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.942 qpair failed and we were unable to recover it. 00:36:07.942 [2024-07-14 10:44:52.855707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.942 [2024-07-14 10:44:52.855739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.942 qpair failed and we were unable to recover it. 00:36:07.942 [2024-07-14 10:44:52.856032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.942 [2024-07-14 10:44:52.856063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.942 qpair failed and we were unable to recover it. 00:36:07.942 [2024-07-14 10:44:52.856351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.942 [2024-07-14 10:44:52.856383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.942 qpair failed and we were unable to recover it. 00:36:07.942 [2024-07-14 10:44:52.856663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.942 [2024-07-14 10:44:52.856694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.942 qpair failed and we were unable to recover it. 00:36:07.942 [2024-07-14 10:44:52.856979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.942 [2024-07-14 10:44:52.857010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.942 qpair failed and we were unable to recover it. 00:36:07.942 [2024-07-14 10:44:52.857386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.942 [2024-07-14 10:44:52.857418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.942 qpair failed and we were unable to recover it. 00:36:07.942 [2024-07-14 10:44:52.857646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.942 [2024-07-14 10:44:52.857678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.942 qpair failed and we were unable to recover it. 00:36:07.942 [2024-07-14 10:44:52.857936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.942 [2024-07-14 10:44:52.857968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.942 qpair failed and we were unable to recover it. 00:36:07.942 [2024-07-14 10:44:52.858172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.942 [2024-07-14 10:44:52.858202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.942 qpair failed and we were unable to recover it. 
00:36:07.942 [2024-07-14 10:44:52.858442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.942 [2024-07-14 10:44:52.858475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.942 qpair failed and we were unable to recover it. 00:36:07.942 [2024-07-14 10:44:52.858667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.942 [2024-07-14 10:44:52.858697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.942 qpair failed and we were unable to recover it. 00:36:07.942 [2024-07-14 10:44:52.858906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.942 [2024-07-14 10:44:52.858936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.942 qpair failed and we were unable to recover it. 00:36:07.942 [2024-07-14 10:44:52.859150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.942 [2024-07-14 10:44:52.859182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.942 qpair failed and we were unable to recover it. 00:36:07.942 [2024-07-14 10:44:52.859473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.942 [2024-07-14 10:44:52.859504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.942 qpair failed and we were unable to recover it. 00:36:07.942 [2024-07-14 10:44:52.859652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.942 [2024-07-14 10:44:52.859689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.942 qpair failed and we were unable to recover it. 00:36:07.942 [2024-07-14 10:44:52.859981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.942 [2024-07-14 10:44:52.860012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.942 qpair failed and we were unable to recover it. 00:36:07.942 [2024-07-14 10:44:52.860148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.942 [2024-07-14 10:44:52.860179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.942 qpair failed and we were unable to recover it. 00:36:07.942 [2024-07-14 10:44:52.860374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.942 [2024-07-14 10:44:52.860406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.942 qpair failed and we were unable to recover it. 00:36:07.942 [2024-07-14 10:44:52.860696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.942 [2024-07-14 10:44:52.860727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.942 qpair failed and we were unable to recover it. 
00:36:07.942 [2024-07-14 10:44:52.861007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.942 [2024-07-14 10:44:52.861038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.942 qpair failed and we were unable to recover it. 00:36:07.942 [2024-07-14 10:44:52.861266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.942 [2024-07-14 10:44:52.861298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.942 qpair failed and we were unable to recover it. 00:36:07.942 [2024-07-14 10:44:52.861593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.942 [2024-07-14 10:44:52.861624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.942 qpair failed and we were unable to recover it. 00:36:07.942 [2024-07-14 10:44:52.861904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.942 [2024-07-14 10:44:52.861935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.942 qpair failed and we were unable to recover it. 00:36:07.942 [2024-07-14 10:44:52.862076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.942 [2024-07-14 10:44:52.862106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.942 qpair failed and we were unable to recover it. 00:36:07.942 [2024-07-14 10:44:52.862408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.942 [2024-07-14 10:44:52.862440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.942 qpair failed and we were unable to recover it. 00:36:07.942 [2024-07-14 10:44:52.862701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.942 [2024-07-14 10:44:52.862731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.942 qpair failed and we were unable to recover it. 00:36:07.942 [2024-07-14 10:44:52.863035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.942 [2024-07-14 10:44:52.863065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.942 qpair failed and we were unable to recover it. 00:36:07.942 [2024-07-14 10:44:52.863380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.942 [2024-07-14 10:44:52.863413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.942 qpair failed and we were unable to recover it. 00:36:07.942 [2024-07-14 10:44:52.863702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.942 [2024-07-14 10:44:52.863736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.942 qpair failed and we were unable to recover it. 
00:36:07.942 [2024-07-14 10:44:52.864023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.942 [2024-07-14 10:44:52.864054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.942 qpair failed and we were unable to recover it. 00:36:07.942 [2024-07-14 10:44:52.864263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.942 [2024-07-14 10:44:52.864296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.942 qpair failed and we were unable to recover it. 00:36:07.942 [2024-07-14 10:44:52.864593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.942 [2024-07-14 10:44:52.864624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.942 qpair failed and we were unable to recover it. 00:36:07.942 [2024-07-14 10:44:52.864775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.942 [2024-07-14 10:44:52.864805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.942 qpair failed and we were unable to recover it. 00:36:07.942 [2024-07-14 10:44:52.865018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.943 [2024-07-14 10:44:52.865049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.943 qpair failed and we were unable to recover it. 00:36:07.943 [2024-07-14 10:44:52.865276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.943 [2024-07-14 10:44:52.865309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.943 qpair failed and we were unable to recover it. 00:36:07.943 [2024-07-14 10:44:52.865596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.943 [2024-07-14 10:44:52.865626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.943 qpair failed and we were unable to recover it. 00:36:07.943 [2024-07-14 10:44:52.865915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.943 [2024-07-14 10:44:52.865946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.943 qpair failed and we were unable to recover it. 00:36:07.943 [2024-07-14 10:44:52.866176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.943 [2024-07-14 10:44:52.866206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.943 qpair failed and we were unable to recover it. 00:36:07.943 [2024-07-14 10:44:52.866488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.943 [2024-07-14 10:44:52.866518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.943 qpair failed and we were unable to recover it. 
00:36:07.943 [2024-07-14 10:44:52.866724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.943 [2024-07-14 10:44:52.866754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.943 qpair failed and we were unable to recover it. 00:36:07.943 [2024-07-14 10:44:52.867032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.943 [2024-07-14 10:44:52.867063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.943 qpair failed and we were unable to recover it. 00:36:07.943 [2024-07-14 10:44:52.867271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.943 [2024-07-14 10:44:52.867309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.943 qpair failed and we were unable to recover it. 00:36:07.943 [2024-07-14 10:44:52.867577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.943 [2024-07-14 10:44:52.867607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.943 qpair failed and we were unable to recover it. 00:36:07.943 [2024-07-14 10:44:52.867864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.943 [2024-07-14 10:44:52.867895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.943 qpair failed and we were unable to recover it. 00:36:07.943 [2024-07-14 10:44:52.868178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.943 [2024-07-14 10:44:52.868209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.943 qpair failed and we were unable to recover it. 00:36:07.943 [2024-07-14 10:44:52.868411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.943 [2024-07-14 10:44:52.868443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.943 qpair failed and we were unable to recover it. 00:36:07.943 [2024-07-14 10:44:52.868633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.943 [2024-07-14 10:44:52.868665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.943 qpair failed and we were unable to recover it. 00:36:07.943 [2024-07-14 10:44:52.868919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.943 [2024-07-14 10:44:52.868949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.943 qpair failed and we were unable to recover it. 00:36:07.943 [2024-07-14 10:44:52.869248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.943 [2024-07-14 10:44:52.869281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.943 qpair failed and we were unable to recover it. 
00:36:07.943 [2024-07-14 10:44:52.869518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.943 [2024-07-14 10:44:52.869548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.943 qpair failed and we were unable to recover it. 00:36:07.943 [2024-07-14 10:44:52.869814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.943 [2024-07-14 10:44:52.869845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.943 qpair failed and we were unable to recover it. 00:36:07.943 [2024-07-14 10:44:52.870118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.943 [2024-07-14 10:44:52.870149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.943 qpair failed and we were unable to recover it. 00:36:07.943 [2024-07-14 10:44:52.870309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.943 [2024-07-14 10:44:52.870342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.943 qpair failed and we were unable to recover it. 00:36:07.943 [2024-07-14 10:44:52.870533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.943 [2024-07-14 10:44:52.870565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.943 qpair failed and we were unable to recover it. 00:36:07.943 [2024-07-14 10:44:52.870797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.943 [2024-07-14 10:44:52.870828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.943 qpair failed and we were unable to recover it. 00:36:07.943 [2024-07-14 10:44:52.871117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.943 [2024-07-14 10:44:52.871147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.943 qpair failed and we were unable to recover it. 00:36:07.943 [2024-07-14 10:44:52.871406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.943 [2024-07-14 10:44:52.871439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.943 qpair failed and we were unable to recover it. 00:36:07.943 [2024-07-14 10:44:52.871753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.943 [2024-07-14 10:44:52.871784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.943 qpair failed and we were unable to recover it. 00:36:07.943 [2024-07-14 10:44:52.872044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.943 [2024-07-14 10:44:52.872075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.943 qpair failed and we were unable to recover it. 
00:36:07.943 [2024-07-14 10:44:52.872277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.943 [2024-07-14 10:44:52.872309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.943 qpair failed and we were unable to recover it. 00:36:07.943 [2024-07-14 10:44:52.872543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.943 [2024-07-14 10:44:52.872574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.943 qpair failed and we were unable to recover it. 00:36:07.943 [2024-07-14 10:44:52.872829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.943 [2024-07-14 10:44:52.872859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.943 qpair failed and we were unable to recover it. 00:36:07.943 [2024-07-14 10:44:52.873130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.943 [2024-07-14 10:44:52.873161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.943 qpair failed and we were unable to recover it. 00:36:07.943 [2024-07-14 10:44:52.873470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.943 [2024-07-14 10:44:52.873503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.943 qpair failed and we were unable to recover it. 00:36:07.943 [2024-07-14 10:44:52.873814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.943 [2024-07-14 10:44:52.873846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.943 qpair failed and we were unable to recover it. 00:36:07.943 [2024-07-14 10:44:52.874126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.943 [2024-07-14 10:44:52.874157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.943 qpair failed and we were unable to recover it. 00:36:07.943 [2024-07-14 10:44:52.874369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.943 [2024-07-14 10:44:52.874402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.943 qpair failed and we were unable to recover it. 00:36:07.943 [2024-07-14 10:44:52.874642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.943 [2024-07-14 10:44:52.874673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.943 qpair failed and we were unable to recover it. 00:36:07.943 [2024-07-14 10:44:52.874880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.943 [2024-07-14 10:44:52.874911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.943 qpair failed and we were unable to recover it. 
00:36:07.943 [2024-07-14 10:44:52.875241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.943 [2024-07-14 10:44:52.875274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.943 qpair failed and we were unable to recover it. 00:36:07.943 [2024-07-14 10:44:52.875489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.943 [2024-07-14 10:44:52.875520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.943 qpair failed and we were unable to recover it. 00:36:07.943 [2024-07-14 10:44:52.875781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.943 [2024-07-14 10:44:52.875812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.943 qpair failed and we were unable to recover it. 00:36:07.943 [2024-07-14 10:44:52.876035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.943 [2024-07-14 10:44:52.876066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.943 qpair failed and we were unable to recover it. 00:36:07.943 [2024-07-14 10:44:52.876326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.943 [2024-07-14 10:44:52.876359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.943 qpair failed and we were unable to recover it. 00:36:07.943 [2024-07-14 10:44:52.876610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.943 [2024-07-14 10:44:52.876642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.943 qpair failed and we were unable to recover it. 00:36:07.944 [2024-07-14 10:44:52.876963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.944 [2024-07-14 10:44:52.876994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.944 qpair failed and we were unable to recover it. 00:36:07.944 [2024-07-14 10:44:52.877317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.944 [2024-07-14 10:44:52.877348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.944 qpair failed and we were unable to recover it. 00:36:07.944 [2024-07-14 10:44:52.877608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.944 [2024-07-14 10:44:52.877640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.944 qpair failed and we were unable to recover it. 00:36:07.944 [2024-07-14 10:44:52.877799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.944 [2024-07-14 10:44:52.877830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.944 qpair failed and we were unable to recover it. 
00:36:07.944 [2024-07-14 10:44:52.878063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.944 [2024-07-14 10:44:52.878094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.944 qpair failed and we were unable to recover it. 00:36:07.944 [2024-07-14 10:44:52.878299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.944 [2024-07-14 10:44:52.878331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.944 qpair failed and we were unable to recover it. 00:36:07.944 [2024-07-14 10:44:52.878639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.944 [2024-07-14 10:44:52.878671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:07.944 qpair failed and we were unable to recover it. 00:36:08.222 [2024-07-14 10:44:52.879046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.222 [2024-07-14 10:44:52.879120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.222 qpair failed and we were unable to recover it. 00:36:08.222 [2024-07-14 10:44:52.879370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.222 [2024-07-14 10:44:52.879408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.222 qpair failed and we were unable to recover it. 00:36:08.222 [2024-07-14 10:44:52.879710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.222 [2024-07-14 10:44:52.879742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.222 qpair failed and we were unable to recover it. 00:36:08.222 [2024-07-14 10:44:52.879989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.222 [2024-07-14 10:44:52.880019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.222 qpair failed and we were unable to recover it. 00:36:08.222 [2024-07-14 10:44:52.880288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.222 [2024-07-14 10:44:52.880319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.222 qpair failed and we were unable to recover it. 00:36:08.222 [2024-07-14 10:44:52.880582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.222 [2024-07-14 10:44:52.880613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.222 qpair failed and we were unable to recover it. 00:36:08.222 [2024-07-14 10:44:52.880774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.222 [2024-07-14 10:44:52.880804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.222 qpair failed and we were unable to recover it. 
00:36:08.222 [2024-07-14 10:44:52.881005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.222 [2024-07-14 10:44:52.881035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.222 qpair failed and we were unable to recover it. 00:36:08.222 [2024-07-14 10:44:52.881335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.222 [2024-07-14 10:44:52.881367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.222 qpair failed and we were unable to recover it. 00:36:08.222 [2024-07-14 10:44:52.881571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.222 [2024-07-14 10:44:52.881601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.222 qpair failed and we were unable to recover it. 00:36:08.222 [2024-07-14 10:44:52.881860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.222 [2024-07-14 10:44:52.881891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.222 qpair failed and we were unable to recover it. 00:36:08.222 [2024-07-14 10:44:52.882041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.222 [2024-07-14 10:44:52.882072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.222 qpair failed and we were unable to recover it. 00:36:08.222 [2024-07-14 10:44:52.882312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.222 [2024-07-14 10:44:52.882344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.222 qpair failed and we were unable to recover it. 00:36:08.222 [2024-07-14 10:44:52.882629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.222 [2024-07-14 10:44:52.882669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.222 qpair failed and we were unable to recover it. 00:36:08.222 [2024-07-14 10:44:52.882961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.222 [2024-07-14 10:44:52.882991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.222 qpair failed and we were unable to recover it. 00:36:08.222 [2024-07-14 10:44:52.883243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.222 [2024-07-14 10:44:52.883274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.222 qpair failed and we were unable to recover it. 00:36:08.222 [2024-07-14 10:44:52.883408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.222 [2024-07-14 10:44:52.883438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.222 qpair failed and we were unable to recover it. 
00:36:08.222 [2024-07-14 10:44:52.883657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.222 [2024-07-14 10:44:52.883688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.222 qpair failed and we were unable to recover it. 00:36:08.222 [2024-07-14 10:44:52.883873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.222 [2024-07-14 10:44:52.883903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.222 qpair failed and we were unable to recover it. 00:36:08.222 [2024-07-14 10:44:52.884179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.222 [2024-07-14 10:44:52.884210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.222 qpair failed and we were unable to recover it. 00:36:08.222 [2024-07-14 10:44:52.884495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.222 [2024-07-14 10:44:52.884526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.222 qpair failed and we were unable to recover it. 00:36:08.222 [2024-07-14 10:44:52.884744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.222 [2024-07-14 10:44:52.884774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.222 qpair failed and we were unable to recover it. 00:36:08.222 [2024-07-14 10:44:52.885032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.222 [2024-07-14 10:44:52.885062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.222 qpair failed and we were unable to recover it. 00:36:08.222 [2024-07-14 10:44:52.885332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.222 [2024-07-14 10:44:52.885364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.222 qpair failed and we were unable to recover it. 00:36:08.223 [2024-07-14 10:44:52.885665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.223 [2024-07-14 10:44:52.885695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.223 qpair failed and we were unable to recover it. 00:36:08.223 [2024-07-14 10:44:52.885971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.223 [2024-07-14 10:44:52.886000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.223 qpair failed and we were unable to recover it. 00:36:08.223 [2024-07-14 10:44:52.886214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.223 [2024-07-14 10:44:52.886256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.223 qpair failed and we were unable to recover it. 
00:36:08.223 [2024-07-14 10:44:52.886550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.223 [2024-07-14 10:44:52.886581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.223 qpair failed and we were unable to recover it. 00:36:08.223 [2024-07-14 10:44:52.886859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.223 [2024-07-14 10:44:52.886890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.223 qpair failed and we were unable to recover it. 00:36:08.223 [2024-07-14 10:44:52.887178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.223 [2024-07-14 10:44:52.887208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.223 qpair failed and we were unable to recover it. 00:36:08.223 [2024-07-14 10:44:52.887451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.223 [2024-07-14 10:44:52.887482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.223 qpair failed and we were unable to recover it. 00:36:08.223 [2024-07-14 10:44:52.887694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.223 [2024-07-14 10:44:52.887724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.223 qpair failed and we were unable to recover it. 00:36:08.223 [2024-07-14 10:44:52.887986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.223 [2024-07-14 10:44:52.888017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.223 qpair failed and we were unable to recover it. 00:36:08.223 [2024-07-14 10:44:52.888139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.223 [2024-07-14 10:44:52.888169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.223 qpair failed and we were unable to recover it. 00:36:08.223 [2024-07-14 10:44:52.888458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.223 [2024-07-14 10:44:52.888490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.223 qpair failed and we were unable to recover it. 00:36:08.223 [2024-07-14 10:44:52.888676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.223 [2024-07-14 10:44:52.888707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.223 qpair failed and we were unable to recover it. 00:36:08.223 [2024-07-14 10:44:52.888976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.223 [2024-07-14 10:44:52.889006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.223 qpair failed and we were unable to recover it. 
00:36:08.223 [2024-07-14 10:44:52.889262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.223 [2024-07-14 10:44:52.889294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.223 qpair failed and we were unable to recover it. 00:36:08.223 [2024-07-14 10:44:52.889509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.223 [2024-07-14 10:44:52.889540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.223 qpair failed and we were unable to recover it. 00:36:08.223 [2024-07-14 10:44:52.889806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.223 [2024-07-14 10:44:52.889836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.223 qpair failed and we were unable to recover it. 00:36:08.223 [2024-07-14 10:44:52.890063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.223 [2024-07-14 10:44:52.890094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.223 qpair failed and we were unable to recover it. 00:36:08.223 [2024-07-14 10:44:52.890406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.223 [2024-07-14 10:44:52.890438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.223 qpair failed and we were unable to recover it. 00:36:08.223 [2024-07-14 10:44:52.890704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.223 [2024-07-14 10:44:52.890734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.223 qpair failed and we were unable to recover it. 00:36:08.223 [2024-07-14 10:44:52.890936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.223 [2024-07-14 10:44:52.890967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.223 qpair failed and we were unable to recover it. 00:36:08.223 [2024-07-14 10:44:52.891243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.223 [2024-07-14 10:44:52.891274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.223 qpair failed and we were unable to recover it. 00:36:08.223 [2024-07-14 10:44:52.891556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.223 [2024-07-14 10:44:52.891586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.223 qpair failed and we were unable to recover it. 00:36:08.223 [2024-07-14 10:44:52.891712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.223 [2024-07-14 10:44:52.891742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.223 qpair failed and we were unable to recover it. 
00:36:08.223 [2024-07-14 10:44:52.892056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.223 [2024-07-14 10:44:52.892086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.223 qpair failed and we were unable to recover it. 00:36:08.223 [2024-07-14 10:44:52.892345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.223 [2024-07-14 10:44:52.892376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.223 qpair failed and we were unable to recover it. 00:36:08.223 [2024-07-14 10:44:52.892689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.223 [2024-07-14 10:44:52.892720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.223 qpair failed and we were unable to recover it. 00:36:08.223 [2024-07-14 10:44:52.892940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.223 [2024-07-14 10:44:52.892970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.223 qpair failed and we were unable to recover it. 00:36:08.223 [2024-07-14 10:44:52.893263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.223 [2024-07-14 10:44:52.893295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.223 qpair failed and we were unable to recover it. 00:36:08.223 [2024-07-14 10:44:52.893512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.223 [2024-07-14 10:44:52.893543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.223 qpair failed and we were unable to recover it. 00:36:08.223 [2024-07-14 10:44:52.893769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.223 [2024-07-14 10:44:52.893805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.223 qpair failed and we were unable to recover it. 00:36:08.223 [2024-07-14 10:44:52.894012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.223 [2024-07-14 10:44:52.894041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.223 qpair failed and we were unable to recover it. 00:36:08.223 [2024-07-14 10:44:52.894305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.223 [2024-07-14 10:44:52.894337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.223 qpair failed and we were unable to recover it. 00:36:08.223 [2024-07-14 10:44:52.894624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.223 [2024-07-14 10:44:52.894655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.223 qpair failed and we were unable to recover it. 
00:36:08.223 [2024-07-14 10:44:52.894861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.223 [2024-07-14 10:44:52.894891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.223 qpair failed and we were unable to recover it. 00:36:08.223 [2024-07-14 10:44:52.895146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.223 [2024-07-14 10:44:52.895177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.223 qpair failed and we were unable to recover it. 00:36:08.223 [2024-07-14 10:44:52.895448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.223 [2024-07-14 10:44:52.895480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.223 qpair failed and we were unable to recover it. 00:36:08.223 [2024-07-14 10:44:52.895714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.223 [2024-07-14 10:44:52.895744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.223 qpair failed and we were unable to recover it. 00:36:08.223 [2024-07-14 10:44:52.896001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.223 [2024-07-14 10:44:52.896030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.223 qpair failed and we were unable to recover it. 00:36:08.223 [2024-07-14 10:44:52.896304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.223 [2024-07-14 10:44:52.896335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.223 qpair failed and we were unable to recover it. 00:36:08.223 [2024-07-14 10:44:52.896546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.223 [2024-07-14 10:44:52.896577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.223 qpair failed and we were unable to recover it. 00:36:08.223 [2024-07-14 10:44:52.896785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.224 [2024-07-14 10:44:52.896815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.224 qpair failed and we were unable to recover it. 00:36:08.224 [2024-07-14 10:44:52.897125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.224 [2024-07-14 10:44:52.897156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.224 qpair failed and we were unable to recover it. 00:36:08.224 [2024-07-14 10:44:52.897427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.224 [2024-07-14 10:44:52.897457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.224 qpair failed and we were unable to recover it. 
00:36:08.224 [2024-07-14 10:44:52.897749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.224 [2024-07-14 10:44:52.897779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.224 qpair failed and we were unable to recover it. 00:36:08.224 [2024-07-14 10:44:52.898001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.224 [2024-07-14 10:44:52.898031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.224 qpair failed and we were unable to recover it. 00:36:08.224 [2024-07-14 10:44:52.898222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.224 [2024-07-14 10:44:52.898264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.224 qpair failed and we were unable to recover it. 00:36:08.224 [2024-07-14 10:44:52.898537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.224 [2024-07-14 10:44:52.898567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.224 qpair failed and we were unable to recover it. 00:36:08.224 [2024-07-14 10:44:52.898836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.224 [2024-07-14 10:44:52.898866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.224 qpair failed and we were unable to recover it. 00:36:08.224 [2024-07-14 10:44:52.899167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.224 [2024-07-14 10:44:52.899197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.224 qpair failed and we were unable to recover it. 00:36:08.224 [2024-07-14 10:44:52.899510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.224 [2024-07-14 10:44:52.899542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.224 qpair failed and we were unable to recover it. 00:36:08.224 [2024-07-14 10:44:52.899812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.224 [2024-07-14 10:44:52.899842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.224 qpair failed and we were unable to recover it. 00:36:08.224 [2024-07-14 10:44:52.900117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.224 [2024-07-14 10:44:52.900147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.224 qpair failed and we were unable to recover it. 00:36:08.224 [2024-07-14 10:44:52.900418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.224 [2024-07-14 10:44:52.900449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.224 qpair failed and we were unable to recover it. 
00:36:08.224 [2024-07-14 10:44:52.900711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.224 [2024-07-14 10:44:52.900742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.224 qpair failed and we were unable to recover it. 00:36:08.224 [2024-07-14 10:44:52.901047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.224 [2024-07-14 10:44:52.901077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.224 qpair failed and we were unable to recover it. 00:36:08.224 [2024-07-14 10:44:52.901292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.224 [2024-07-14 10:44:52.901324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.224 qpair failed and we were unable to recover it. 00:36:08.224 [2024-07-14 10:44:52.901618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.224 [2024-07-14 10:44:52.901650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.224 qpair failed and we were unable to recover it. 00:36:08.224 [2024-07-14 10:44:52.901956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.224 [2024-07-14 10:44:52.901986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.224 qpair failed and we were unable to recover it. 00:36:08.224 [2024-07-14 10:44:52.902262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.224 [2024-07-14 10:44:52.902294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.224 qpair failed and we were unable to recover it. 00:36:08.224 [2024-07-14 10:44:52.902584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.224 [2024-07-14 10:44:52.902614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.224 qpair failed and we were unable to recover it. 00:36:08.224 [2024-07-14 10:44:52.902827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.224 [2024-07-14 10:44:52.902858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.224 qpair failed and we were unable to recover it. 00:36:08.224 [2024-07-14 10:44:52.903143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.224 [2024-07-14 10:44:52.903173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.224 qpair failed and we were unable to recover it. 00:36:08.224 [2024-07-14 10:44:52.903315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.224 [2024-07-14 10:44:52.903346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.224 qpair failed and we were unable to recover it. 
00:36:08.224 [2024-07-14 10:44:52.903626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.224 [2024-07-14 10:44:52.903655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.224 qpair failed and we were unable to recover it. 00:36:08.224 [2024-07-14 10:44:52.903851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.224 [2024-07-14 10:44:52.903881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.224 qpair failed and we were unable to recover it. 00:36:08.224 [2024-07-14 10:44:52.904149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.224 [2024-07-14 10:44:52.904179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.224 qpair failed and we were unable to recover it. 00:36:08.224 [2024-07-14 10:44:52.904442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.224 [2024-07-14 10:44:52.904474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.224 qpair failed and we were unable to recover it. 00:36:08.224 [2024-07-14 10:44:52.904691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.224 [2024-07-14 10:44:52.904721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.224 qpair failed and we were unable to recover it. 00:36:08.224 [2024-07-14 10:44:52.904996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.224 [2024-07-14 10:44:52.905027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.224 qpair failed and we were unable to recover it. 00:36:08.224 [2024-07-14 10:44:52.905168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.224 [2024-07-14 10:44:52.905198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.224 qpair failed and we were unable to recover it. 00:36:08.224 [2024-07-14 10:44:52.905516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.224 [2024-07-14 10:44:52.905547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.224 qpair failed and we were unable to recover it. 00:36:08.224 [2024-07-14 10:44:52.905839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.224 [2024-07-14 10:44:52.905869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.224 qpair failed and we were unable to recover it. 00:36:08.224 [2024-07-14 10:44:52.906158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.224 [2024-07-14 10:44:52.906189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.224 qpair failed and we were unable to recover it. 
00:36:08.224 [2024-07-14 10:44:52.906482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.224 [2024-07-14 10:44:52.906513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.224 qpair failed and we were unable to recover it. 00:36:08.224 [2024-07-14 10:44:52.906797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.224 [2024-07-14 10:44:52.906828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.224 qpair failed and we were unable to recover it. 00:36:08.224 [2024-07-14 10:44:52.907090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.224 [2024-07-14 10:44:52.907120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.224 qpair failed and we were unable to recover it. 00:36:08.224 [2024-07-14 10:44:52.907327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.224 [2024-07-14 10:44:52.907359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.224 qpair failed and we were unable to recover it. 00:36:08.224 [2024-07-14 10:44:52.907568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.224 [2024-07-14 10:44:52.907598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.224 qpair failed and we were unable to recover it. 00:36:08.224 [2024-07-14 10:44:52.907904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.224 [2024-07-14 10:44:52.907933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.224 qpair failed and we were unable to recover it. 00:36:08.224 [2024-07-14 10:44:52.908208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.224 [2024-07-14 10:44:52.908247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.224 qpair failed and we were unable to recover it. 00:36:08.224 [2024-07-14 10:44:52.908461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.224 [2024-07-14 10:44:52.908491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.224 qpair failed and we were unable to recover it. 00:36:08.224 [2024-07-14 10:44:52.908768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.225 [2024-07-14 10:44:52.908797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.225 qpair failed and we were unable to recover it. 00:36:08.225 [2024-07-14 10:44:52.909079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.225 [2024-07-14 10:44:52.909110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.225 qpair failed and we were unable to recover it. 
00:36:08.225 [2024-07-14 10:44:52.909330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.225 [2024-07-14 10:44:52.909360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.225 qpair failed and we were unable to recover it. 00:36:08.225 [2024-07-14 10:44:52.909642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.225 [2024-07-14 10:44:52.909671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.225 qpair failed and we were unable to recover it. 00:36:08.225 [2024-07-14 10:44:52.909917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.225 [2024-07-14 10:44:52.909947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.225 qpair failed and we were unable to recover it. 00:36:08.225 [2024-07-14 10:44:52.910090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.225 [2024-07-14 10:44:52.910120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.225 qpair failed and we were unable to recover it. 00:36:08.225 [2024-07-14 10:44:52.910330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.225 [2024-07-14 10:44:52.910361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.225 qpair failed and we were unable to recover it. 00:36:08.225 [2024-07-14 10:44:52.910565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.225 [2024-07-14 10:44:52.910594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.225 qpair failed and we were unable to recover it. 00:36:08.225 [2024-07-14 10:44:52.910858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.225 [2024-07-14 10:44:52.910888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.225 qpair failed and we were unable to recover it. 00:36:08.225 [2024-07-14 10:44:52.911161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.225 [2024-07-14 10:44:52.911191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.225 qpair failed and we were unable to recover it. 00:36:08.225 [2024-07-14 10:44:52.911486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.225 [2024-07-14 10:44:52.911518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.225 qpair failed and we were unable to recover it. 00:36:08.225 [2024-07-14 10:44:52.911729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.225 [2024-07-14 10:44:52.911759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.225 qpair failed and we were unable to recover it. 
00:36:08.225 [2024-07-14 10:44:52.911905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.225 [2024-07-14 10:44:52.911935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.225 qpair failed and we were unable to recover it. 00:36:08.225 [2024-07-14 10:44:52.912191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.225 [2024-07-14 10:44:52.912222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.225 qpair failed and we were unable to recover it. 00:36:08.225 [2024-07-14 10:44:52.912450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.225 [2024-07-14 10:44:52.912481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.225 qpair failed and we were unable to recover it. 00:36:08.225 [2024-07-14 10:44:52.912786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.225 [2024-07-14 10:44:52.912823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.225 qpair failed and we were unable to recover it. 00:36:08.225 [2024-07-14 10:44:52.913036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.225 [2024-07-14 10:44:52.913066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.225 qpair failed and we were unable to recover it. 00:36:08.225 [2024-07-14 10:44:52.913289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.225 [2024-07-14 10:44:52.913321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.225 qpair failed and we were unable to recover it. 00:36:08.225 [2024-07-14 10:44:52.913601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.225 [2024-07-14 10:44:52.913631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.225 qpair failed and we were unable to recover it. 00:36:08.225 [2024-07-14 10:44:52.913836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.225 [2024-07-14 10:44:52.913866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.225 qpair failed and we were unable to recover it. 00:36:08.225 [2024-07-14 10:44:52.914081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.225 [2024-07-14 10:44:52.914112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.225 qpair failed and we were unable to recover it. 00:36:08.225 [2024-07-14 10:44:52.914401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.225 [2024-07-14 10:44:52.914432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.225 qpair failed and we were unable to recover it. 
00:36:08.225 [2024-07-14 10:44:52.914721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.225 [2024-07-14 10:44:52.914750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.225 qpair failed and we were unable to recover it. 00:36:08.225 [2024-07-14 10:44:52.915011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.225 [2024-07-14 10:44:52.915042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.225 qpair failed and we were unable to recover it. 00:36:08.225 [2024-07-14 10:44:52.915342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.225 [2024-07-14 10:44:52.915373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.225 qpair failed and we were unable to recover it. 00:36:08.225 [2024-07-14 10:44:52.915655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.225 [2024-07-14 10:44:52.915684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.225 qpair failed and we were unable to recover it. 00:36:08.225 [2024-07-14 10:44:52.915977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.225 [2024-07-14 10:44:52.916007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.225 qpair failed and we were unable to recover it. 00:36:08.225 [2024-07-14 10:44:52.916296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.225 [2024-07-14 10:44:52.916327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.225 qpair failed and we were unable to recover it. 00:36:08.225 [2024-07-14 10:44:52.916612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.225 [2024-07-14 10:44:52.916643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.225 qpair failed and we were unable to recover it. 00:36:08.225 [2024-07-14 10:44:52.916957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.225 [2024-07-14 10:44:52.916986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.225 qpair failed and we were unable to recover it. 00:36:08.225 [2024-07-14 10:44:52.917252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.225 [2024-07-14 10:44:52.917284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.225 qpair failed and we were unable to recover it. 00:36:08.225 [2024-07-14 10:44:52.917586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.225 [2024-07-14 10:44:52.917618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.225 qpair failed and we were unable to recover it. 
00:36:08.225 [2024-07-14 10:44:52.917918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.225 [2024-07-14 10:44:52.917948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.225 qpair failed and we were unable to recover it. 00:36:08.225 [2024-07-14 10:44:52.918234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.225 [2024-07-14 10:44:52.918266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.225 qpair failed and we were unable to recover it. 00:36:08.225 [2024-07-14 10:44:52.918525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.225 [2024-07-14 10:44:52.918556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.225 qpair failed and we were unable to recover it. 00:36:08.225 [2024-07-14 10:44:52.918858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.225 [2024-07-14 10:44:52.918888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.225 qpair failed and we were unable to recover it. 00:36:08.225 [2024-07-14 10:44:52.919024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.225 [2024-07-14 10:44:52.919054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.225 qpair failed and we were unable to recover it. 00:36:08.225 [2024-07-14 10:44:52.919312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.225 [2024-07-14 10:44:52.919343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.225 qpair failed and we were unable to recover it. 00:36:08.225 [2024-07-14 10:44:52.919599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.225 [2024-07-14 10:44:52.919630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.225 qpair failed and we were unable to recover it. 00:36:08.225 [2024-07-14 10:44:52.919860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.225 [2024-07-14 10:44:52.919891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.225 qpair failed and we were unable to recover it. 00:36:08.225 [2024-07-14 10:44:52.920159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.225 [2024-07-14 10:44:52.920189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.226 qpair failed and we were unable to recover it. 00:36:08.226 [2024-07-14 10:44:52.920416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.226 [2024-07-14 10:44:52.920446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.226 qpair failed and we were unable to recover it. 
00:36:08.226 [2024-07-14 10:44:52.920710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.226 [2024-07-14 10:44:52.920740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.226 qpair failed and we were unable to recover it. 00:36:08.226 [2024-07-14 10:44:52.921009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.226 [2024-07-14 10:44:52.921038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.226 qpair failed and we were unable to recover it. 00:36:08.226 [2024-07-14 10:44:52.921241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.226 [2024-07-14 10:44:52.921272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.226 qpair failed and we were unable to recover it. 00:36:08.226 [2024-07-14 10:44:52.921541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.226 [2024-07-14 10:44:52.921571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.226 qpair failed and we were unable to recover it. 00:36:08.226 [2024-07-14 10:44:52.921847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.226 [2024-07-14 10:44:52.921877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.226 qpair failed and we were unable to recover it. 00:36:08.226 [2024-07-14 10:44:52.922116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.226 [2024-07-14 10:44:52.922146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.226 qpair failed and we were unable to recover it. 00:36:08.226 [2024-07-14 10:44:52.922421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.226 [2024-07-14 10:44:52.922453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.226 qpair failed and we were unable to recover it. 00:36:08.226 [2024-07-14 10:44:52.922712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.226 [2024-07-14 10:44:52.922742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.226 qpair failed and we were unable to recover it. 00:36:08.226 [2024-07-14 10:44:52.923051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.226 [2024-07-14 10:44:52.923081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.226 qpair failed and we were unable to recover it. 00:36:08.226 [2024-07-14 10:44:52.923371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.226 [2024-07-14 10:44:52.923402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.226 qpair failed and we were unable to recover it. 
00:36:08.226 [2024-07-14 10:44:52.923688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.226 [2024-07-14 10:44:52.923717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.226 qpair failed and we were unable to recover it. 00:36:08.226 [2024-07-14 10:44:52.924009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.226 [2024-07-14 10:44:52.924039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.226 qpair failed and we were unable to recover it. 00:36:08.226 [2024-07-14 10:44:52.924329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.226 [2024-07-14 10:44:52.924360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.226 qpair failed and we were unable to recover it. 00:36:08.226 [2024-07-14 10:44:52.924518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.226 [2024-07-14 10:44:52.924553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.226 qpair failed and we were unable to recover it. 00:36:08.226 [2024-07-14 10:44:52.924779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.226 [2024-07-14 10:44:52.924809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.226 qpair failed and we were unable to recover it. 00:36:08.226 [2024-07-14 10:44:52.925065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.226 [2024-07-14 10:44:52.925095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.226 qpair failed and we were unable to recover it. 00:36:08.226 [2024-07-14 10:44:52.925374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.226 [2024-07-14 10:44:52.925405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.226 qpair failed and we were unable to recover it. 00:36:08.226 [2024-07-14 10:44:52.925668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.226 [2024-07-14 10:44:52.925698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.226 qpair failed and we were unable to recover it. 00:36:08.226 [2024-07-14 10:44:52.925982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.226 [2024-07-14 10:44:52.926012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.226 qpair failed and we were unable to recover it. 00:36:08.226 [2024-07-14 10:44:52.926338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.226 [2024-07-14 10:44:52.926369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.226 qpair failed and we were unable to recover it. 
00:36:08.226 [2024-07-14 10:44:52.926649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.226 [2024-07-14 10:44:52.926680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.226 qpair failed and we were unable to recover it. 00:36:08.226 [2024-07-14 10:44:52.926896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.226 [2024-07-14 10:44:52.926926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.226 qpair failed and we were unable to recover it. 00:36:08.226 [2024-07-14 10:44:52.927117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.226 [2024-07-14 10:44:52.927147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.226 qpair failed and we were unable to recover it. 00:36:08.226 [2024-07-14 10:44:52.927362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.226 [2024-07-14 10:44:52.927393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.226 qpair failed and we were unable to recover it. 00:36:08.226 [2024-07-14 10:44:52.927622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.226 [2024-07-14 10:44:52.927652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.226 qpair failed and we were unable to recover it. 00:36:08.226 [2024-07-14 10:44:52.927900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.226 [2024-07-14 10:44:52.927931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.226 qpair failed and we were unable to recover it. 00:36:08.226 [2024-07-14 10:44:52.928140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.226 [2024-07-14 10:44:52.928170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.226 qpair failed and we were unable to recover it. 00:36:08.226 [2024-07-14 10:44:52.928467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.226 [2024-07-14 10:44:52.928499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.226 qpair failed and we were unable to recover it. 00:36:08.226 [2024-07-14 10:44:52.928802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.226 [2024-07-14 10:44:52.928832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.226 qpair failed and we were unable to recover it. 00:36:08.226 [2024-07-14 10:44:52.929046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.226 [2024-07-14 10:44:52.929076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.226 qpair failed and we were unable to recover it. 
00:36:08.226 [2024-07-14 10:44:52.929415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.226 [2024-07-14 10:44:52.929446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.226 qpair failed and we were unable to recover it. 00:36:08.226 [2024-07-14 10:44:52.929724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.226 [2024-07-14 10:44:52.929754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.226 qpair failed and we were unable to recover it. 00:36:08.226 [2024-07-14 10:44:52.930053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.227 [2024-07-14 10:44:52.930083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.227 qpair failed and we were unable to recover it. 00:36:08.227 [2024-07-14 10:44:52.930364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.227 [2024-07-14 10:44:52.930395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.227 qpair failed and we were unable to recover it. 00:36:08.227 [2024-07-14 10:44:52.930656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.227 [2024-07-14 10:44:52.930686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.227 qpair failed and we were unable to recover it. 00:36:08.227 [2024-07-14 10:44:52.930831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.227 [2024-07-14 10:44:52.930862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.227 qpair failed and we were unable to recover it. 00:36:08.227 [2024-07-14 10:44:52.931057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.227 [2024-07-14 10:44:52.931087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.227 qpair failed and we were unable to recover it. 00:36:08.227 [2024-07-14 10:44:52.931343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.227 [2024-07-14 10:44:52.931374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.227 qpair failed and we were unable to recover it. 00:36:08.227 [2024-07-14 10:44:52.931684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.227 [2024-07-14 10:44:52.931714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.227 qpair failed and we were unable to recover it. 00:36:08.227 [2024-07-14 10:44:52.931984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.227 [2024-07-14 10:44:52.932014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.227 qpair failed and we were unable to recover it. 
00:36:08.227 [2024-07-14 10:44:52.932256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.227 [2024-07-14 10:44:52.932288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.227 qpair failed and we were unable to recover it. 00:36:08.227 [2024-07-14 10:44:52.932575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.227 [2024-07-14 10:44:52.932606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.227 qpair failed and we were unable to recover it. 00:36:08.227 [2024-07-14 10:44:52.932873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.227 [2024-07-14 10:44:52.932902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.227 qpair failed and we were unable to recover it. 00:36:08.227 [2024-07-14 10:44:52.933204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.227 [2024-07-14 10:44:52.933242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.227 qpair failed and we were unable to recover it. 00:36:08.227 [2024-07-14 10:44:52.933518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.227 [2024-07-14 10:44:52.933548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.227 qpair failed and we were unable to recover it. 00:36:08.227 [2024-07-14 10:44:52.933836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.227 [2024-07-14 10:44:52.933865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.227 qpair failed and we were unable to recover it. 00:36:08.227 [2024-07-14 10:44:52.934159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.227 [2024-07-14 10:44:52.934188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.227 qpair failed and we were unable to recover it. 00:36:08.227 [2024-07-14 10:44:52.934478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.227 [2024-07-14 10:44:52.934509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.227 qpair failed and we were unable to recover it. 00:36:08.227 [2024-07-14 10:44:52.934741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.227 [2024-07-14 10:44:52.934772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.227 qpair failed and we were unable to recover it. 00:36:08.227 [2024-07-14 10:44:52.935034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.227 [2024-07-14 10:44:52.935064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.227 qpair failed and we were unable to recover it. 
00:36:08.227 [2024-07-14 10:44:52.935328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.227 [2024-07-14 10:44:52.935359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.227 qpair failed and we were unable to recover it. 00:36:08.227 [2024-07-14 10:44:52.935665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.227 [2024-07-14 10:44:52.935695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.227 qpair failed and we were unable to recover it. 00:36:08.227 [2024-07-14 10:44:52.935970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.227 [2024-07-14 10:44:52.936001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.227 qpair failed and we were unable to recover it. 00:36:08.227 [2024-07-14 10:44:52.936294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.227 [2024-07-14 10:44:52.936331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.227 qpair failed and we were unable to recover it. 00:36:08.227 [2024-07-14 10:44:52.936564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.227 [2024-07-14 10:44:52.936594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.227 qpair failed and we were unable to recover it. 00:36:08.227 [2024-07-14 10:44:52.936793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.227 [2024-07-14 10:44:52.936823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.227 qpair failed and we were unable to recover it. 00:36:08.227 [2024-07-14 10:44:52.937023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.227 [2024-07-14 10:44:52.937053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.227 qpair failed and we were unable to recover it. 00:36:08.227 [2024-07-14 10:44:52.937310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.227 [2024-07-14 10:44:52.937340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.227 qpair failed and we were unable to recover it. 00:36:08.227 [2024-07-14 10:44:52.937532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.227 [2024-07-14 10:44:52.937562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.227 qpair failed and we were unable to recover it. 00:36:08.227 [2024-07-14 10:44:52.937769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.227 [2024-07-14 10:44:52.937799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.227 qpair failed and we were unable to recover it. 
00:36:08.227 [2024-07-14 10:44:52.938013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.227 [2024-07-14 10:44:52.938043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.227 qpair failed and we were unable to recover it. 00:36:08.227 [2024-07-14 10:44:52.938265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.227 [2024-07-14 10:44:52.938296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.227 qpair failed and we were unable to recover it. 00:36:08.227 [2024-07-14 10:44:52.938586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.227 [2024-07-14 10:44:52.938616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.227 qpair failed and we were unable to recover it. 00:36:08.227 [2024-07-14 10:44:52.938900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.227 [2024-07-14 10:44:52.938930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.227 qpair failed and we were unable to recover it. 00:36:08.227 [2024-07-14 10:44:52.939187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.227 [2024-07-14 10:44:52.939216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.227 qpair failed and we were unable to recover it. 00:36:08.227 [2024-07-14 10:44:52.939427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.227 [2024-07-14 10:44:52.939458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.227 qpair failed and we were unable to recover it. 00:36:08.227 [2024-07-14 10:44:52.939645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.227 [2024-07-14 10:44:52.939675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.227 qpair failed and we were unable to recover it. 00:36:08.227 [2024-07-14 10:44:52.939939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.227 [2024-07-14 10:44:52.939970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.227 qpair failed and we were unable to recover it. 00:36:08.227 [2024-07-14 10:44:52.940247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.227 [2024-07-14 10:44:52.940278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.227 qpair failed and we were unable to recover it. 00:36:08.227 [2024-07-14 10:44:52.940531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.227 [2024-07-14 10:44:52.940561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.227 qpair failed and we were unable to recover it. 
00:36:08.227 [2024-07-14 10:44:52.940867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.227 [2024-07-14 10:44:52.940898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.227 qpair failed and we were unable to recover it. 00:36:08.227 [2024-07-14 10:44:52.941204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.227 [2024-07-14 10:44:52.941245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.227 qpair failed and we were unable to recover it. 00:36:08.227 [2024-07-14 10:44:52.941528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.227 [2024-07-14 10:44:52.941558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.227 qpair failed and we were unable to recover it. 00:36:08.227 [2024-07-14 10:44:52.941783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.228 [2024-07-14 10:44:52.941812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.228 qpair failed and we were unable to recover it. 00:36:08.228 [2024-07-14 10:44:52.941960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.228 [2024-07-14 10:44:52.941990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.228 qpair failed and we were unable to recover it. 00:36:08.228 [2024-07-14 10:44:52.942294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.228 [2024-07-14 10:44:52.942325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.228 qpair failed and we were unable to recover it. 00:36:08.228 [2024-07-14 10:44:52.942475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.228 [2024-07-14 10:44:52.942505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.228 qpair failed and we were unable to recover it. 00:36:08.228 [2024-07-14 10:44:52.942789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.228 [2024-07-14 10:44:52.942818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.228 qpair failed and we were unable to recover it. 00:36:08.228 [2024-07-14 10:44:52.943129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.228 [2024-07-14 10:44:52.943159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.228 qpair failed and we were unable to recover it. 00:36:08.228 [2024-07-14 10:44:52.943430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.228 [2024-07-14 10:44:52.943461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.228 qpair failed and we were unable to recover it. 
00:36:08.228 [2024-07-14 10:44:52.943586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.228 [2024-07-14 10:44:52.943617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.228 qpair failed and we were unable to recover it. 00:36:08.228 [2024-07-14 10:44:52.943812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.228 [2024-07-14 10:44:52.943842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.228 qpair failed and we were unable to recover it. 00:36:08.228 [2024-07-14 10:44:52.944122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.228 [2024-07-14 10:44:52.944151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.228 qpair failed and we were unable to recover it. 00:36:08.228 [2024-07-14 10:44:52.944444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.228 [2024-07-14 10:44:52.944476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.228 qpair failed and we were unable to recover it. 00:36:08.228 [2024-07-14 10:44:52.944731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.228 [2024-07-14 10:44:52.944761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.228 qpair failed and we were unable to recover it. 00:36:08.228 [2024-07-14 10:44:52.945020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.228 [2024-07-14 10:44:52.945050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.228 qpair failed and we were unable to recover it. 00:36:08.228 [2024-07-14 10:44:52.945368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.228 [2024-07-14 10:44:52.945399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.228 qpair failed and we were unable to recover it. 00:36:08.228 [2024-07-14 10:44:52.945661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.228 [2024-07-14 10:44:52.945691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.228 qpair failed and we were unable to recover it. 00:36:08.228 [2024-07-14 10:44:52.946006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.228 [2024-07-14 10:44:52.946036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.228 qpair failed and we were unable to recover it. 00:36:08.228 [2024-07-14 10:44:52.946288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.228 [2024-07-14 10:44:52.946319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.228 qpair failed and we were unable to recover it. 
00:36:08.228 [2024-07-14 10:44:52.946596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.228 [2024-07-14 10:44:52.946626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.228 qpair failed and we were unable to recover it. 00:36:08.228 [2024-07-14 10:44:52.946832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.228 [2024-07-14 10:44:52.946863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.228 qpair failed and we were unable to recover it. 00:36:08.228 [2024-07-14 10:44:52.947150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.228 [2024-07-14 10:44:52.947180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.228 qpair failed and we were unable to recover it. 00:36:08.228 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2629246 Killed "${NVMF_APP[@]}" "$@" 00:36:08.228 [2024-07-14 10:44:52.947403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.228 [2024-07-14 10:44:52.947435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.228 qpair failed and we were unable to recover it. 00:36:08.228 [2024-07-14 10:44:52.947694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.228 [2024-07-14 10:44:52.947724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.228 qpair failed and we were unable to recover it. 00:36:08.228 10:44:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:36:08.228 [2024-07-14 10:44:52.948033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.228 [2024-07-14 10:44:52.948064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.228 qpair failed and we were unable to recover it. 00:36:08.228 [2024-07-14 10:44:52.948268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.228 [2024-07-14 10:44:52.948299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.228 qpair failed and we were unable to recover it. 00:36:08.228 10:44:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:36:08.228 [2024-07-14 10:44:52.948510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.228 [2024-07-14 10:44:52.948541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.228 qpair failed and we were unable to recover it. 
00:36:08.228 10:44:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:08.228 [2024-07-14 10:44:52.948819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.228 [2024-07-14 10:44:52.948851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.228 qpair failed and we were unable to recover it. 00:36:08.228 10:44:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:08.228 [2024-07-14 10:44:52.949108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.228 [2024-07-14 10:44:52.949138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.228 qpair failed and we were unable to recover it. 00:36:08.228 10:44:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:08.228 [2024-07-14 10:44:52.949402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.228 [2024-07-14 10:44:52.949435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.228 qpair failed and we were unable to recover it. 00:36:08.228 [2024-07-14 10:44:52.949745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.228 [2024-07-14 10:44:52.949775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.228 qpair failed and we were unable to recover it. 00:36:08.228 [2024-07-14 10:44:52.950043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.228 [2024-07-14 10:44:52.950073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.228 qpair failed and we were unable to recover it. 00:36:08.228 [2024-07-14 10:44:52.950382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.228 [2024-07-14 10:44:52.950414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.228 qpair failed and we were unable to recover it. 00:36:08.228 [2024-07-14 10:44:52.950657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.228 [2024-07-14 10:44:52.950688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.228 qpair failed and we were unable to recover it. 00:36:08.228 [2024-07-14 10:44:52.950963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.228 [2024-07-14 10:44:52.950993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.228 qpair failed and we were unable to recover it. 00:36:08.228 [2024-07-14 10:44:52.951174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.228 [2024-07-14 10:44:52.951203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.228 qpair failed and we were unable to recover it. 
00:36:08.228 [2024-07-14 10:44:52.951410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.228 [2024-07-14 10:44:52.951441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.228 qpair failed and we were unable to recover it. 00:36:08.228 [2024-07-14 10:44:52.951714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.228 [2024-07-14 10:44:52.951744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.228 qpair failed and we were unable to recover it. 00:36:08.228 [2024-07-14 10:44:52.952027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.228 [2024-07-14 10:44:52.952057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.228 qpair failed and we were unable to recover it. 00:36:08.228 [2024-07-14 10:44:52.952327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.228 [2024-07-14 10:44:52.952358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.228 qpair failed and we were unable to recover it. 00:36:08.228 [2024-07-14 10:44:52.952616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.228 [2024-07-14 10:44:52.952645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.228 qpair failed and we were unable to recover it. 00:36:08.229 [2024-07-14 10:44:52.952842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.229 [2024-07-14 10:44:52.952873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.229 qpair failed and we were unable to recover it. 00:36:08.229 [2024-07-14 10:44:52.953080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.229 [2024-07-14 10:44:52.953109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.229 qpair failed and we were unable to recover it. 00:36:08.229 [2024-07-14 10:44:52.953382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.229 [2024-07-14 10:44:52.953414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.229 qpair failed and we were unable to recover it. 00:36:08.229 [2024-07-14 10:44:52.953673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.229 [2024-07-14 10:44:52.953704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.229 qpair failed and we were unable to recover it. 00:36:08.229 [2024-07-14 10:44:52.954009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.229 [2024-07-14 10:44:52.954039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.229 qpair failed and we were unable to recover it. 
00:36:08.229 [2024-07-14 10:44:52.954262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.229 [2024-07-14 10:44:52.954299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.229 qpair failed and we were unable to recover it. 00:36:08.229 [2024-07-14 10:44:52.954505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.229 [2024-07-14 10:44:52.954536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.229 qpair failed and we were unable to recover it. 00:36:08.229 [2024-07-14 10:44:52.954820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.229 [2024-07-14 10:44:52.954851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.229 qpair failed and we were unable to recover it. 00:36:08.229 [2024-07-14 10:44:52.955139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.229 [2024-07-14 10:44:52.955169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.229 qpair failed and we were unable to recover it. 00:36:08.229 [2024-07-14 10:44:52.955507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.229 [2024-07-14 10:44:52.955538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.229 qpair failed and we were unable to recover it. 00:36:08.229 [2024-07-14 10:44:52.955700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.229 [2024-07-14 10:44:52.955730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.229 qpair failed and we were unable to recover it. 00:36:08.229 [2024-07-14 10:44:52.956023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.229 10:44:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2630074 00:36:08.229 [2024-07-14 10:44:52.956053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.229 qpair failed and we were unable to recover it. 00:36:08.229 [2024-07-14 10:44:52.956310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.229 [2024-07-14 10:44:52.956342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.229 qpair failed and we were unable to recover it. 
00:36:08.229 10:44:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2630074 00:36:08.229 [2024-07-14 10:44:52.956538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.229 10:44:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:36:08.229 [2024-07-14 10:44:52.956569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.229 qpair failed and we were unable to recover it. 00:36:08.229 10:44:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2630074 ']' 00:36:08.229 [2024-07-14 10:44:52.956853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.229 [2024-07-14 10:44:52.956884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.229 qpair failed and we were unable to recover it. 00:36:08.229 10:44:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:08.229 [2024-07-14 10:44:52.957143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.229 [2024-07-14 10:44:52.957173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.229 qpair failed and we were unable to recover it. 00:36:08.229 10:44:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:08.229 [2024-07-14 10:44:52.957480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.229 [2024-07-14 10:44:52.957511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.229 qpair failed and we were unable to recover it. 00:36:08.229 10:44:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:08.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:08.229 [2024-07-14 10:44:52.957821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.229 [2024-07-14 10:44:52.957852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.229 qpair failed and we were unable to recover it. 00:36:08.229 10:44:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:08.229 [2024-07-14 10:44:52.958162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.229 10:44:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:08.229 [2024-07-14 10:44:52.958193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.229 qpair failed and we were unable to recover it. 
00:36:08.229 [2024-07-14 10:44:52.958483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.229 [2024-07-14 10:44:52.958516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.229 qpair failed and we were unable to recover it. 00:36:08.229 [2024-07-14 10:44:52.958706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.229 [2024-07-14 10:44:52.958735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.229 qpair failed and we were unable to recover it. 00:36:08.229 [2024-07-14 10:44:52.959020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.229 [2024-07-14 10:44:52.959050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.229 qpair failed and we were unable to recover it. 00:36:08.229 [2024-07-14 10:44:52.959325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.229 [2024-07-14 10:44:52.959357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.229 qpair failed and we were unable to recover it. 00:36:08.229 [2024-07-14 10:44:52.959626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.229 [2024-07-14 10:44:52.959656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.229 qpair failed and we were unable to recover it. 00:36:08.229 [2024-07-14 10:44:52.959954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.229 [2024-07-14 10:44:52.959988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.229 qpair failed and we were unable to recover it. 00:36:08.229 [2024-07-14 10:44:52.960278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.229 [2024-07-14 10:44:52.960310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.229 qpair failed and we were unable to recover it. 00:36:08.229 [2024-07-14 10:44:52.960592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.229 [2024-07-14 10:44:52.960623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.229 qpair failed and we were unable to recover it. 00:36:08.229 [2024-07-14 10:44:52.960904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.229 [2024-07-14 10:44:52.960933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.229 qpair failed and we were unable to recover it. 00:36:08.229 [2024-07-14 10:44:52.961139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.229 [2024-07-14 10:44:52.961170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.229 qpair failed and we were unable to recover it. 
00:36:08.229 [2024-07-14 10:44:52.961400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.229 [2024-07-14 10:44:52.961433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.229 qpair failed and we were unable to recover it. 00:36:08.229 [2024-07-14 10:44:52.961659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.229 [2024-07-14 10:44:52.961690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.229 qpair failed and we were unable to recover it. 00:36:08.229 [2024-07-14 10:44:52.961950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.229 [2024-07-14 10:44:52.961981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.229 qpair failed and we were unable to recover it. 00:36:08.229 [2024-07-14 10:44:52.962287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.229 [2024-07-14 10:44:52.962319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.229 qpair failed and we were unable to recover it. 00:36:08.229 [2024-07-14 10:44:52.962620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.229 [2024-07-14 10:44:52.962652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.229 qpair failed and we were unable to recover it. 00:36:08.229 [2024-07-14 10:44:52.962931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.229 [2024-07-14 10:44:52.962962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.229 qpair failed and we were unable to recover it. 00:36:08.229 [2024-07-14 10:44:52.963221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.229 [2024-07-14 10:44:52.963269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.229 qpair failed and we were unable to recover it. 00:36:08.229 [2024-07-14 10:44:52.963460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.229 [2024-07-14 10:44:52.963493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.230 qpair failed and we were unable to recover it. 00:36:08.230 [2024-07-14 10:44:52.963641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.230 [2024-07-14 10:44:52.963672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.230 qpair failed and we were unable to recover it. 00:36:08.230 [2024-07-14 10:44:52.963901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.230 [2024-07-14 10:44:52.963933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.230 qpair failed and we were unable to recover it. 
00:36:08.230 [2024-07-14 10:44:52.964248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.230 [2024-07-14 10:44:52.964281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.230 qpair failed and we were unable to recover it. 00:36:08.230 [2024-07-14 10:44:52.964544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.230 [2024-07-14 10:44:52.964575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.230 qpair failed and we were unable to recover it. 00:36:08.230 [2024-07-14 10:44:52.964845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.230 [2024-07-14 10:44:52.964876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.230 qpair failed and we were unable to recover it. 00:36:08.230 [2024-07-14 10:44:52.965069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.230 [2024-07-14 10:44:52.965100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.230 qpair failed and we were unable to recover it. 00:36:08.230 [2024-07-14 10:44:52.965373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.230 [2024-07-14 10:44:52.965405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.230 qpair failed and we were unable to recover it. 00:36:08.230 [2024-07-14 10:44:52.965629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.230 [2024-07-14 10:44:52.965661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.230 qpair failed and we were unable to recover it. 00:36:08.230 [2024-07-14 10:44:52.965864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.230 [2024-07-14 10:44:52.965894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.230 qpair failed and we were unable to recover it. 00:36:08.230 [2024-07-14 10:44:52.966154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.230 [2024-07-14 10:44:52.966186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.230 qpair failed and we were unable to recover it. 00:36:08.230 [2024-07-14 10:44:52.966510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.230 [2024-07-14 10:44:52.966542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.230 qpair failed and we were unable to recover it. 00:36:08.230 [2024-07-14 10:44:52.966824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.230 [2024-07-14 10:44:52.966855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.230 qpair failed and we were unable to recover it. 
00:36:08.230 [2024-07-14 10:44:52.967047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.230 [2024-07-14 10:44:52.967082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.230 qpair failed and we were unable to recover it. 00:36:08.230 [2024-07-14 10:44:52.967348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.230 [2024-07-14 10:44:52.967379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.230 qpair failed and we were unable to recover it. 00:36:08.230 [2024-07-14 10:44:52.967571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.230 [2024-07-14 10:44:52.967602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.230 qpair failed and we were unable to recover it. 00:36:08.230 [2024-07-14 10:44:52.967803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.230 [2024-07-14 10:44:52.967834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.230 qpair failed and we were unable to recover it. 00:36:08.230 [2024-07-14 10:44:52.968028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.230 [2024-07-14 10:44:52.968059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.230 qpair failed and we were unable to recover it. 00:36:08.230 [2024-07-14 10:44:52.968265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.230 [2024-07-14 10:44:52.968307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.230 qpair failed and we were unable to recover it. 00:36:08.230 [2024-07-14 10:44:52.968603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.230 [2024-07-14 10:44:52.968637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.230 qpair failed and we were unable to recover it. 00:36:08.230 [2024-07-14 10:44:52.968854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.230 [2024-07-14 10:44:52.968885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.230 qpair failed and we were unable to recover it. 00:36:08.230 [2024-07-14 10:44:52.969095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.230 [2024-07-14 10:44:52.969126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.230 qpair failed and we were unable to recover it. 00:36:08.230 [2024-07-14 10:44:52.969325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.230 [2024-07-14 10:44:52.969357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.230 qpair failed and we were unable to recover it. 
00:36:08.230 [2024-07-14 10:44:52.969556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.230 [2024-07-14 10:44:52.969588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.230 qpair failed and we were unable to recover it. 00:36:08.230 [2024-07-14 10:44:52.969803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.230 [2024-07-14 10:44:52.969836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.230 qpair failed and we were unable to recover it. 00:36:08.230 [2024-07-14 10:44:52.970139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.230 [2024-07-14 10:44:52.970173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.230 qpair failed and we were unable to recover it. 00:36:08.230 [2024-07-14 10:44:52.970451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.230 [2024-07-14 10:44:52.970485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.230 qpair failed and we were unable to recover it. 00:36:08.230 [2024-07-14 10:44:52.970721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.230 [2024-07-14 10:44:52.970751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.230 qpair failed and we were unable to recover it. 00:36:08.230 [2024-07-14 10:44:52.970951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.230 [2024-07-14 10:44:52.970981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.230 qpair failed and we were unable to recover it. 00:36:08.230 [2024-07-14 10:44:52.971246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.230 [2024-07-14 10:44:52.971277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.230 qpair failed and we were unable to recover it. 00:36:08.230 [2024-07-14 10:44:52.971535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.230 [2024-07-14 10:44:52.971565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.230 qpair failed and we were unable to recover it. 00:36:08.230 [2024-07-14 10:44:52.971826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.230 [2024-07-14 10:44:52.971856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.230 qpair failed and we were unable to recover it. 00:36:08.230 [2024-07-14 10:44:52.972163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.230 [2024-07-14 10:44:52.972194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.230 qpair failed and we were unable to recover it. 
00:36:08.230 [2024-07-14 10:44:52.972485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.230 [2024-07-14 10:44:52.972516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.230 qpair failed and we were unable to recover it. 00:36:08.230 [2024-07-14 10:44:52.972777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.230 [2024-07-14 10:44:52.972807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.230 qpair failed and we were unable to recover it. 00:36:08.230 [2024-07-14 10:44:52.973089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.230 [2024-07-14 10:44:52.973120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.230 qpair failed and we were unable to recover it. 00:36:08.230 [2024-07-14 10:44:52.973319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.231 [2024-07-14 10:44:52.973351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.231 qpair failed and we were unable to recover it. 00:36:08.231 [2024-07-14 10:44:52.973639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.231 [2024-07-14 10:44:52.973671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.231 qpair failed and we were unable to recover it. 00:36:08.231 [2024-07-14 10:44:52.973961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.231 [2024-07-14 10:44:52.973992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.231 qpair failed and we were unable to recover it. 00:36:08.231 [2024-07-14 10:44:52.974271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.231 [2024-07-14 10:44:52.974302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.231 qpair failed and we were unable to recover it. 00:36:08.231 [2024-07-14 10:44:52.974590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.231 [2024-07-14 10:44:52.974620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.231 qpair failed and we were unable to recover it. 00:36:08.231 [2024-07-14 10:44:52.974842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.231 [2024-07-14 10:44:52.974873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.231 qpair failed and we were unable to recover it. 00:36:08.231 [2024-07-14 10:44:52.975133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.231 [2024-07-14 10:44:52.975164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.231 qpair failed and we were unable to recover it. 
00:36:08.231 [2024-07-14 10:44:52.975457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.231 [2024-07-14 10:44:52.975489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.231 qpair failed and we were unable to recover it. 00:36:08.231 [2024-07-14 10:44:52.975623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.231 [2024-07-14 10:44:52.975653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.231 qpair failed and we were unable to recover it. 00:36:08.231 [2024-07-14 10:44:52.975941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.231 [2024-07-14 10:44:52.975971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.231 qpair failed and we were unable to recover it. 00:36:08.231 [2024-07-14 10:44:52.976191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.231 [2024-07-14 10:44:52.976221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.231 qpair failed and we were unable to recover it. 00:36:08.231 [2024-07-14 10:44:52.976514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.231 [2024-07-14 10:44:52.976544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.231 qpair failed and we were unable to recover it. 00:36:08.231 [2024-07-14 10:44:52.976803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.231 [2024-07-14 10:44:52.976836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.231 qpair failed and we were unable to recover it. 00:36:08.231 [2024-07-14 10:44:52.977116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.231 [2024-07-14 10:44:52.977148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.231 qpair failed and we were unable to recover it. 00:36:08.231 [2024-07-14 10:44:52.977448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.231 [2024-07-14 10:44:52.977481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.231 qpair failed and we were unable to recover it. 00:36:08.231 [2024-07-14 10:44:52.977628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.231 [2024-07-14 10:44:52.977658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.231 qpair failed and we were unable to recover it. 00:36:08.231 [2024-07-14 10:44:52.977846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.231 [2024-07-14 10:44:52.977875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.231 qpair failed and we were unable to recover it. 
00:36:08.231 [2024-07-14 10:44:52.978166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.231 [2024-07-14 10:44:52.978197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.231 qpair failed and we were unable to recover it. 00:36:08.231 [2024-07-14 10:44:52.978479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.231 [2024-07-14 10:44:52.978511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.231 qpair failed and we were unable to recover it. 00:36:08.231 [2024-07-14 10:44:52.978810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.231 [2024-07-14 10:44:52.978840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.231 qpair failed and we were unable to recover it. 00:36:08.231 [2024-07-14 10:44:52.979125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.231 [2024-07-14 10:44:52.979156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.231 qpair failed and we were unable to recover it. 00:36:08.231 [2024-07-14 10:44:52.979450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.231 [2024-07-14 10:44:52.979481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.231 qpair failed and we were unable to recover it. 00:36:08.231 [2024-07-14 10:44:52.979681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.231 [2024-07-14 10:44:52.979716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.231 qpair failed and we were unable to recover it. 00:36:08.231 [2024-07-14 10:44:52.979913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.231 [2024-07-14 10:44:52.979944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.231 qpair failed and we were unable to recover it. 00:36:08.231 [2024-07-14 10:44:52.980170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.231 [2024-07-14 10:44:52.980200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.231 qpair failed and we were unable to recover it. 00:36:08.231 [2024-07-14 10:44:52.980429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.231 [2024-07-14 10:44:52.980459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.231 qpair failed and we were unable to recover it. 00:36:08.231 [2024-07-14 10:44:52.980654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.231 [2024-07-14 10:44:52.980685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.231 qpair failed and we were unable to recover it. 
00:36:08.231 [2024-07-14 10:44:52.980950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.231 [2024-07-14 10:44:52.980980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.231 qpair failed and we were unable to recover it. 00:36:08.231 [2024-07-14 10:44:52.981292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.231 [2024-07-14 10:44:52.981323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.231 qpair failed and we were unable to recover it. 00:36:08.231 [2024-07-14 10:44:52.981578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.231 [2024-07-14 10:44:52.981608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.231 qpair failed and we were unable to recover it. 00:36:08.231 [2024-07-14 10:44:52.981893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.231 [2024-07-14 10:44:52.981923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.231 qpair failed and we were unable to recover it. 00:36:08.231 [2024-07-14 10:44:52.982122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.231 [2024-07-14 10:44:52.982152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.231 qpair failed and we were unable to recover it. 00:36:08.231 [2024-07-14 10:44:52.982428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.231 [2024-07-14 10:44:52.982460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.231 qpair failed and we were unable to recover it. 00:36:08.231 [2024-07-14 10:44:52.982747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.231 [2024-07-14 10:44:52.982777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.231 qpair failed and we were unable to recover it. 00:36:08.231 [2024-07-14 10:44:52.982923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.231 [2024-07-14 10:44:52.982953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.231 qpair failed and we were unable to recover it. 00:36:08.231 [2024-07-14 10:44:52.983244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.231 [2024-07-14 10:44:52.983275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.231 qpair failed and we were unable to recover it. 00:36:08.231 [2024-07-14 10:44:52.983562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.231 [2024-07-14 10:44:52.983593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.231 qpair failed and we were unable to recover it. 
00:36:08.231 [2024-07-14 10:44:52.983791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.231 [2024-07-14 10:44:52.983821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.231 qpair failed and we were unable to recover it. 00:36:08.231 [2024-07-14 10:44:52.984103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.231 [2024-07-14 10:44:52.984133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.231 qpair failed and we were unable to recover it. 00:36:08.231 [2024-07-14 10:44:52.984337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.231 [2024-07-14 10:44:52.984368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.231 qpair failed and we were unable to recover it. 00:36:08.231 [2024-07-14 10:44:52.984560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.231 [2024-07-14 10:44:52.984590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.232 qpair failed and we were unable to recover it. 00:36:08.232 [2024-07-14 10:44:52.984876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.232 [2024-07-14 10:44:52.984906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.232 qpair failed and we were unable to recover it. 00:36:08.232 [2024-07-14 10:44:52.985195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.232 [2024-07-14 10:44:52.985247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.232 qpair failed and we were unable to recover it. 00:36:08.232 [2024-07-14 10:44:52.985433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.232 [2024-07-14 10:44:52.985464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.232 qpair failed and we were unable to recover it. 00:36:08.232 [2024-07-14 10:44:52.985669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.232 [2024-07-14 10:44:52.985700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.232 qpair failed and we were unable to recover it. 00:36:08.232 [2024-07-14 10:44:52.985927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.232 [2024-07-14 10:44:52.985957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.232 qpair failed and we were unable to recover it. 00:36:08.232 [2024-07-14 10:44:52.986164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.232 [2024-07-14 10:44:52.986195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.232 qpair failed and we were unable to recover it. 
00:36:08.232 [2024-07-14 10:44:52.986353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.232 [2024-07-14 10:44:52.986385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.232 qpair failed and we were unable to recover it. 00:36:08.232 [2024-07-14 10:44:52.986647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.232 [2024-07-14 10:44:52.986678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.232 qpair failed and we were unable to recover it. 00:36:08.232 [2024-07-14 10:44:52.986947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.232 [2024-07-14 10:44:52.986978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.232 qpair failed and we were unable to recover it. 00:36:08.232 [2024-07-14 10:44:52.987110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.232 [2024-07-14 10:44:52.987139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.232 qpair failed and we were unable to recover it. 00:36:08.232 [2024-07-14 10:44:52.987425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.232 [2024-07-14 10:44:52.987456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.232 qpair failed and we were unable to recover it. 00:36:08.232 [2024-07-14 10:44:52.987640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.232 [2024-07-14 10:44:52.987670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.232 qpair failed and we were unable to recover it. 00:36:08.232 [2024-07-14 10:44:52.987953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.232 [2024-07-14 10:44:52.987984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.232 qpair failed and we were unable to recover it. 00:36:08.232 [2024-07-14 10:44:52.988114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.232 [2024-07-14 10:44:52.988145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.232 qpair failed and we were unable to recover it. 00:36:08.232 [2024-07-14 10:44:52.988408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.232 [2024-07-14 10:44:52.988438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.232 qpair failed and we were unable to recover it. 00:36:08.232 [2024-07-14 10:44:52.988630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.232 [2024-07-14 10:44:52.988661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.232 qpair failed and we were unable to recover it. 
00:36:08.232 [2024-07-14 10:44:52.988917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.232 [2024-07-14 10:44:52.988946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.232 qpair failed and we were unable to recover it. 00:36:08.232 [2024-07-14 10:44:52.989178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.232 [2024-07-14 10:44:52.989207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.232 qpair failed and we were unable to recover it. 00:36:08.232 [2024-07-14 10:44:52.989499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.232 [2024-07-14 10:44:52.989530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.232 qpair failed and we were unable to recover it. 00:36:08.232 [2024-07-14 10:44:52.989655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.232 [2024-07-14 10:44:52.989686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.232 qpair failed and we were unable to recover it. 00:36:08.232 [2024-07-14 10:44:52.989942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.232 [2024-07-14 10:44:52.989971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.232 qpair failed and we were unable to recover it. 00:36:08.232 [2024-07-14 10:44:52.990161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.232 [2024-07-14 10:44:52.990197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.232 qpair failed and we were unable to recover it. 00:36:08.232 [2024-07-14 10:44:52.990450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.232 [2024-07-14 10:44:52.990526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.232 qpair failed and we were unable to recover it. 00:36:08.232 [2024-07-14 10:44:52.990754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.232 [2024-07-14 10:44:52.990790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.232 qpair failed and we were unable to recover it. 00:36:08.232 [2024-07-14 10:44:52.991010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.232 [2024-07-14 10:44:52.991042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.232 qpair failed and we were unable to recover it. 00:36:08.232 [2024-07-14 10:44:52.991198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.232 [2024-07-14 10:44:52.991247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.232 qpair failed and we were unable to recover it. 
00:36:08.232 [2024-07-14 10:44:52.991385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.232 [2024-07-14 10:44:52.991417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.232 qpair failed and we were unable to recover it. 00:36:08.232 [2024-07-14 10:44:52.991570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.232 [2024-07-14 10:44:52.991600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.232 qpair failed and we were unable to recover it. 00:36:08.232 [2024-07-14 10:44:52.991797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.232 [2024-07-14 10:44:52.991827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.232 qpair failed and we were unable to recover it. 00:36:08.232 [2024-07-14 10:44:52.992125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.232 [2024-07-14 10:44:52.992155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.232 qpair failed and we were unable to recover it. 00:36:08.232 [2024-07-14 10:44:52.992377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.232 [2024-07-14 10:44:52.992408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.232 qpair failed and we were unable to recover it. 00:36:08.232 [2024-07-14 10:44:52.992548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.232 [2024-07-14 10:44:52.992579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.232 qpair failed and we were unable to recover it. 00:36:08.232 [2024-07-14 10:44:52.992837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.232 [2024-07-14 10:44:52.992868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.232 qpair failed and we were unable to recover it. 00:36:08.232 [2024-07-14 10:44:52.993068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.232 [2024-07-14 10:44:52.993100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.232 qpair failed and we were unable to recover it. 00:36:08.232 [2024-07-14 10:44:52.993314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.232 [2024-07-14 10:44:52.993347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.232 qpair failed and we were unable to recover it. 00:36:08.232 [2024-07-14 10:44:52.993566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.232 [2024-07-14 10:44:52.993598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.232 qpair failed and we were unable to recover it. 
00:36:08.232 [2024-07-14 10:44:52.993735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.232 [2024-07-14 10:44:52.993766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.232 qpair failed and we were unable to recover it. 00:36:08.232 [2024-07-14 10:44:52.993894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.232 [2024-07-14 10:44:52.993925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.232 qpair failed and we were unable to recover it. 00:36:08.232 [2024-07-14 10:44:52.994185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.232 [2024-07-14 10:44:52.994216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.232 qpair failed and we were unable to recover it. 00:36:08.232 [2024-07-14 10:44:52.994370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.232 [2024-07-14 10:44:52.994402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.232 qpair failed and we were unable to recover it. 00:36:08.232 [2024-07-14 10:44:52.994614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.232 [2024-07-14 10:44:52.994644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.233 qpair failed and we were unable to recover it. 00:36:08.233 [2024-07-14 10:44:52.994847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.233 [2024-07-14 10:44:52.994877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.233 qpair failed and we were unable to recover it. 00:36:08.233 [2024-07-14 10:44:52.995034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.233 [2024-07-14 10:44:52.995064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.233 qpair failed and we were unable to recover it. 00:36:08.233 [2024-07-14 10:44:52.995181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.233 [2024-07-14 10:44:52.995211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.233 qpair failed and we were unable to recover it. 00:36:08.233 [2024-07-14 10:44:52.995378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.233 [2024-07-14 10:44:52.995408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.233 qpair failed and we were unable to recover it. 00:36:08.233 [2024-07-14 10:44:52.995551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.233 [2024-07-14 10:44:52.995582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.233 qpair failed and we were unable to recover it. 
00:36:08.233 [2024-07-14 10:44:52.995703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.233 [2024-07-14 10:44:52.995734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.233 qpair failed and we were unable to recover it. 00:36:08.233 [2024-07-14 10:44:52.995876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.233 [2024-07-14 10:44:52.995906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.233 qpair failed and we were unable to recover it. 00:36:08.233 [2024-07-14 10:44:52.996113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.233 [2024-07-14 10:44:52.996144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.233 qpair failed and we were unable to recover it. 00:36:08.233 [2024-07-14 10:44:52.996275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.233 [2024-07-14 10:44:52.996306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.233 qpair failed and we were unable to recover it. 00:36:08.233 [2024-07-14 10:44:52.996567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.233 [2024-07-14 10:44:52.996598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.233 qpair failed and we were unable to recover it. 00:36:08.233 [2024-07-14 10:44:52.996811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.233 [2024-07-14 10:44:52.996842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.233 qpair failed and we were unable to recover it. 00:36:08.233 [2024-07-14 10:44:52.996963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.233 [2024-07-14 10:44:52.996993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.233 qpair failed and we were unable to recover it. 00:36:08.233 [2024-07-14 10:44:52.997203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.233 [2024-07-14 10:44:52.997247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.233 qpair failed and we were unable to recover it. 00:36:08.233 [2024-07-14 10:44:52.997513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.233 [2024-07-14 10:44:52.997543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.233 qpair failed and we were unable to recover it. 00:36:08.233 [2024-07-14 10:44:52.997743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.233 [2024-07-14 10:44:52.997774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.233 qpair failed and we were unable to recover it. 
00:36:08.233 [2024-07-14 10:44:52.997966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:08.233 [2024-07-14 10:44:52.997996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420
00:36:08.233 qpair failed and we were unable to recover it.
00:36:08.233 [2024-07-14 10:44:52.998322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:08.233 [2024-07-14 10:44:52.998355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420
00:36:08.233 qpair failed and we were unable to recover it.
00:36:08.233 [2024-07-14 10:44:52.998474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:08.233 [2024-07-14 10:44:52.998504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420
00:36:08.233 qpair failed and we were unable to recover it.
00:36:08.233 [2024-07-14 10:44:52.998787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:08.233 [2024-07-14 10:44:52.998817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420
00:36:08.233 qpair failed and we were unable to recover it.
00:36:08.233 [2024-07-14 10:44:52.999021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:08.233 [2024-07-14 10:44:52.999053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420
00:36:08.233 qpair failed and we were unable to recover it.
00:36:08.233 [2024-07-14 10:44:52.999209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:08.233 [2024-07-14 10:44:52.999262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420
00:36:08.233 qpair failed and we were unable to recover it.
00:36:08.233 [2024-07-14 10:44:52.999545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:08.233 [2024-07-14 10:44:52.999617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420
00:36:08.233 qpair failed and we were unable to recover it.
00:36:08.233 [2024-07-14 10:44:52.999811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:08.233 [2024-07-14 10:44:52.999875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420
00:36:08.233 qpair failed and we were unable to recover it.
00:36:08.233 [2024-07-14 10:44:53.000165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:08.233 [2024-07-14 10:44:53.000200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420
00:36:08.233 qpair failed and we were unable to recover it.
00:36:08.233 [2024-07-14 10:44:53.000347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:08.233 [2024-07-14 10:44:53.000378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420
00:36:08.233 qpair failed and we were unable to recover it.
00:36:08.233 [2024-07-14 10:44:53.000569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.233 [2024-07-14 10:44:53.000599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.233 qpair failed and we were unable to recover it. 00:36:08.233 [2024-07-14 10:44:53.000819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.233 [2024-07-14 10:44:53.000851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.233 qpair failed and we were unable to recover it. 00:36:08.233 [2024-07-14 10:44:53.000991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.233 [2024-07-14 10:44:53.001020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.233 qpair failed and we were unable to recover it. 00:36:08.233 [2024-07-14 10:44:53.001277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.233 [2024-07-14 10:44:53.001309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.233 qpair failed and we were unable to recover it. 00:36:08.233 [2024-07-14 10:44:53.001471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.233 [2024-07-14 10:44:53.001501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.233 qpair failed and we were unable to recover it. 00:36:08.233 [2024-07-14 10:44:53.001707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.233 [2024-07-14 10:44:53.001738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.233 qpair failed and we were unable to recover it. 00:36:08.233 [2024-07-14 10:44:53.001972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.233 [2024-07-14 10:44:53.002003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.233 qpair failed and we were unable to recover it. 00:36:08.233 [2024-07-14 10:44:53.002215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.233 [2024-07-14 10:44:53.002259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.233 qpair failed and we were unable to recover it. 00:36:08.233 [2024-07-14 10:44:53.002531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.233 [2024-07-14 10:44:53.002562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.233 qpair failed and we were unable to recover it. 00:36:08.233 [2024-07-14 10:44:53.002750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.233 [2024-07-14 10:44:53.002788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.233 qpair failed and we were unable to recover it. 
00:36:08.233 [2024-07-14 10:44:53.002930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.233 [2024-07-14 10:44:53.002961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.233 qpair failed and we were unable to recover it. 00:36:08.233 [2024-07-14 10:44:53.003245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.233 [2024-07-14 10:44:53.003276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.233 qpair failed and we were unable to recover it. 00:36:08.233 [2024-07-14 10:44:53.003555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.233 [2024-07-14 10:44:53.003585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.233 qpair failed and we were unable to recover it. 00:36:08.233 [2024-07-14 10:44:53.003772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.233 [2024-07-14 10:44:53.003803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.233 qpair failed and we were unable to recover it. 00:36:08.233 [2024-07-14 10:44:53.004031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.233 [2024-07-14 10:44:53.004061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.233 qpair failed and we were unable to recover it. 00:36:08.233 [2024-07-14 10:44:53.004260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.233 [2024-07-14 10:44:53.004291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.234 qpair failed and we were unable to recover it. 00:36:08.234 [2024-07-14 10:44:53.004424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.234 [2024-07-14 10:44:53.004455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.234 qpair failed and we were unable to recover it. 00:36:08.234 [2024-07-14 10:44:53.004661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.234 [2024-07-14 10:44:53.004691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.234 qpair failed and we were unable to recover it. 00:36:08.234 [2024-07-14 10:44:53.004893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.234 [2024-07-14 10:44:53.004922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.234 qpair failed and we were unable to recover it. 00:36:08.234 [2024-07-14 10:44:53.005063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.234 [2024-07-14 10:44:53.005093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.234 qpair failed and we were unable to recover it. 
00:36:08.234 [2024-07-14 10:44:53.005372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.234 [2024-07-14 10:44:53.005404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.234 qpair failed and we were unable to recover it. 00:36:08.234 [2024-07-14 10:44:53.005597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.234 [2024-07-14 10:44:53.005627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.234 qpair failed and we were unable to recover it. 00:36:08.234 [2024-07-14 10:44:53.005837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.234 [2024-07-14 10:44:53.005867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.234 qpair failed and we were unable to recover it. 00:36:08.234 [2024-07-14 10:44:53.006013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.234 [2024-07-14 10:44:53.006043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.234 qpair failed and we were unable to recover it. 00:36:08.234 [2024-07-14 10:44:53.006252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.234 [2024-07-14 10:44:53.006283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.234 qpair failed and we were unable to recover it. 00:36:08.234 [2024-07-14 10:44:53.006479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.234 [2024-07-14 10:44:53.006509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.234 qpair failed and we were unable to recover it. 00:36:08.234 [2024-07-14 10:44:53.006642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.234 [2024-07-14 10:44:53.006671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.234 qpair failed and we were unable to recover it. 00:36:08.234 [2024-07-14 10:44:53.006792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.234 [2024-07-14 10:44:53.006823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.234 qpair failed and we were unable to recover it. 00:36:08.234 [2024-07-14 10:44:53.006964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.234 [2024-07-14 10:44:53.006993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.234 qpair failed and we were unable to recover it. 00:36:08.234 [2024-07-14 10:44:53.007186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.234 [2024-07-14 10:44:53.007216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.234 qpair failed and we were unable to recover it. 
00:36:08.234 [2024-07-14 10:44:53.007368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.234 [2024-07-14 10:44:53.007397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.234 qpair failed and we were unable to recover it. 00:36:08.234 [2024-07-14 10:44:53.007531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.234 [2024-07-14 10:44:53.007562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.234 qpair failed and we were unable to recover it. 00:36:08.234 [2024-07-14 10:44:53.007769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.234 [2024-07-14 10:44:53.007798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.234 qpair failed and we were unable to recover it. 00:36:08.234 [2024-07-14 10:44:53.008078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.234 [2024-07-14 10:44:53.008107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.234 qpair failed and we were unable to recover it. 00:36:08.234 [2024-07-14 10:44:53.008301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.234 [2024-07-14 10:44:53.008333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.234 qpair failed and we were unable to recover it. 00:36:08.234 [2024-07-14 10:44:53.008468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.234 [2024-07-14 10:44:53.008497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.234 qpair failed and we were unable to recover it. 00:36:08.234 [2024-07-14 10:44:53.008721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.234 [2024-07-14 10:44:53.008764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.234 qpair failed and we were unable to recover it. 00:36:08.234 [2024-07-14 10:44:53.009123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.234 [2024-07-14 10:44:53.009155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.234 qpair failed and we were unable to recover it. 00:36:08.234 [2024-07-14 10:44:53.009305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.234 [2024-07-14 10:44:53.009338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.234 qpair failed and we were unable to recover it. 00:36:08.234 [2024-07-14 10:44:53.009568] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:36:08.234 [2024-07-14 10:44:53.009597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.234 [2024-07-14 10:44:53.009633] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:08.234 [2024-07-14 10:44:53.009636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.234 qpair failed and we were unable to recover it. 00:36:08.234 [2024-07-14 10:44:53.009903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.234 [2024-07-14 10:44:53.009933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.234 qpair failed and we were unable to recover it. 00:36:08.234 [2024-07-14 10:44:53.010137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.234 [2024-07-14 10:44:53.010166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.234 qpair failed and we were unable to recover it. 00:36:08.234 [2024-07-14 10:44:53.010388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.234 [2024-07-14 10:44:53.010422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.234 qpair failed and we were unable to recover it. 00:36:08.234 [2024-07-14 10:44:53.010548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.234 [2024-07-14 10:44:53.010580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.234 qpair failed and we were unable to recover it. 00:36:08.234 [2024-07-14 10:44:53.010799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.234 [2024-07-14 10:44:53.010830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.234 qpair failed and we were unable to recover it. 00:36:08.234 [2024-07-14 10:44:53.010967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.234 [2024-07-14 10:44:53.010999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.234 qpair failed and we were unable to recover it. 00:36:08.234 [2024-07-14 10:44:53.011190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.234 [2024-07-14 10:44:53.011222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.234 qpair failed and we were unable to recover it. 00:36:08.234 [2024-07-14 10:44:53.011410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.234 [2024-07-14 10:44:53.011443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.234 qpair failed and we were unable to recover it. 
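The interleaved "Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization..." record and the DPDK EAL parameters line (core mask 0xF0, --no-telemetry, --base-virtaddr=0x200000000000, --file-prefix=spdk0, --proc-type=auto) come from the nvmf target process beginning its environment setup while the initiator side keeps redialing; the connection-refused records are expected to continue until that process finishes initializing and starts listening on port 4420. A hedged C sketch of the general retry-until-listening pattern the log reflects (illustration only, not SPDK's qpair logic; the endpoint, delay, and attempt count are assumptions):

    /* Illustrative retry pattern (not SPDK's implementation): keep retrying a
     * TCP connect while the listener is still starting, treating ECONNREFUSED
     * as "not up yet" and giving up after a bounded number of attempts. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    static int connect_with_retry(const char *ip, unsigned short port, int attempts)
    {
        struct sockaddr_in addr = { 0 };

        addr.sin_family = AF_INET;
        addr.sin_port = htons(port);
        inet_pton(AF_INET, ip, &addr.sin_addr);

        for (int i = 0; i < attempts; i++) {
            int fd = socket(AF_INET, SOCK_STREAM, 0);

            if (fd < 0)
                return -1;
            if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0)
                return fd;                   /* listener is up; hand back the socket */

            close(fd);
            if (errno != ECONNREFUSED)
                break;                       /* some other failure: stop retrying */
            usleep(100 * 1000);              /* wait 100 ms before the next attempt */
        }
        return -1;
    }

    int main(void)
    {
        int fd = connect_with_retry("127.0.0.1", 4420, 50);  /* hypothetical endpoint */

        if (fd >= 0) {
            printf("connected\n");
            close(fd);
        } else {
            printf("gave up: errno = %d (%s)\n", errno, strerror(errno));
        }
        return 0;
    }

In this schematic, ECONNREFUSED is treated as "listener not up yet" and retried after a short delay, while any other errno aborts immediately; the real qpair connect path is more involved, so this only illustrates why the log shows a long run of refused attempts rather than a single failure.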
00:36:08.234 [2024-07-14 10:44:53.011658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.234 [2024-07-14 10:44:53.011699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.234 qpair failed and we were unable to recover it. 00:36:08.234 [2024-07-14 10:44:53.011909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.234 [2024-07-14 10:44:53.011941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.234 qpair failed and we were unable to recover it. 00:36:08.234 [2024-07-14 10:44:53.012172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.234 [2024-07-14 10:44:53.012203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.234 qpair failed and we were unable to recover it. 00:36:08.234 [2024-07-14 10:44:53.012376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.234 [2024-07-14 10:44:53.012408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.234 qpair failed and we were unable to recover it. 00:36:08.234 [2024-07-14 10:44:53.012669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.234 [2024-07-14 10:44:53.012701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.234 qpair failed and we were unable to recover it. 00:36:08.234 [2024-07-14 10:44:53.012902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.234 [2024-07-14 10:44:53.012935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.234 qpair failed and we were unable to recover it. 00:36:08.234 [2024-07-14 10:44:53.013167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.234 [2024-07-14 10:44:53.013198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.234 qpair failed and we were unable to recover it. 00:36:08.234 [2024-07-14 10:44:53.013426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.235 [2024-07-14 10:44:53.013459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.235 qpair failed and we were unable to recover it. 00:36:08.235 [2024-07-14 10:44:53.013614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.235 [2024-07-14 10:44:53.013646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.235 qpair failed and we were unable to recover it. 00:36:08.235 [2024-07-14 10:44:53.013914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.235 [2024-07-14 10:44:53.013945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.235 qpair failed and we were unable to recover it. 
00:36:08.235 [2024-07-14 10:44:53.014184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.235 [2024-07-14 10:44:53.014217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.235 qpair failed and we were unable to recover it. 00:36:08.235 [2024-07-14 10:44:53.014507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.235 [2024-07-14 10:44:53.014538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.235 qpair failed and we were unable to recover it. 00:36:08.235 [2024-07-14 10:44:53.014846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.235 [2024-07-14 10:44:53.014877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.235 qpair failed and we were unable to recover it. 00:36:08.235 [2024-07-14 10:44:53.015006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.235 [2024-07-14 10:44:53.015036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.235 qpair failed and we were unable to recover it. 00:36:08.235 [2024-07-14 10:44:53.015320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.235 [2024-07-14 10:44:53.015352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.235 qpair failed and we were unable to recover it. 00:36:08.235 [2024-07-14 10:44:53.015559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.235 [2024-07-14 10:44:53.015589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.235 qpair failed and we were unable to recover it. 00:36:08.235 [2024-07-14 10:44:53.015739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.235 [2024-07-14 10:44:53.015769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.235 qpair failed and we were unable to recover it. 00:36:08.235 [2024-07-14 10:44:53.015969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.235 [2024-07-14 10:44:53.016000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.235 qpair failed and we were unable to recover it. 00:36:08.235 [2024-07-14 10:44:53.016191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.235 [2024-07-14 10:44:53.016222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.235 qpair failed and we were unable to recover it. 00:36:08.235 [2024-07-14 10:44:53.016406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.235 [2024-07-14 10:44:53.016436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.235 qpair failed and we were unable to recover it. 
00:36:08.235 [2024-07-14 10:44:53.016584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.235 [2024-07-14 10:44:53.016615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.235 qpair failed and we were unable to recover it. 00:36:08.235 [2024-07-14 10:44:53.016819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.235 [2024-07-14 10:44:53.016850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.235 qpair failed and we were unable to recover it. 00:36:08.235 [2024-07-14 10:44:53.017046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.235 [2024-07-14 10:44:53.017076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.235 qpair failed and we were unable to recover it. 00:36:08.235 [2024-07-14 10:44:53.017260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.235 [2024-07-14 10:44:53.017292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.235 qpair failed and we were unable to recover it. 00:36:08.235 [2024-07-14 10:44:53.017482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.235 [2024-07-14 10:44:53.017512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.235 qpair failed and we were unable to recover it. 00:36:08.235 [2024-07-14 10:44:53.017730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.235 [2024-07-14 10:44:53.017763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.235 qpair failed and we were unable to recover it. 00:36:08.235 [2024-07-14 10:44:53.017962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.235 [2024-07-14 10:44:53.017994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.235 qpair failed and we were unable to recover it. 00:36:08.235 [2024-07-14 10:44:53.018249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.235 [2024-07-14 10:44:53.018282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.235 qpair failed and we were unable to recover it. 00:36:08.235 [2024-07-14 10:44:53.018431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.235 [2024-07-14 10:44:53.018463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.235 qpair failed and we were unable to recover it. 00:36:08.235 [2024-07-14 10:44:53.018657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.235 [2024-07-14 10:44:53.018688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.235 qpair failed and we were unable to recover it. 
00:36:08.235 [2024-07-14 10:44:53.018839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.235 [2024-07-14 10:44:53.018870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.235 qpair failed and we were unable to recover it. 00:36:08.235 [2024-07-14 10:44:53.019082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.235 [2024-07-14 10:44:53.019112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.235 qpair failed and we were unable to recover it. 00:36:08.235 [2024-07-14 10:44:53.019306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.235 [2024-07-14 10:44:53.019339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.235 qpair failed and we were unable to recover it. 00:36:08.235 [2024-07-14 10:44:53.019548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.235 [2024-07-14 10:44:53.019578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.235 qpair failed and we were unable to recover it. 00:36:08.235 [2024-07-14 10:44:53.019730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.235 [2024-07-14 10:44:53.019761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.235 qpair failed and we were unable to recover it. 00:36:08.235 [2024-07-14 10:44:53.019963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.235 [2024-07-14 10:44:53.019994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.235 qpair failed and we were unable to recover it. 00:36:08.235 [2024-07-14 10:44:53.020179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.235 [2024-07-14 10:44:53.020210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.235 qpair failed and we were unable to recover it. 00:36:08.235 [2024-07-14 10:44:53.020407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.235 [2024-07-14 10:44:53.020438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.235 qpair failed and we were unable to recover it. 00:36:08.235 [2024-07-14 10:44:53.020620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.235 [2024-07-14 10:44:53.020651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.235 qpair failed and we were unable to recover it. 00:36:08.235 [2024-07-14 10:44:53.020868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.235 [2024-07-14 10:44:53.020899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.235 qpair failed and we were unable to recover it. 
00:36:08.235 [2024-07-14 10:44:53.021084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.235 [2024-07-14 10:44:53.021119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.235 qpair failed and we were unable to recover it. 00:36:08.235 [2024-07-14 10:44:53.021261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.235 [2024-07-14 10:44:53.021293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.235 qpair failed and we were unable to recover it. 00:36:08.235 [2024-07-14 10:44:53.021514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.235 [2024-07-14 10:44:53.021545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.235 qpair failed and we were unable to recover it. 00:36:08.235 [2024-07-14 10:44:53.021730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.236 [2024-07-14 10:44:53.021762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.236 qpair failed and we were unable to recover it. 00:36:08.236 [2024-07-14 10:44:53.021966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.236 [2024-07-14 10:44:53.021997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.236 qpair failed and we were unable to recover it. 00:36:08.236 [2024-07-14 10:44:53.022205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.236 [2024-07-14 10:44:53.022245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.236 qpair failed and we were unable to recover it. 00:36:08.236 [2024-07-14 10:44:53.022450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.236 [2024-07-14 10:44:53.022481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.236 qpair failed and we were unable to recover it. 00:36:08.236 [2024-07-14 10:44:53.022618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.236 [2024-07-14 10:44:53.022648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.236 qpair failed and we were unable to recover it. 00:36:08.236 [2024-07-14 10:44:53.022837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.236 [2024-07-14 10:44:53.022868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.236 qpair failed and we were unable to recover it. 00:36:08.236 [2024-07-14 10:44:53.023000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.236 [2024-07-14 10:44:53.023030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.236 qpair failed and we were unable to recover it. 
00:36:08.236 [2024-07-14 10:44:53.023184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.236 [2024-07-14 10:44:53.023214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.236 qpair failed and we were unable to recover it. 00:36:08.236 [2024-07-14 10:44:53.023428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.236 [2024-07-14 10:44:53.023460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.236 qpair failed and we were unable to recover it. 00:36:08.236 [2024-07-14 10:44:53.023574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.236 [2024-07-14 10:44:53.023604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.236 qpair failed and we were unable to recover it. 00:36:08.236 [2024-07-14 10:44:53.023831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.236 [2024-07-14 10:44:53.023861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.236 qpair failed and we were unable to recover it. 00:36:08.236 [2024-07-14 10:44:53.024125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.236 [2024-07-14 10:44:53.024156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.236 qpair failed and we were unable to recover it. 00:36:08.236 [2024-07-14 10:44:53.024447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.236 [2024-07-14 10:44:53.024479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.236 qpair failed and we were unable to recover it. 00:36:08.236 [2024-07-14 10:44:53.024614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.236 [2024-07-14 10:44:53.024645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.236 qpair failed and we were unable to recover it. 00:36:08.236 [2024-07-14 10:44:53.024888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.236 [2024-07-14 10:44:53.024919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.236 qpair failed and we were unable to recover it. 00:36:08.236 [2024-07-14 10:44:53.025175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.236 [2024-07-14 10:44:53.025205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.236 qpair failed and we were unable to recover it. 00:36:08.236 [2024-07-14 10:44:53.025467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.236 [2024-07-14 10:44:53.025498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.236 qpair failed and we were unable to recover it. 
00:36:08.236 [2024-07-14 10:44:53.025661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.236 [2024-07-14 10:44:53.025691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.236 qpair failed and we were unable to recover it. 00:36:08.236 [2024-07-14 10:44:53.025950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.236 [2024-07-14 10:44:53.025980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.236 qpair failed and we were unable to recover it. 00:36:08.236 [2024-07-14 10:44:53.026237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.236 [2024-07-14 10:44:53.026269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.236 qpair failed and we were unable to recover it. 00:36:08.236 [2024-07-14 10:44:53.026499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.236 [2024-07-14 10:44:53.026531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.236 qpair failed and we were unable to recover it. 00:36:08.236 [2024-07-14 10:44:53.026736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.236 [2024-07-14 10:44:53.026766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.236 qpair failed and we were unable to recover it. 00:36:08.236 [2024-07-14 10:44:53.026973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.236 [2024-07-14 10:44:53.027004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.236 qpair failed and we were unable to recover it. 00:36:08.236 [2024-07-14 10:44:53.027167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.236 [2024-07-14 10:44:53.027197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.236 qpair failed and we were unable to recover it. 00:36:08.236 [2024-07-14 10:44:53.027404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.236 [2024-07-14 10:44:53.027435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.236 qpair failed and we were unable to recover it. 00:36:08.236 [2024-07-14 10:44:53.027717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.236 [2024-07-14 10:44:53.027747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.236 qpair failed and we were unable to recover it. 00:36:08.236 [2024-07-14 10:44:53.027941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.236 [2024-07-14 10:44:53.027973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.236 qpair failed and we were unable to recover it. 
00:36:08.236 [2024-07-14 10:44:53.028249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.236 [2024-07-14 10:44:53.028280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.236 qpair failed and we were unable to recover it. 00:36:08.236 [2024-07-14 10:44:53.028485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.236 [2024-07-14 10:44:53.028516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.236 qpair failed and we were unable to recover it. 00:36:08.236 [2024-07-14 10:44:53.028631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.236 [2024-07-14 10:44:53.028662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.236 qpair failed and we were unable to recover it. 00:36:08.236 [2024-07-14 10:44:53.028868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.236 [2024-07-14 10:44:53.028898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.236 qpair failed and we were unable to recover it. 00:36:08.236 [2024-07-14 10:44:53.029168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.236 [2024-07-14 10:44:53.029201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.236 qpair failed and we were unable to recover it. 00:36:08.236 [2024-07-14 10:44:53.029425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.236 [2024-07-14 10:44:53.029457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.236 qpair failed and we were unable to recover it. 00:36:08.236 [2024-07-14 10:44:53.029747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.236 [2024-07-14 10:44:53.029778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.236 qpair failed and we were unable to recover it. 00:36:08.236 [2024-07-14 10:44:53.030033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.236 [2024-07-14 10:44:53.030063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.236 qpair failed and we were unable to recover it. 00:36:08.236 [2024-07-14 10:44:53.030268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.236 [2024-07-14 10:44:53.030300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.236 qpair failed and we were unable to recover it. 00:36:08.236 [2024-07-14 10:44:53.030575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.236 [2024-07-14 10:44:53.030606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.236 qpair failed and we were unable to recover it. 
00:36:08.236 [2024-07-14 10:44:53.030729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.236 [2024-07-14 10:44:53.030765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.236 qpair failed and we were unable to recover it. 00:36:08.236 [2024-07-14 10:44:53.030910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.236 [2024-07-14 10:44:53.030941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.236 qpair failed and we were unable to recover it. 00:36:08.236 [2024-07-14 10:44:53.031123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.236 [2024-07-14 10:44:53.031154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.236 qpair failed and we were unable to recover it. 00:36:08.236 [2024-07-14 10:44:53.031350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.236 [2024-07-14 10:44:53.031381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.236 qpair failed and we were unable to recover it. 00:36:08.237 [2024-07-14 10:44:53.031635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.237 [2024-07-14 10:44:53.031665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.237 qpair failed and we were unable to recover it. 00:36:08.237 [2024-07-14 10:44:53.031916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.237 [2024-07-14 10:44:53.031946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.237 qpair failed and we were unable to recover it. 00:36:08.237 [2024-07-14 10:44:53.032219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.237 [2024-07-14 10:44:53.032261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.237 qpair failed and we were unable to recover it. 00:36:08.237 [2024-07-14 10:44:53.032414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.237 [2024-07-14 10:44:53.032444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.237 qpair failed and we were unable to recover it. 00:36:08.237 [2024-07-14 10:44:53.032646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.237 [2024-07-14 10:44:53.032676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.237 qpair failed and we were unable to recover it. 00:36:08.237 [2024-07-14 10:44:53.032894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.237 [2024-07-14 10:44:53.032925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.237 qpair failed and we were unable to recover it. 
00:36:08.237 [2024-07-14 10:44:53.033130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.237 [2024-07-14 10:44:53.033162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.237 qpair failed and we were unable to recover it. 00:36:08.237 [2024-07-14 10:44:53.033298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.237 [2024-07-14 10:44:53.033329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.237 qpair failed and we were unable to recover it. 00:36:08.237 [2024-07-14 10:44:53.033458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.237 [2024-07-14 10:44:53.033490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.237 qpair failed and we were unable to recover it. 00:36:08.237 [2024-07-14 10:44:53.033677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.237 [2024-07-14 10:44:53.033708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.237 qpair failed and we were unable to recover it. 00:36:08.237 [2024-07-14 10:44:53.033856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.237 [2024-07-14 10:44:53.033887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.237 qpair failed and we were unable to recover it. 00:36:08.237 [2024-07-14 10:44:53.034118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.237 [2024-07-14 10:44:53.034149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.237 qpair failed and we were unable to recover it. 00:36:08.237 [2024-07-14 10:44:53.034291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.237 [2024-07-14 10:44:53.034323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.237 qpair failed and we were unable to recover it. 00:36:08.237 [2024-07-14 10:44:53.034440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.237 [2024-07-14 10:44:53.034470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.237 qpair failed and we were unable to recover it. 00:36:08.237 [2024-07-14 10:44:53.034671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.237 [2024-07-14 10:44:53.034702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.237 qpair failed and we were unable to recover it. 00:36:08.237 [2024-07-14 10:44:53.034891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.237 [2024-07-14 10:44:53.034922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.237 qpair failed and we were unable to recover it. 
00:36:08.237 [2024-07-14 10:44:53.035191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.237 [2024-07-14 10:44:53.035222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.237 qpair failed and we were unable to recover it. 00:36:08.237 [2024-07-14 10:44:53.035366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.237 [2024-07-14 10:44:53.035398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.237 qpair failed and we were unable to recover it. 00:36:08.237 [2024-07-14 10:44:53.035595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.237 [2024-07-14 10:44:53.035626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.237 qpair failed and we were unable to recover it. 00:36:08.237 [2024-07-14 10:44:53.035755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.237 [2024-07-14 10:44:53.035785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.237 qpair failed and we were unable to recover it. 00:36:08.237 [2024-07-14 10:44:53.036024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.237 [2024-07-14 10:44:53.036055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.237 qpair failed and we were unable to recover it. 00:36:08.237 [2024-07-14 10:44:53.036264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.237 [2024-07-14 10:44:53.036295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.237 qpair failed and we were unable to recover it. 00:36:08.237 [2024-07-14 10:44:53.036550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.237 [2024-07-14 10:44:53.036583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.237 qpair failed and we were unable to recover it. 00:36:08.237 [2024-07-14 10:44:53.036776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.237 [2024-07-14 10:44:53.036808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.237 qpair failed and we were unable to recover it. 00:36:08.237 [2024-07-14 10:44:53.037067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.237 [2024-07-14 10:44:53.037098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.237 qpair failed and we were unable to recover it. 00:36:08.237 [2024-07-14 10:44:53.037290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.237 [2024-07-14 10:44:53.037323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.237 qpair failed and we were unable to recover it. 
00:36:08.237 [2024-07-14 10:44:53.037442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.237 [2024-07-14 10:44:53.037472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.237 qpair failed and we were unable to recover it. 00:36:08.237 [2024-07-14 10:44:53.037632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.237 [2024-07-14 10:44:53.037663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.237 qpair failed and we were unable to recover it. 00:36:08.237 [2024-07-14 10:44:53.037864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.237 [2024-07-14 10:44:53.037894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.237 qpair failed and we were unable to recover it. 00:36:08.237 [2024-07-14 10:44:53.038074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.237 [2024-07-14 10:44:53.038105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.237 qpair failed and we were unable to recover it. 00:36:08.237 [2024-07-14 10:44:53.038311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.237 [2024-07-14 10:44:53.038343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.237 qpair failed and we were unable to recover it. 00:36:08.237 [2024-07-14 10:44:53.038475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.237 [2024-07-14 10:44:53.038505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.237 qpair failed and we were unable to recover it. 00:36:08.237 [2024-07-14 10:44:53.038628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.237 [2024-07-14 10:44:53.038659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.237 qpair failed and we were unable to recover it. 00:36:08.237 [2024-07-14 10:44:53.038848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.237 [2024-07-14 10:44:53.038878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.237 qpair failed and we were unable to recover it. 00:36:08.237 [2024-07-14 10:44:53.039156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.237 [2024-07-14 10:44:53.039189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.237 qpair failed and we were unable to recover it. 00:36:08.237 [2024-07-14 10:44:53.039447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.237 [2024-07-14 10:44:53.039479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.237 qpair failed and we were unable to recover it. 
00:36:08.237 [2024-07-14 10:44:53.039733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.237 [2024-07-14 10:44:53.039769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.237 qpair failed and we were unable to recover it. 00:36:08.237 [2024-07-14 10:44:53.039951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.237 [2024-07-14 10:44:53.039981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.237 qpair failed and we were unable to recover it. 00:36:08.237 [2024-07-14 10:44:53.040136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.237 [2024-07-14 10:44:53.040166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.237 qpair failed and we were unable to recover it. 00:36:08.237 [2024-07-14 10:44:53.040311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.237 [2024-07-14 10:44:53.040342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.237 qpair failed and we were unable to recover it. 00:36:08.237 EAL: No free 2048 kB hugepages reported on node 1 00:36:08.237 [2024-07-14 10:44:53.040537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.237 [2024-07-14 10:44:53.040569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.238 qpair failed and we were unable to recover it. 00:36:08.238 [2024-07-14 10:44:53.040690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.238 [2024-07-14 10:44:53.040722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.238 qpair failed and we were unable to recover it. 00:36:08.238 [2024-07-14 10:44:53.040857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.238 [2024-07-14 10:44:53.040887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.238 qpair failed and we were unable to recover it. 00:36:08.238 [2024-07-14 10:44:53.041007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.238 [2024-07-14 10:44:53.041037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.238 qpair failed and we were unable to recover it. 00:36:08.238 [2024-07-14 10:44:53.041287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.238 [2024-07-14 10:44:53.041318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.238 qpair failed and we were unable to recover it. 00:36:08.238 [2024-07-14 10:44:53.041522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.238 [2024-07-14 10:44:53.041552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.238 qpair failed and we were unable to recover it. 
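The "EAL: No free 2048 kB hugepages reported on node 1" record interleaved above is DPDK EAL initialization output: no free default-size (2 MB) hugepages were found on NUMA node 1. By itself this is not necessarily fatal, since EAL can still allocate from other nodes or page sizes, but hugepage-backed allocations do fail when none are reserved anywhere. A small hedged C illustration (not DPDK code) of how a hugepage-backed mapping fails in that case:

    /* Illustration only: mapping with MAP_HUGETLB fails (typically ENOMEM)
     * when no free hugepages of the default size are reserved. */
    #define _GNU_SOURCE          /* for MAP_ANONYMOUS / MAP_HUGETLB on some libcs */
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void)
    {
        size_t len = 2UL * 1024 * 1024;   /* one 2048 kB hugepage */
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);

        if (p == MAP_FAILED)
            printf("mmap(MAP_HUGETLB) failed: %s\n", strerror(errno));
        else
            munmap(p, len);
        return 0;
    }

On a Linux host with zero reserved 2 MB hugepages this typically reports "Cannot allocate memory"; after reserving pages (for example via /proc/sys/vm/nr_hugepages) the same mapping succeeds, which is why the test environment pre-allocates hugepages before running SPDK.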
00:36:08.238 [2024-07-14 10:44:53.041832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.238 [2024-07-14 10:44:53.041862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.238 qpair failed and we were unable to recover it. 00:36:08.238 [2024-07-14 10:44:53.042057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.238 [2024-07-14 10:44:53.042087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.238 qpair failed and we were unable to recover it. 00:36:08.238 [2024-07-14 10:44:53.042300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.238 [2024-07-14 10:44:53.042331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.238 qpair failed and we were unable to recover it. 00:36:08.238 [2024-07-14 10:44:53.042590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.238 [2024-07-14 10:44:53.042621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.238 qpair failed and we were unable to recover it. 00:36:08.238 [2024-07-14 10:44:53.042740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.238 [2024-07-14 10:44:53.042770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.238 qpair failed and we were unable to recover it. 00:36:08.238 [2024-07-14 10:44:53.043077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.238 [2024-07-14 10:44:53.043107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.238 qpair failed and we were unable to recover it. 00:36:08.238 [2024-07-14 10:44:53.043381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.238 [2024-07-14 10:44:53.043412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.238 qpair failed and we were unable to recover it. 00:36:08.238 [2024-07-14 10:44:53.043595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.238 [2024-07-14 10:44:53.043625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.238 qpair failed and we were unable to recover it. 00:36:08.238 [2024-07-14 10:44:53.043769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.238 [2024-07-14 10:44:53.043800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.238 qpair failed and we were unable to recover it. 00:36:08.238 [2024-07-14 10:44:53.043996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.238 [2024-07-14 10:44:53.044027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.238 qpair failed and we were unable to recover it. 
00:36:08.238 [2024-07-14 10:44:53.044294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.238 [2024-07-14 10:44:53.044325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.238 qpair failed and we were unable to recover it. 00:36:08.238 [2024-07-14 10:44:53.044590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.238 [2024-07-14 10:44:53.044620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.238 qpair failed and we were unable to recover it. 00:36:08.238 [2024-07-14 10:44:53.044876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.238 [2024-07-14 10:44:53.044906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.238 qpair failed and we were unable to recover it. 00:36:08.238 [2024-07-14 10:44:53.045143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.238 [2024-07-14 10:44:53.045174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.238 qpair failed and we were unable to recover it. 00:36:08.238 [2024-07-14 10:44:53.045313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.238 [2024-07-14 10:44:53.045345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.238 qpair failed and we were unable to recover it. 00:36:08.238 [2024-07-14 10:44:53.045453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.238 [2024-07-14 10:44:53.045482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.238 qpair failed and we were unable to recover it. 00:36:08.238 [2024-07-14 10:44:53.045600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.238 [2024-07-14 10:44:53.045630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.238 qpair failed and we were unable to recover it. 00:36:08.238 [2024-07-14 10:44:53.045842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.238 [2024-07-14 10:44:53.045873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.238 qpair failed and we were unable to recover it. 00:36:08.238 [2024-07-14 10:44:53.046050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.238 [2024-07-14 10:44:53.046080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.238 qpair failed and we were unable to recover it. 00:36:08.238 [2024-07-14 10:44:53.046380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.238 [2024-07-14 10:44:53.046412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.238 qpair failed and we were unable to recover it. 
00:36:08.238 [2024-07-14 10:44:53.046564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.238 [2024-07-14 10:44:53.046593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.238 qpair failed and we were unable to recover it. 00:36:08.238 [2024-07-14 10:44:53.046772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.238 [2024-07-14 10:44:53.046803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.238 qpair failed and we were unable to recover it. 00:36:08.238 [2024-07-14 10:44:53.047000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.238 [2024-07-14 10:44:53.047030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.238 qpair failed and we were unable to recover it. 00:36:08.238 [2024-07-14 10:44:53.047232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.238 [2024-07-14 10:44:53.047263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.238 qpair failed and we were unable to recover it. 00:36:08.238 [2024-07-14 10:44:53.047445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.238 [2024-07-14 10:44:53.047474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.238 qpair failed and we were unable to recover it. 00:36:08.238 [2024-07-14 10:44:53.047603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.238 [2024-07-14 10:44:53.047633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.238 qpair failed and we were unable to recover it. 00:36:08.238 [2024-07-14 10:44:53.047829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.238 [2024-07-14 10:44:53.047859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.238 qpair failed and we were unable to recover it. 00:36:08.238 [2024-07-14 10:44:53.048050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.238 [2024-07-14 10:44:53.048080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.238 qpair failed and we were unable to recover it. 00:36:08.238 [2024-07-14 10:44:53.048210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.238 [2024-07-14 10:44:53.048248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.238 qpair failed and we were unable to recover it. 00:36:08.238 [2024-07-14 10:44:53.048458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.238 [2024-07-14 10:44:53.048487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.238 qpair failed and we were unable to recover it. 
00:36:08.238 [2024-07-14 10:44:53.048666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.238 [2024-07-14 10:44:53.048701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.238 qpair failed and we were unable to recover it. 00:36:08.238 [2024-07-14 10:44:53.048914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.238 [2024-07-14 10:44:53.048944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.238 qpair failed and we were unable to recover it. 00:36:08.238 [2024-07-14 10:44:53.049079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.238 [2024-07-14 10:44:53.049109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.238 qpair failed and we were unable to recover it. 00:36:08.238 [2024-07-14 10:44:53.049273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.238 [2024-07-14 10:44:53.049304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.238 qpair failed and we were unable to recover it. 00:36:08.238 [2024-07-14 10:44:53.049508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.238 [2024-07-14 10:44:53.049538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.238 qpair failed and we were unable to recover it. 00:36:08.238 [2024-07-14 10:44:53.049792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.239 [2024-07-14 10:44:53.049822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.239 qpair failed and we were unable to recover it. 00:36:08.239 [2024-07-14 10:44:53.049954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.239 [2024-07-14 10:44:53.049983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.239 qpair failed and we were unable to recover it. 00:36:08.239 [2024-07-14 10:44:53.050166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.239 [2024-07-14 10:44:53.050196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.239 qpair failed and we were unable to recover it. 00:36:08.239 [2024-07-14 10:44:53.050354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.239 [2024-07-14 10:44:53.050385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.239 qpair failed and we were unable to recover it. 00:36:08.239 [2024-07-14 10:44:53.050517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.239 [2024-07-14 10:44:53.050546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.239 qpair failed and we were unable to recover it. 
00:36:08.239 [2024-07-14 10:44:53.050733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.239 [2024-07-14 10:44:53.050762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.239 qpair failed and we were unable to recover it. 00:36:08.239 [2024-07-14 10:44:53.050888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.239 [2024-07-14 10:44:53.050918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.239 qpair failed and we were unable to recover it. 00:36:08.239 [2024-07-14 10:44:53.051090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.239 [2024-07-14 10:44:53.051119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.239 qpair failed and we were unable to recover it. 00:36:08.239 [2024-07-14 10:44:53.051371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.239 [2024-07-14 10:44:53.051402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.239 qpair failed and we were unable to recover it. 00:36:08.239 [2024-07-14 10:44:53.051600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.239 [2024-07-14 10:44:53.051630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.239 qpair failed and we were unable to recover it. 00:36:08.239 [2024-07-14 10:44:53.051774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.239 [2024-07-14 10:44:53.051803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.239 qpair failed and we were unable to recover it. 00:36:08.239 [2024-07-14 10:44:53.051914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.239 [2024-07-14 10:44:53.051945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.239 qpair failed and we were unable to recover it. 00:36:08.239 [2024-07-14 10:44:53.052132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.239 [2024-07-14 10:44:53.052162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.239 qpair failed and we were unable to recover it. 00:36:08.239 [2024-07-14 10:44:53.052292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.239 [2024-07-14 10:44:53.052340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.239 qpair failed and we were unable to recover it. 00:36:08.239 [2024-07-14 10:44:53.052463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.239 [2024-07-14 10:44:53.052493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.239 qpair failed and we were unable to recover it. 
00:36:08.239 [2024-07-14 10:44:53.052643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.239 [2024-07-14 10:44:53.052673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.239 qpair failed and we were unable to recover it. 00:36:08.239 [2024-07-14 10:44:53.052955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.239 [2024-07-14 10:44:53.052984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.239 qpair failed and we were unable to recover it. 00:36:08.239 [2024-07-14 10:44:53.053183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.239 [2024-07-14 10:44:53.053213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.239 qpair failed and we were unable to recover it. 00:36:08.239 [2024-07-14 10:44:53.053497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.239 [2024-07-14 10:44:53.053527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.239 qpair failed and we were unable to recover it. 00:36:08.239 [2024-07-14 10:44:53.053712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.239 [2024-07-14 10:44:53.053742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.239 qpair failed and we were unable to recover it. 00:36:08.239 [2024-07-14 10:44:53.053868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.239 [2024-07-14 10:44:53.053898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.239 qpair failed and we were unable to recover it. 00:36:08.239 [2024-07-14 10:44:53.054104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.239 [2024-07-14 10:44:53.054134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.239 qpair failed and we were unable to recover it. 00:36:08.239 [2024-07-14 10:44:53.054328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.239 [2024-07-14 10:44:53.054358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.239 qpair failed and we were unable to recover it. 00:36:08.239 [2024-07-14 10:44:53.054555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.239 [2024-07-14 10:44:53.054584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.239 qpair failed and we were unable to recover it. 00:36:08.239 [2024-07-14 10:44:53.054774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.239 [2024-07-14 10:44:53.054804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.239 qpair failed and we were unable to recover it. 
00:36:08.239 [2024-07-14 10:44:53.054993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.239 [2024-07-14 10:44:53.055023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.239 qpair failed and we were unable to recover it. 00:36:08.239 [2024-07-14 10:44:53.055271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.239 [2024-07-14 10:44:53.055302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.239 qpair failed and we were unable to recover it. 00:36:08.239 [2024-07-14 10:44:53.055495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.239 [2024-07-14 10:44:53.055525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.239 qpair failed and we were unable to recover it. 00:36:08.239 [2024-07-14 10:44:53.055774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.239 [2024-07-14 10:44:53.055804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.239 qpair failed and we were unable to recover it. 00:36:08.239 [2024-07-14 10:44:53.056012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.239 [2024-07-14 10:44:53.056042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.239 qpair failed and we were unable to recover it. 00:36:08.239 [2024-07-14 10:44:53.056244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.239 [2024-07-14 10:44:53.056275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.239 qpair failed and we were unable to recover it. 00:36:08.239 [2024-07-14 10:44:53.056548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.239 [2024-07-14 10:44:53.056578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.239 qpair failed and we were unable to recover it. 00:36:08.239 [2024-07-14 10:44:53.056766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.239 [2024-07-14 10:44:53.056796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.239 qpair failed and we were unable to recover it. 00:36:08.239 [2024-07-14 10:44:53.056941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.239 [2024-07-14 10:44:53.056971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.239 qpair failed and we were unable to recover it. 00:36:08.239 [2024-07-14 10:44:53.057090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.239 [2024-07-14 10:44:53.057120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.239 qpair failed and we were unable to recover it. 
00:36:08.239 [2024-07-14 10:44:53.057236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.239 [2024-07-14 10:44:53.057273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.239 qpair failed and we were unable to recover it. 00:36:08.240 [2024-07-14 10:44:53.057493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.240 [2024-07-14 10:44:53.057523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.240 qpair failed and we were unable to recover it. 00:36:08.240 [2024-07-14 10:44:53.057710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.240 [2024-07-14 10:44:53.057740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.240 qpair failed and we were unable to recover it. 00:36:08.240 [2024-07-14 10:44:53.058032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.240 [2024-07-14 10:44:53.058062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.240 qpair failed and we were unable to recover it. 00:36:08.240 [2024-07-14 10:44:53.058188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.240 [2024-07-14 10:44:53.058218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.240 qpair failed and we were unable to recover it. 00:36:08.240 [2024-07-14 10:44:53.058365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.240 [2024-07-14 10:44:53.058396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.240 qpair failed and we were unable to recover it. 00:36:08.240 [2024-07-14 10:44:53.058647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.240 [2024-07-14 10:44:53.058677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.240 qpair failed and we were unable to recover it. 00:36:08.240 [2024-07-14 10:44:53.058876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.240 [2024-07-14 10:44:53.058906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.240 qpair failed and we were unable to recover it. 00:36:08.240 [2024-07-14 10:44:53.059093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.240 [2024-07-14 10:44:53.059131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.240 qpair failed and we were unable to recover it. 00:36:08.240 [2024-07-14 10:44:53.059354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.240 [2024-07-14 10:44:53.059385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.240 qpair failed and we were unable to recover it. 
00:36:08.240 [2024-07-14 10:44:53.059516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.240 [2024-07-14 10:44:53.059547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.240 qpair failed and we were unable to recover it. 00:36:08.240 [2024-07-14 10:44:53.059728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.240 [2024-07-14 10:44:53.059758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.240 qpair failed and we were unable to recover it. 00:36:08.240 [2024-07-14 10:44:53.059955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.240 [2024-07-14 10:44:53.059985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.240 qpair failed and we were unable to recover it. 00:36:08.240 [2024-07-14 10:44:53.060176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.240 [2024-07-14 10:44:53.060207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.240 qpair failed and we were unable to recover it. 00:36:08.240 [2024-07-14 10:44:53.060343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.240 [2024-07-14 10:44:53.060373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.240 qpair failed and we were unable to recover it. 00:36:08.240 [2024-07-14 10:44:53.060617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.240 [2024-07-14 10:44:53.060647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.240 qpair failed and we were unable to recover it. 00:36:08.240 [2024-07-14 10:44:53.060850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.240 [2024-07-14 10:44:53.060880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.240 qpair failed and we were unable to recover it. 00:36:08.240 [2024-07-14 10:44:53.061092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.240 [2024-07-14 10:44:53.061122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.240 qpair failed and we were unable to recover it. 00:36:08.240 [2024-07-14 10:44:53.061325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.240 [2024-07-14 10:44:53.061357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.240 qpair failed and we were unable to recover it. 00:36:08.240 [2024-07-14 10:44:53.061586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.240 [2024-07-14 10:44:53.061616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.240 qpair failed and we were unable to recover it. 
00:36:08.240 [2024-07-14 10:44:53.061729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.240 [2024-07-14 10:44:53.061759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.240 qpair failed and we were unable to recover it. 00:36:08.240 [2024-07-14 10:44:53.061943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.240 [2024-07-14 10:44:53.061972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.240 qpair failed and we were unable to recover it. 00:36:08.240 [2024-07-14 10:44:53.062103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.240 [2024-07-14 10:44:53.062133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.240 qpair failed and we were unable to recover it. 00:36:08.240 [2024-07-14 10:44:53.062327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.240 [2024-07-14 10:44:53.062358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.240 qpair failed and we were unable to recover it. 00:36:08.240 [2024-07-14 10:44:53.062500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.240 [2024-07-14 10:44:53.062530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.240 qpair failed and we were unable to recover it. 00:36:08.240 [2024-07-14 10:44:53.062714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.240 [2024-07-14 10:44:53.062744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.240 qpair failed and we were unable to recover it. 00:36:08.240 [2024-07-14 10:44:53.063035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.240 [2024-07-14 10:44:53.063065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.240 qpair failed and we were unable to recover it. 00:36:08.240 [2024-07-14 10:44:53.063316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.240 [2024-07-14 10:44:53.063347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.240 qpair failed and we were unable to recover it. 00:36:08.240 [2024-07-14 10:44:53.063666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.240 [2024-07-14 10:44:53.063697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.240 qpair failed and we were unable to recover it. 00:36:08.240 [2024-07-14 10:44:53.063966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.240 [2024-07-14 10:44:53.063995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.240 qpair failed and we were unable to recover it. 
00:36:08.240 [2024-07-14 10:44:53.064178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.240 [2024-07-14 10:44:53.064207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.240 qpair failed and we were unable to recover it. 00:36:08.240 [2024-07-14 10:44:53.064421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.240 [2024-07-14 10:44:53.064451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.240 qpair failed and we were unable to recover it. 00:36:08.240 [2024-07-14 10:44:53.064646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.240 [2024-07-14 10:44:53.064676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.240 qpair failed and we were unable to recover it. 00:36:08.240 [2024-07-14 10:44:53.064920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.240 [2024-07-14 10:44:53.064950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.240 qpair failed and we were unable to recover it. 00:36:08.240 [2024-07-14 10:44:53.065138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.240 [2024-07-14 10:44:53.065168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.240 qpair failed and we were unable to recover it. 00:36:08.240 [2024-07-14 10:44:53.065373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.240 [2024-07-14 10:44:53.065405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.240 qpair failed and we were unable to recover it. 00:36:08.240 [2024-07-14 10:44:53.065668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.240 [2024-07-14 10:44:53.065697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.240 qpair failed and we were unable to recover it. 00:36:08.240 [2024-07-14 10:44:53.065847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.240 [2024-07-14 10:44:53.065877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.240 qpair failed and we were unable to recover it. 00:36:08.240 [2024-07-14 10:44:53.066004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.240 [2024-07-14 10:44:53.066034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.240 qpair failed and we were unable to recover it. 00:36:08.240 [2024-07-14 10:44:53.066258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.240 [2024-07-14 10:44:53.066290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.240 qpair failed and we were unable to recover it. 
00:36:08.240 [2024-07-14 10:44:53.066542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.240 [2024-07-14 10:44:53.066579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.240 qpair failed and we were unable to recover it. 00:36:08.240 [2024-07-14 10:44:53.066768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.240 [2024-07-14 10:44:53.066798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.240 qpair failed and we were unable to recover it. 00:36:08.241 [2024-07-14 10:44:53.066977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.241 [2024-07-14 10:44:53.067007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.241 qpair failed and we were unable to recover it. 00:36:08.241 [2024-07-14 10:44:53.067187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.241 [2024-07-14 10:44:53.067217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.241 qpair failed and we were unable to recover it. 00:36:08.241 [2024-07-14 10:44:53.067434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.241 [2024-07-14 10:44:53.067465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.241 qpair failed and we were unable to recover it. 00:36:08.241 [2024-07-14 10:44:53.067639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.241 [2024-07-14 10:44:53.067669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.241 qpair failed and we were unable to recover it. 00:36:08.241 [2024-07-14 10:44:53.067923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.241 [2024-07-14 10:44:53.067953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.241 qpair failed and we were unable to recover it. 00:36:08.241 [2024-07-14 10:44:53.068154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.241 [2024-07-14 10:44:53.068184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.241 qpair failed and we were unable to recover it. 00:36:08.241 [2024-07-14 10:44:53.068376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.241 [2024-07-14 10:44:53.068406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.241 qpair failed and we were unable to recover it. 00:36:08.241 [2024-07-14 10:44:53.068613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.241 [2024-07-14 10:44:53.068643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.241 qpair failed and we were unable to recover it. 
00:36:08.241 [2024-07-14 10:44:53.068838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.241 [2024-07-14 10:44:53.068868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.241 qpair failed and we were unable to recover it. 00:36:08.241 [2024-07-14 10:44:53.069168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.241 [2024-07-14 10:44:53.069198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.241 qpair failed and we were unable to recover it. 00:36:08.241 [2024-07-14 10:44:53.069472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.241 [2024-07-14 10:44:53.069530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.241 qpair failed and we were unable to recover it. 00:36:08.241 [2024-07-14 10:44:53.069824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.241 [2024-07-14 10:44:53.069855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.241 qpair failed and we were unable to recover it. 00:36:08.241 [2024-07-14 10:44:53.070050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.241 [2024-07-14 10:44:53.070081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.241 qpair failed and we were unable to recover it. 00:36:08.241 [2024-07-14 10:44:53.070205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.241 [2024-07-14 10:44:53.070246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.241 qpair failed and we were unable to recover it. 00:36:08.241 [2024-07-14 10:44:53.070525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.241 [2024-07-14 10:44:53.070555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.241 qpair failed and we were unable to recover it. 00:36:08.241 [2024-07-14 10:44:53.070864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.241 [2024-07-14 10:44:53.070894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.241 qpair failed and we were unable to recover it. 00:36:08.241 [2024-07-14 10:44:53.071139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.241 [2024-07-14 10:44:53.071169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.241 qpair failed and we were unable to recover it. 00:36:08.241 [2024-07-14 10:44:53.071372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.241 [2024-07-14 10:44:53.071403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.241 qpair failed and we were unable to recover it. 
00:36:08.241 [2024-07-14 10:44:53.071684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.241 [2024-07-14 10:44:53.071715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.241 qpair failed and we were unable to recover it. 00:36:08.241 [2024-07-14 10:44:53.071913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.241 [2024-07-14 10:44:53.071943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.241 qpair failed and we were unable to recover it. 00:36:08.241 [2024-07-14 10:44:53.072091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.241 [2024-07-14 10:44:53.072120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.241 qpair failed and we were unable to recover it. 00:36:08.241 [2024-07-14 10:44:53.072247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.241 [2024-07-14 10:44:53.072279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.241 qpair failed and we were unable to recover it. 00:36:08.241 [2024-07-14 10:44:53.072478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.241 [2024-07-14 10:44:53.072508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.241 qpair failed and we were unable to recover it. 00:36:08.241 [2024-07-14 10:44:53.072738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.241 [2024-07-14 10:44:53.072767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.241 qpair failed and we were unable to recover it. 00:36:08.241 [2024-07-14 10:44:53.073063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.241 [2024-07-14 10:44:53.073093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.241 qpair failed and we were unable to recover it. 00:36:08.241 [2024-07-14 10:44:53.073239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.241 [2024-07-14 10:44:53.073270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.241 qpair failed and we were unable to recover it. 00:36:08.241 [2024-07-14 10:44:53.073476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.241 [2024-07-14 10:44:53.073506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.241 qpair failed and we were unable to recover it. 00:36:08.241 [2024-07-14 10:44:53.073713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.241 [2024-07-14 10:44:53.073743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.241 qpair failed and we were unable to recover it. 
00:36:08.241 [2024-07-14 10:44:53.073922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.241 [2024-07-14 10:44:53.073952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.241 qpair failed and we were unable to recover it. 00:36:08.241 [2024-07-14 10:44:53.074194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.241 [2024-07-14 10:44:53.074233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.241 qpair failed and we were unable to recover it. 00:36:08.241 [2024-07-14 10:44:53.074435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.241 [2024-07-14 10:44:53.074466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.241 qpair failed and we were unable to recover it. 00:36:08.241 [2024-07-14 10:44:53.074670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.241 [2024-07-14 10:44:53.074700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.241 qpair failed and we were unable to recover it. 00:36:08.241 [2024-07-14 10:44:53.074943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.241 [2024-07-14 10:44:53.074972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.241 qpair failed and we were unable to recover it. 00:36:08.241 [2024-07-14 10:44:53.075222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.241 [2024-07-14 10:44:53.075261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.241 qpair failed and we were unable to recover it. 00:36:08.241 [2024-07-14 10:44:53.075439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.241 [2024-07-14 10:44:53.075468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.241 qpair failed and we were unable to recover it. 00:36:08.241 [2024-07-14 10:44:53.075751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.241 [2024-07-14 10:44:53.075781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.241 qpair failed and we were unable to recover it. 00:36:08.241 [2024-07-14 10:44:53.076048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.241 [2024-07-14 10:44:53.076077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.241 qpair failed and we were unable to recover it. 00:36:08.241 [2024-07-14 10:44:53.076258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.241 [2024-07-14 10:44:53.076289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.241 qpair failed and we were unable to recover it. 
00:36:08.241 [2024-07-14 10:44:53.076412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.241 [2024-07-14 10:44:53.076448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.241 qpair failed and we were unable to recover it. 00:36:08.241 [2024-07-14 10:44:53.076721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.241 [2024-07-14 10:44:53.076751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.241 qpair failed and we were unable to recover it. 00:36:08.241 [2024-07-14 10:44:53.077026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.241 [2024-07-14 10:44:53.077056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.241 qpair failed and we were unable to recover it. 00:36:08.242 [2024-07-14 10:44:53.077302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.242 [2024-07-14 10:44:53.077333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.242 qpair failed and we were unable to recover it. 00:36:08.242 [2024-07-14 10:44:53.077573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.242 [2024-07-14 10:44:53.077603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.242 qpair failed and we were unable to recover it. 00:36:08.242 [2024-07-14 10:44:53.077848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.242 [2024-07-14 10:44:53.077878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.242 qpair failed and we were unable to recover it. 00:36:08.242 [2024-07-14 10:44:53.078198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.242 [2024-07-14 10:44:53.078236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.242 qpair failed and we were unable to recover it. 00:36:08.242 [2024-07-14 10:44:53.078454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.242 [2024-07-14 10:44:53.078483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.242 qpair failed and we were unable to recover it. 00:36:08.242 [2024-07-14 10:44:53.078684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.242 [2024-07-14 10:44:53.078713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.242 qpair failed and we were unable to recover it. 00:36:08.242 [2024-07-14 10:44:53.078950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.242 [2024-07-14 10:44:53.078979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.242 qpair failed and we were unable to recover it. 
00:36:08.242 [2024-07-14 10:44:53.079121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.242 [2024-07-14 10:44:53.079151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.242 qpair failed and we were unable to recover it. 00:36:08.242 [2024-07-14 10:44:53.079360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.242 [2024-07-14 10:44:53.079391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.242 qpair failed and we were unable to recover it. 00:36:08.242 [2024-07-14 10:44:53.079531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.242 [2024-07-14 10:44:53.079560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.242 qpair failed and we were unable to recover it. 00:36:08.242 [2024-07-14 10:44:53.079808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.242 [2024-07-14 10:44:53.079838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.242 qpair failed and we were unable to recover it. 00:36:08.242 [2024-07-14 10:44:53.079970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.242 [2024-07-14 10:44:53.080000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.242 qpair failed and we were unable to recover it. 00:36:08.242 [2024-07-14 10:44:53.080199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.242 [2024-07-14 10:44:53.080239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.242 qpair failed and we were unable to recover it. 00:36:08.242 [2024-07-14 10:44:53.080419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.242 [2024-07-14 10:44:53.080449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.242 qpair failed and we were unable to recover it. 00:36:08.242 [2024-07-14 10:44:53.080628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.242 [2024-07-14 10:44:53.080659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.242 qpair failed and we were unable to recover it. 00:36:08.242 [2024-07-14 10:44:53.080774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.242 [2024-07-14 10:44:53.080804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.242 qpair failed and we were unable to recover it. 00:36:08.242 [2024-07-14 10:44:53.081017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.242 [2024-07-14 10:44:53.081047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.242 qpair failed and we were unable to recover it. 
00:36:08.242 [2024-07-14 10:44:53.081217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.242 [2024-07-14 10:44:53.081261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.242 qpair failed and we were unable to recover it. 00:36:08.242 [2024-07-14 10:44:53.081370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.242 [2024-07-14 10:44:53.081400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.242 qpair failed and we were unable to recover it. 00:36:08.242 [2024-07-14 10:44:53.081599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.242 [2024-07-14 10:44:53.081630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.242 qpair failed and we were unable to recover it. 00:36:08.242 [2024-07-14 10:44:53.081875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.242 [2024-07-14 10:44:53.081905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.242 qpair failed and we were unable to recover it. 00:36:08.242 [2024-07-14 10:44:53.082085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.242 [2024-07-14 10:44:53.082114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.242 qpair failed and we were unable to recover it. 00:36:08.242 [2024-07-14 10:44:53.082247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.242 [2024-07-14 10:44:53.082278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.242 qpair failed and we were unable to recover it. 00:36:08.242 [2024-07-14 10:44:53.082488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.242 [2024-07-14 10:44:53.082518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.242 qpair failed and we were unable to recover it. 00:36:08.242 [2024-07-14 10:44:53.082792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.242 [2024-07-14 10:44:53.082822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.242 qpair failed and we were unable to recover it. 00:36:08.242 [2024-07-14 10:44:53.083022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.242 [2024-07-14 10:44:53.083052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.242 qpair failed and we were unable to recover it. 00:36:08.242 [2024-07-14 10:44:53.083241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.242 [2024-07-14 10:44:53.083273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.242 qpair failed and we were unable to recover it. 
00:36:08.242 [2024-07-14 10:44:53.083405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.242 [2024-07-14 10:44:53.083435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.242 qpair failed and we were unable to recover it. 00:36:08.242 [2024-07-14 10:44:53.083628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.242 [2024-07-14 10:44:53.083658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.242 qpair failed and we were unable to recover it. 00:36:08.242 [2024-07-14 10:44:53.083766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.242 [2024-07-14 10:44:53.083796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.242 qpair failed and we were unable to recover it. 00:36:08.242 [2024-07-14 10:44:53.083982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.242 [2024-07-14 10:44:53.084012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.242 qpair failed and we were unable to recover it. 00:36:08.242 [2024-07-14 10:44:53.084268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.242 [2024-07-14 10:44:53.084299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.242 qpair failed and we were unable to recover it. 00:36:08.242 [2024-07-14 10:44:53.084484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.242 [2024-07-14 10:44:53.084514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.242 qpair failed and we were unable to recover it. 00:36:08.242 [2024-07-14 10:44:53.084707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.242 [2024-07-14 10:44:53.084738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.242 qpair failed and we were unable to recover it. 00:36:08.242 [2024-07-14 10:44:53.084947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.242 [2024-07-14 10:44:53.084977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.242 qpair failed and we were unable to recover it. 00:36:08.242 [2024-07-14 10:44:53.085235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.242 [2024-07-14 10:44:53.085266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.242 qpair failed and we were unable to recover it. 00:36:08.242 [2024-07-14 10:44:53.085462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.242 [2024-07-14 10:44:53.085492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.242 qpair failed and we were unable to recover it. 
00:36:08.242 [2024-07-14 10:44:53.085676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.242 [2024-07-14 10:44:53.085712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.242 qpair failed and we were unable to recover it. 00:36:08.242 [2024-07-14 10:44:53.085845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.242 [2024-07-14 10:44:53.085875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.242 qpair failed and we were unable to recover it. 00:36:08.242 [2024-07-14 10:44:53.086064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.242 [2024-07-14 10:44:53.086093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.242 qpair failed and we were unable to recover it. 00:36:08.242 [2024-07-14 10:44:53.086237] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:08.242 [2024-07-14 10:44:53.086342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.243 [2024-07-14 10:44:53.086373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.243 qpair failed and we were unable to recover it. 00:36:08.243 [2024-07-14 10:44:53.086493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.243 [2024-07-14 10:44:53.086523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.243 qpair failed and we were unable to recover it. 00:36:08.243 [2024-07-14 10:44:53.086710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.243 [2024-07-14 10:44:53.086740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.243 qpair failed and we were unable to recover it. 00:36:08.243 [2024-07-14 10:44:53.086855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.243 [2024-07-14 10:44:53.086885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.243 qpair failed and we were unable to recover it. 00:36:08.243 [2024-07-14 10:44:53.087127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.243 [2024-07-14 10:44:53.087157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.243 qpair failed and we were unable to recover it. 00:36:08.243 [2024-07-14 10:44:53.087350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.243 [2024-07-14 10:44:53.087380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.243 qpair failed and we were unable to recover it. 
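The repeated "connect() failed, errno = 111" records above correspond to ECONNREFUSED on Linux: nothing was accepting TCP connections at 10.0.0.2:4420 (the conventional NVMe/TCP port) while the host kept creating qpairs and retrying. The interleaved "spdk_app_start: *NOTICE*: Total cores available: 4" line is output from an SPDK application starting at the same time, plausibly the target being (re)started by the test while the initiator retries; that reading is an inference from the log, not something it states directly. A minimal sketch at the end of this section illustrates how a refused TCP connect surfaces as errno 111.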
00:36:08.243 [2024-07-14 10:44:53.087653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.243 [2024-07-14 10:44:53.087684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.243 qpair failed and we were unable to recover it. 00:36:08.243 [2024-07-14 10:44:53.087870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.243 [2024-07-14 10:44:53.087900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.243 qpair failed and we were unable to recover it. 00:36:08.243 [2024-07-14 10:44:53.088184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.243 [2024-07-14 10:44:53.088214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.243 qpair failed and we were unable to recover it. 00:36:08.243 [2024-07-14 10:44:53.088470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.243 [2024-07-14 10:44:53.088501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.243 qpair failed and we were unable to recover it. 00:36:08.243 [2024-07-14 10:44:53.088684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.243 [2024-07-14 10:44:53.088720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.243 qpair failed and we were unable to recover it. 00:36:08.243 [2024-07-14 10:44:53.089001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.243 [2024-07-14 10:44:53.089032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.243 qpair failed and we were unable to recover it. 00:36:08.243 [2024-07-14 10:44:53.089266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.243 [2024-07-14 10:44:53.089298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.243 qpair failed and we were unable to recover it. 00:36:08.243 [2024-07-14 10:44:53.089437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.243 [2024-07-14 10:44:53.089466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.243 qpair failed and we were unable to recover it. 00:36:08.243 [2024-07-14 10:44:53.089593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.243 [2024-07-14 10:44:53.089623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.243 qpair failed and we were unable to recover it. 00:36:08.243 [2024-07-14 10:44:53.089743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.243 [2024-07-14 10:44:53.089773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.243 qpair failed and we were unable to recover it. 
00:36:08.243 [2024-07-14 10:44:53.089955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.243 [2024-07-14 10:44:53.089985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.243 qpair failed and we were unable to recover it. 00:36:08.243 [2024-07-14 10:44:53.090178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.243 [2024-07-14 10:44:53.090208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.243 qpair failed and we were unable to recover it. 00:36:08.243 [2024-07-14 10:44:53.090468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.243 [2024-07-14 10:44:53.090499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.243 qpair failed and we were unable to recover it. 00:36:08.243 [2024-07-14 10:44:53.090765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.243 [2024-07-14 10:44:53.090795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.243 qpair failed and we were unable to recover it. 00:36:08.243 [2024-07-14 10:44:53.090990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.243 [2024-07-14 10:44:53.091020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.243 qpair failed and we were unable to recover it. 00:36:08.243 [2024-07-14 10:44:53.091212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.243 [2024-07-14 10:44:53.091252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.243 qpair failed and we were unable to recover it. 00:36:08.243 [2024-07-14 10:44:53.091447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.243 [2024-07-14 10:44:53.091478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.243 qpair failed and we were unable to recover it. 00:36:08.243 [2024-07-14 10:44:53.091744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.243 [2024-07-14 10:44:53.091774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.243 qpair failed and we were unable to recover it. 00:36:08.243 [2024-07-14 10:44:53.091966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.243 [2024-07-14 10:44:53.091997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.243 qpair failed and we were unable to recover it. 00:36:08.243 [2024-07-14 10:44:53.092275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.243 [2024-07-14 10:44:53.092307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.243 qpair failed and we were unable to recover it. 
00:36:08.243 [2024-07-14 10:44:53.092428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.243 [2024-07-14 10:44:53.092458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.243 qpair failed and we were unable to recover it. 00:36:08.243 [2024-07-14 10:44:53.092659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.243 [2024-07-14 10:44:53.092689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.243 qpair failed and we were unable to recover it. 00:36:08.243 [2024-07-14 10:44:53.092822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.243 [2024-07-14 10:44:53.092852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.243 qpair failed and we were unable to recover it. 00:36:08.243 [2024-07-14 10:44:53.093034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.243 [2024-07-14 10:44:53.093065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.243 qpair failed and we were unable to recover it. 00:36:08.243 [2024-07-14 10:44:53.093196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.243 [2024-07-14 10:44:53.093233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.243 qpair failed and we were unable to recover it. 00:36:08.243 [2024-07-14 10:44:53.093490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.243 [2024-07-14 10:44:53.093522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.243 qpair failed and we were unable to recover it. 00:36:08.243 [2024-07-14 10:44:53.093766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.243 [2024-07-14 10:44:53.093797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.243 qpair failed and we were unable to recover it. 00:36:08.243 [2024-07-14 10:44:53.093983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.243 [2024-07-14 10:44:53.094014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.243 qpair failed and we were unable to recover it. 00:36:08.243 [2024-07-14 10:44:53.094208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.243 [2024-07-14 10:44:53.094257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.243 qpair failed and we were unable to recover it. 00:36:08.243 [2024-07-14 10:44:53.094542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.243 [2024-07-14 10:44:53.094575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.243 qpair failed and we were unable to recover it. 
00:36:08.243 [2024-07-14 10:44:53.094776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.244 [2024-07-14 10:44:53.094807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.244 qpair failed and we were unable to recover it. 00:36:08.244 [2024-07-14 10:44:53.095024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.244 [2024-07-14 10:44:53.095065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.244 qpair failed and we were unable to recover it. 00:36:08.244 [2024-07-14 10:44:53.095188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.244 [2024-07-14 10:44:53.095219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.244 qpair failed and we were unable to recover it. 00:36:08.244 [2024-07-14 10:44:53.095506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.244 [2024-07-14 10:44:53.095538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.244 qpair failed and we were unable to recover it. 00:36:08.244 [2024-07-14 10:44:53.095743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.244 [2024-07-14 10:44:53.095775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.244 qpair failed and we were unable to recover it. 00:36:08.244 [2024-07-14 10:44:53.095995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.244 [2024-07-14 10:44:53.096026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.244 qpair failed and we were unable to recover it. 00:36:08.244 [2024-07-14 10:44:53.096223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.244 [2024-07-14 10:44:53.096266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.244 qpair failed and we were unable to recover it. 00:36:08.244 [2024-07-14 10:44:53.096411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.244 [2024-07-14 10:44:53.096443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.244 qpair failed and we were unable to recover it. 00:36:08.244 [2024-07-14 10:44:53.096626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.244 [2024-07-14 10:44:53.096657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.244 qpair failed and we were unable to recover it. 00:36:08.244 [2024-07-14 10:44:53.096802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.244 [2024-07-14 10:44:53.096833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.244 qpair failed and we were unable to recover it. 
00:36:08.244 [2024-07-14 10:44:53.097132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.244 [2024-07-14 10:44:53.097162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.244 qpair failed and we were unable to recover it. 00:36:08.244 [2024-07-14 10:44:53.097343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.244 [2024-07-14 10:44:53.097374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.244 qpair failed and we were unable to recover it. 00:36:08.244 [2024-07-14 10:44:53.097571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.244 [2024-07-14 10:44:53.097601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.244 qpair failed and we were unable to recover it. 00:36:08.244 [2024-07-14 10:44:53.097791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.244 [2024-07-14 10:44:53.097822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.244 qpair failed and we were unable to recover it. 00:36:08.244 [2024-07-14 10:44:53.098020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.244 [2024-07-14 10:44:53.098056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.244 qpair failed and we were unable to recover it. 00:36:08.244 [2024-07-14 10:44:53.098277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.244 [2024-07-14 10:44:53.098309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.244 qpair failed and we were unable to recover it. 00:36:08.244 [2024-07-14 10:44:53.098489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.244 [2024-07-14 10:44:53.098518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.244 qpair failed and we were unable to recover it. 00:36:08.244 [2024-07-14 10:44:53.098636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.244 [2024-07-14 10:44:53.098666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.244 qpair failed and we were unable to recover it. 00:36:08.244 [2024-07-14 10:44:53.098787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.244 [2024-07-14 10:44:53.098817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.244 qpair failed and we were unable to recover it. 00:36:08.244 [2024-07-14 10:44:53.099088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.244 [2024-07-14 10:44:53.099117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.244 qpair failed and we were unable to recover it. 
00:36:08.244 [2024-07-14 10:44:53.099309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.244 [2024-07-14 10:44:53.099340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.244 qpair failed and we were unable to recover it. 00:36:08.244 [2024-07-14 10:44:53.099558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.244 [2024-07-14 10:44:53.099588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.244 qpair failed and we were unable to recover it. 00:36:08.244 [2024-07-14 10:44:53.099713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.244 [2024-07-14 10:44:53.099742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.244 qpair failed and we were unable to recover it. 00:36:08.244 [2024-07-14 10:44:53.099961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.244 [2024-07-14 10:44:53.099991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.244 qpair failed and we were unable to recover it. 00:36:08.244 [2024-07-14 10:44:53.100243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.244 [2024-07-14 10:44:53.100274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.244 qpair failed and we were unable to recover it. 00:36:08.244 [2024-07-14 10:44:53.100486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.244 [2024-07-14 10:44:53.100516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.244 qpair failed and we were unable to recover it. 00:36:08.244 [2024-07-14 10:44:53.100630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.244 [2024-07-14 10:44:53.100659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.244 qpair failed and we were unable to recover it. 00:36:08.244 [2024-07-14 10:44:53.100840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.244 [2024-07-14 10:44:53.100870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.244 qpair failed and we were unable to recover it. 00:36:08.244 [2024-07-14 10:44:53.101122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.244 [2024-07-14 10:44:53.101152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.244 qpair failed and we were unable to recover it. 00:36:08.244 [2024-07-14 10:44:53.101339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.244 [2024-07-14 10:44:53.101371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.244 qpair failed and we were unable to recover it. 
00:36:08.244 [2024-07-14 10:44:53.101643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.244 [2024-07-14 10:44:53.101674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.244 qpair failed and we were unable to recover it. 00:36:08.244 [2024-07-14 10:44:53.101810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.244 [2024-07-14 10:44:53.101841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.244 qpair failed and we were unable to recover it. 00:36:08.244 [2024-07-14 10:44:53.102040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.244 [2024-07-14 10:44:53.102070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.244 qpair failed and we were unable to recover it. 00:36:08.244 [2024-07-14 10:44:53.102339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.244 [2024-07-14 10:44:53.102371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.244 qpair failed and we were unable to recover it. 00:36:08.244 [2024-07-14 10:44:53.102555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.244 [2024-07-14 10:44:53.102585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.244 qpair failed and we were unable to recover it. 00:36:08.244 [2024-07-14 10:44:53.102764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.244 [2024-07-14 10:44:53.102795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.244 qpair failed and we were unable to recover it. 00:36:08.244 [2024-07-14 10:44:53.102951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.244 [2024-07-14 10:44:53.102981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.244 qpair failed and we were unable to recover it. 00:36:08.244 [2024-07-14 10:44:53.103248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.244 [2024-07-14 10:44:53.103279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.244 qpair failed and we were unable to recover it. 00:36:08.244 [2024-07-14 10:44:53.103470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.244 [2024-07-14 10:44:53.103501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.244 qpair failed and we were unable to recover it. 00:36:08.244 [2024-07-14 10:44:53.103620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.244 [2024-07-14 10:44:53.103649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.244 qpair failed and we were unable to recover it. 
00:36:08.244 [2024-07-14 10:44:53.103851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.244 [2024-07-14 10:44:53.103882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.244 qpair failed and we were unable to recover it. 00:36:08.244 [2024-07-14 10:44:53.104088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.245 [2024-07-14 10:44:53.104143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.245 qpair failed and we were unable to recover it. 00:36:08.245 [2024-07-14 10:44:53.104352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.245 [2024-07-14 10:44:53.104386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.245 qpair failed and we were unable to recover it. 00:36:08.245 [2024-07-14 10:44:53.104568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.245 [2024-07-14 10:44:53.104598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.245 qpair failed and we were unable to recover it. 00:36:08.245 [2024-07-14 10:44:53.104730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.245 [2024-07-14 10:44:53.104761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.245 qpair failed and we were unable to recover it. 00:36:08.245 [2024-07-14 10:44:53.105030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.245 [2024-07-14 10:44:53.105061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.245 qpair failed and we were unable to recover it. 00:36:08.245 [2024-07-14 10:44:53.105252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.245 [2024-07-14 10:44:53.105283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.245 qpair failed and we were unable to recover it. 00:36:08.245 [2024-07-14 10:44:53.105460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.245 [2024-07-14 10:44:53.105490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.245 qpair failed and we were unable to recover it. 00:36:08.245 [2024-07-14 10:44:53.105680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.245 [2024-07-14 10:44:53.105711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.245 qpair failed and we were unable to recover it. 00:36:08.245 [2024-07-14 10:44:53.105891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.245 [2024-07-14 10:44:53.105924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.245 qpair failed and we were unable to recover it. 
00:36:08.245 [2024-07-14 10:44:53.106169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.245 [2024-07-14 10:44:53.106202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.245 qpair failed and we were unable to recover it. 00:36:08.245 [2024-07-14 10:44:53.106339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.245 [2024-07-14 10:44:53.106373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.245 qpair failed and we were unable to recover it. 00:36:08.245 [2024-07-14 10:44:53.106592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.245 [2024-07-14 10:44:53.106625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.245 qpair failed and we were unable to recover it. 00:36:08.245 [2024-07-14 10:44:53.106758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.245 [2024-07-14 10:44:53.106790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.245 qpair failed and we were unable to recover it. 00:36:08.245 [2024-07-14 10:44:53.106973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.245 [2024-07-14 10:44:53.107004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.245 qpair failed and we were unable to recover it. 00:36:08.245 [2024-07-14 10:44:53.107123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.245 [2024-07-14 10:44:53.107155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.245 qpair failed and we were unable to recover it. 00:36:08.245 [2024-07-14 10:44:53.107360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.245 [2024-07-14 10:44:53.107392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.245 qpair failed and we were unable to recover it. 00:36:08.245 [2024-07-14 10:44:53.107578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.245 [2024-07-14 10:44:53.107611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.245 qpair failed and we were unable to recover it. 00:36:08.245 [2024-07-14 10:44:53.107882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.245 [2024-07-14 10:44:53.107914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.245 qpair failed and we were unable to recover it. 00:36:08.245 [2024-07-14 10:44:53.108033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.245 [2024-07-14 10:44:53.108064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.245 qpair failed and we were unable to recover it. 
00:36:08.245 [2024-07-14 10:44:53.108259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.245 [2024-07-14 10:44:53.108292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.245 qpair failed and we were unable to recover it. 00:36:08.245 [2024-07-14 10:44:53.108467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.245 [2024-07-14 10:44:53.108498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.245 qpair failed and we were unable to recover it. 00:36:08.245 [2024-07-14 10:44:53.108794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.245 [2024-07-14 10:44:53.108826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.245 qpair failed and we were unable to recover it. 00:36:08.245 [2024-07-14 10:44:53.109051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.245 [2024-07-14 10:44:53.109083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.245 qpair failed and we were unable to recover it. 00:36:08.245 [2024-07-14 10:44:53.109213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.245 [2024-07-14 10:44:53.109253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.245 qpair failed and we were unable to recover it. 00:36:08.245 [2024-07-14 10:44:53.109365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.245 [2024-07-14 10:44:53.109395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.245 qpair failed and we were unable to recover it. 00:36:08.245 [2024-07-14 10:44:53.109641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.245 [2024-07-14 10:44:53.109671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.245 qpair failed and we were unable to recover it. 00:36:08.245 [2024-07-14 10:44:53.109885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.245 [2024-07-14 10:44:53.109917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.245 qpair failed and we were unable to recover it. 00:36:08.245 [2024-07-14 10:44:53.110093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.245 [2024-07-14 10:44:53.110130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.245 qpair failed and we were unable to recover it. 00:36:08.245 [2024-07-14 10:44:53.110276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.245 [2024-07-14 10:44:53.110309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.245 qpair failed and we were unable to recover it. 
00:36:08.245 [2024-07-14 10:44:53.110488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.245 [2024-07-14 10:44:53.110519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.245 qpair failed and we were unable to recover it. 00:36:08.245 [2024-07-14 10:44:53.110655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.245 [2024-07-14 10:44:53.110686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.245 qpair failed and we were unable to recover it. 00:36:08.245 [2024-07-14 10:44:53.110937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.245 [2024-07-14 10:44:53.110968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.245 qpair failed and we were unable to recover it. 00:36:08.245 [2024-07-14 10:44:53.111156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.245 [2024-07-14 10:44:53.111187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.245 qpair failed and we were unable to recover it. 00:36:08.245 [2024-07-14 10:44:53.111443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.245 [2024-07-14 10:44:53.111476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.245 qpair failed and we were unable to recover it. 00:36:08.245 [2024-07-14 10:44:53.111617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.245 [2024-07-14 10:44:53.111648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.245 qpair failed and we were unable to recover it. 00:36:08.245 [2024-07-14 10:44:53.111844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.245 [2024-07-14 10:44:53.111874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.245 qpair failed and we were unable to recover it. 00:36:08.245 [2024-07-14 10:44:53.112117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.245 [2024-07-14 10:44:53.112150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.245 qpair failed and we were unable to recover it. 00:36:08.245 [2024-07-14 10:44:53.112286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.245 [2024-07-14 10:44:53.112318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.245 qpair failed and we were unable to recover it. 00:36:08.245 [2024-07-14 10:44:53.112457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.245 [2024-07-14 10:44:53.112488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.245 qpair failed and we were unable to recover it. 
00:36:08.245 [2024-07-14 10:44:53.112705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.245 [2024-07-14 10:44:53.112736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.245 qpair failed and we were unable to recover it. 00:36:08.245 [2024-07-14 10:44:53.112934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.245 [2024-07-14 10:44:53.112966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.245 qpair failed and we were unable to recover it. 00:36:08.245 [2024-07-14 10:44:53.113182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.246 [2024-07-14 10:44:53.113214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.246 qpair failed and we were unable to recover it. 00:36:08.246 [2024-07-14 10:44:53.113414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.246 [2024-07-14 10:44:53.113446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.246 qpair failed and we were unable to recover it. 00:36:08.246 [2024-07-14 10:44:53.113642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.246 [2024-07-14 10:44:53.113673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.246 qpair failed and we were unable to recover it. 00:36:08.246 [2024-07-14 10:44:53.113958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.246 [2024-07-14 10:44:53.113988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.246 qpair failed and we were unable to recover it. 00:36:08.246 [2024-07-14 10:44:53.114120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.246 [2024-07-14 10:44:53.114150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.246 qpair failed and we were unable to recover it. 00:36:08.246 [2024-07-14 10:44:53.114392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.246 [2024-07-14 10:44:53.114425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.246 qpair failed and we were unable to recover it. 00:36:08.246 [2024-07-14 10:44:53.114613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.246 [2024-07-14 10:44:53.114644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.246 qpair failed and we were unable to recover it. 00:36:08.246 [2024-07-14 10:44:53.114890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.246 [2024-07-14 10:44:53.114921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.246 qpair failed and we were unable to recover it. 
00:36:08.246 [2024-07-14 10:44:53.115106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.246 [2024-07-14 10:44:53.115137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.246 qpair failed and we were unable to recover it. 00:36:08.246 [2024-07-14 10:44:53.115405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.246 [2024-07-14 10:44:53.115437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.246 qpair failed and we were unable to recover it. 00:36:08.246 [2024-07-14 10:44:53.115614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.246 [2024-07-14 10:44:53.115645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.246 qpair failed and we were unable to recover it. 00:36:08.246 [2024-07-14 10:44:53.115910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.246 [2024-07-14 10:44:53.115940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.246 qpair failed and we were unable to recover it. 00:36:08.246 [2024-07-14 10:44:53.116115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.246 [2024-07-14 10:44:53.116145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.246 qpair failed and we were unable to recover it. 00:36:08.246 [2024-07-14 10:44:53.116258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.246 [2024-07-14 10:44:53.116296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.246 qpair failed and we were unable to recover it. 00:36:08.246 [2024-07-14 10:44:53.116566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.246 [2024-07-14 10:44:53.116596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.246 qpair failed and we were unable to recover it. 00:36:08.246 [2024-07-14 10:44:53.116863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.246 [2024-07-14 10:44:53.116894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.246 qpair failed and we were unable to recover it. 00:36:08.246 [2024-07-14 10:44:53.117183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.246 [2024-07-14 10:44:53.117212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.246 qpair failed and we were unable to recover it. 00:36:08.246 [2024-07-14 10:44:53.117494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.246 [2024-07-14 10:44:53.117525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.246 qpair failed and we were unable to recover it. 
00:36:08.246 [2024-07-14 10:44:53.117651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.246 [2024-07-14 10:44:53.117680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.246 qpair failed and we were unable to recover it. 00:36:08.246 [2024-07-14 10:44:53.117869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.246 [2024-07-14 10:44:53.117899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.246 qpair failed and we were unable to recover it. 00:36:08.246 [2024-07-14 10:44:53.118045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.246 [2024-07-14 10:44:53.118075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.246 qpair failed and we were unable to recover it. 00:36:08.246 [2024-07-14 10:44:53.118252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.246 [2024-07-14 10:44:53.118283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.246 qpair failed and we were unable to recover it. 00:36:08.246 [2024-07-14 10:44:53.118472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.246 [2024-07-14 10:44:53.118502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.246 qpair failed and we were unable to recover it. 00:36:08.246 [2024-07-14 10:44:53.118763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.246 [2024-07-14 10:44:53.118793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.246 qpair failed and we were unable to recover it. 00:36:08.246 [2024-07-14 10:44:53.119049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.246 [2024-07-14 10:44:53.119079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.246 qpair failed and we were unable to recover it. 00:36:08.246 [2024-07-14 10:44:53.119266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.246 [2024-07-14 10:44:53.119297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.246 qpair failed and we were unable to recover it. 00:36:08.246 [2024-07-14 10:44:53.119489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.246 [2024-07-14 10:44:53.119519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.246 qpair failed and we were unable to recover it. 00:36:08.246 [2024-07-14 10:44:53.119739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.246 [2024-07-14 10:44:53.119770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.246 qpair failed and we were unable to recover it. 
00:36:08.246 [2024-07-14 10:44:53.119943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.246 [2024-07-14 10:44:53.119973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.246 qpair failed and we were unable to recover it. 00:36:08.246 [2024-07-14 10:44:53.120152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.246 [2024-07-14 10:44:53.120182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.246 qpair failed and we were unable to recover it. 00:36:08.246 [2024-07-14 10:44:53.120304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.246 [2024-07-14 10:44:53.120335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.246 qpair failed and we were unable to recover it. 00:36:08.246 [2024-07-14 10:44:53.120604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.246 [2024-07-14 10:44:53.120635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.246 qpair failed and we were unable to recover it. 00:36:08.246 [2024-07-14 10:44:53.120815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.246 [2024-07-14 10:44:53.120846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.246 qpair failed and we were unable to recover it. 00:36:08.246 [2024-07-14 10:44:53.121111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.246 [2024-07-14 10:44:53.121141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.246 qpair failed and we were unable to recover it. 00:36:08.246 [2024-07-14 10:44:53.121328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.246 [2024-07-14 10:44:53.121360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.246 qpair failed and we were unable to recover it. 00:36:08.246 [2024-07-14 10:44:53.121504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.246 [2024-07-14 10:44:53.121534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.246 qpair failed and we were unable to recover it. 00:36:08.246 [2024-07-14 10:44:53.121736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.246 [2024-07-14 10:44:53.121766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.246 qpair failed and we were unable to recover it. 00:36:08.246 [2024-07-14 10:44:53.122057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.246 [2024-07-14 10:44:53.122087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.246 qpair failed and we were unable to recover it. 
00:36:08.246 [2024-07-14 10:44:53.122239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.246 [2024-07-14 10:44:53.122270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.246 qpair failed and we were unable to recover it. 00:36:08.246 [2024-07-14 10:44:53.122558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.246 [2024-07-14 10:44:53.122588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.246 qpair failed and we were unable to recover it. 00:36:08.246 [2024-07-14 10:44:53.122800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.246 [2024-07-14 10:44:53.122837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.246 qpair failed and we were unable to recover it. 00:36:08.246 [2024-07-14 10:44:53.122978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.246 [2024-07-14 10:44:53.123008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.246 qpair failed and we were unable to recover it. 00:36:08.247 [2024-07-14 10:44:53.123112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.247 [2024-07-14 10:44:53.123142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.247 qpair failed and we were unable to recover it. 00:36:08.247 [2024-07-14 10:44:53.123269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.247 [2024-07-14 10:44:53.123300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.247 qpair failed and we were unable to recover it. 00:36:08.247 [2024-07-14 10:44:53.123564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.247 [2024-07-14 10:44:53.123595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.247 qpair failed and we were unable to recover it. 00:36:08.247 [2024-07-14 10:44:53.123789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.247 [2024-07-14 10:44:53.123819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.247 qpair failed and we were unable to recover it. 00:36:08.247 [2024-07-14 10:44:53.124014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.247 [2024-07-14 10:44:53.124045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.247 qpair failed and we were unable to recover it. 00:36:08.247 [2024-07-14 10:44:53.124245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.247 [2024-07-14 10:44:53.124278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.247 qpair failed and we were unable to recover it. 
00:36:08.247 [2024-07-14 10:44:53.124530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.247 [2024-07-14 10:44:53.124562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.247 qpair failed and we were unable to recover it. 00:36:08.247 [2024-07-14 10:44:53.124816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.247 [2024-07-14 10:44:53.124847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.247 qpair failed and we were unable to recover it. 00:36:08.247 [2024-07-14 10:44:53.125122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.247 [2024-07-14 10:44:53.125151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.247 qpair failed and we were unable to recover it. 00:36:08.247 [2024-07-14 10:44:53.125285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.247 [2024-07-14 10:44:53.125316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.247 qpair failed and we were unable to recover it. 00:36:08.247 [2024-07-14 10:44:53.125577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.247 [2024-07-14 10:44:53.125608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.247 qpair failed and we were unable to recover it. 00:36:08.247 [2024-07-14 10:44:53.125806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.247 [2024-07-14 10:44:53.125837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.247 qpair failed and we were unable to recover it. 00:36:08.247 [2024-07-14 10:44:53.126065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.247 [2024-07-14 10:44:53.126095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.247 qpair failed and we were unable to recover it. 00:36:08.247 [2024-07-14 10:44:53.126287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.247 [2024-07-14 10:44:53.126319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.247 qpair failed and we were unable to recover it. 00:36:08.247 [2024-07-14 10:44:53.126533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.247 [2024-07-14 10:44:53.126564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.247 qpair failed and we were unable to recover it. 00:36:08.247 [2024-07-14 10:44:53.126833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.247 [2024-07-14 10:44:53.126864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.247 qpair failed and we were unable to recover it. 
00:36:08.247 [2024-07-14 10:44:53.127109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.247 [2024-07-14 10:44:53.127140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.247 qpair failed and we were unable to recover it. 00:36:08.247 [2024-07-14 10:44:53.127252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.247 [2024-07-14 10:44:53.127283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.247 qpair failed and we were unable to recover it. 00:36:08.247 [2024-07-14 10:44:53.127482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.247 [2024-07-14 10:44:53.127512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.247 qpair failed and we were unable to recover it. 00:36:08.247 [2024-07-14 10:44:53.127707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.247 [2024-07-14 10:44:53.127737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.247 qpair failed and we were unable to recover it. 00:36:08.247 [2024-07-14 10:44:53.127772] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:08.247 [2024-07-14 10:44:53.127809] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:08.247 [2024-07-14 10:44:53.127816] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:08.247 [2024-07-14 10:44:53.127822] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:08.247 [2024-07-14 10:44:53.127827] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:08.247 [2024-07-14 10:44:53.127919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.247 [2024-07-14 10:44:53.127949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.247 qpair failed and we were unable to recover it. 00:36:08.247 [2024-07-14 10:44:53.127944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:36:08.247 [2024-07-14 10:44:53.128083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.247 [2024-07-14 10:44:53.128113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.247 qpair failed and we were unable to recover it. 00:36:08.247 [2024-07-14 10:44:53.128051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:36:08.247 [2024-07-14 10:44:53.128157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:36:08.247 [2024-07-14 10:44:53.128159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:36:08.247 [2024-07-14 10:44:53.128247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.247 [2024-07-14 10:44:53.128278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.247 qpair failed and we were unable to recover it. 
00:36:08.247 [2024-07-14 10:44:53.128473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.247 [2024-07-14 10:44:53.128504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.247 qpair failed and we were unable to recover it. 00:36:08.247 [2024-07-14 10:44:53.128775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.247 [2024-07-14 10:44:53.128805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.247 qpair failed and we were unable to recover it. 00:36:08.247 [2024-07-14 10:44:53.128994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.247 [2024-07-14 10:44:53.129024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.247 qpair failed and we were unable to recover it. 00:36:08.247 [2024-07-14 10:44:53.129153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.247 [2024-07-14 10:44:53.129183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.247 qpair failed and we were unable to recover it. 00:36:08.247 [2024-07-14 10:44:53.129377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.247 [2024-07-14 10:44:53.129409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.247 qpair failed and we were unable to recover it. 00:36:08.247 [2024-07-14 10:44:53.129600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.247 [2024-07-14 10:44:53.129630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.247 qpair failed and we were unable to recover it. 00:36:08.247 [2024-07-14 10:44:53.129809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.247 [2024-07-14 10:44:53.129838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.247 qpair failed and we were unable to recover it. 00:36:08.247 [2024-07-14 10:44:53.129968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.247 [2024-07-14 10:44:53.129997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.247 qpair failed and we were unable to recover it. 00:36:08.247 [2024-07-14 10:44:53.130245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.247 [2024-07-14 10:44:53.130277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.247 qpair failed and we were unable to recover it. 00:36:08.247 [2024-07-14 10:44:53.130455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.247 [2024-07-14 10:44:53.130485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.247 qpair failed and we were unable to recover it. 
00:36:08.247 [2024-07-14 10:44:53.130663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.247 [2024-07-14 10:44:53.130692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.247 qpair failed and we were unable to recover it. 00:36:08.247 [2024-07-14 10:44:53.130812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.247 [2024-07-14 10:44:53.130842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.248 qpair failed and we were unable to recover it. 00:36:08.248 [2024-07-14 10:44:53.131022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.248 [2024-07-14 10:44:53.131052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.248 qpair failed and we were unable to recover it. 00:36:08.248 [2024-07-14 10:44:53.131178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.248 [2024-07-14 10:44:53.131209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.248 qpair failed and we were unable to recover it. 00:36:08.248 [2024-07-14 10:44:53.131427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.248 [2024-07-14 10:44:53.131458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.248 qpair failed and we were unable to recover it. 00:36:08.248 [2024-07-14 10:44:53.131640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.248 [2024-07-14 10:44:53.131670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.248 qpair failed and we were unable to recover it. 00:36:08.248 [2024-07-14 10:44:53.131781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.248 [2024-07-14 10:44:53.131811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.248 qpair failed and we were unable to recover it. 00:36:08.248 [2024-07-14 10:44:53.131998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.248 [2024-07-14 10:44:53.132028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.248 qpair failed and we were unable to recover it. 00:36:08.248 [2024-07-14 10:44:53.132212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.248 [2024-07-14 10:44:53.132261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.248 qpair failed and we were unable to recover it. 00:36:08.248 [2024-07-14 10:44:53.132460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.248 [2024-07-14 10:44:53.132490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.248 qpair failed and we were unable to recover it. 
00:36:08.248 [2024-07-14 10:44:53.132693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.248 [2024-07-14 10:44:53.132724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.248 qpair failed and we were unable to recover it. 00:36:08.248 [2024-07-14 10:44:53.132969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.248 [2024-07-14 10:44:53.133000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.248 qpair failed and we were unable to recover it. 00:36:08.248 [2024-07-14 10:44:53.133135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.248 [2024-07-14 10:44:53.133165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.248 qpair failed and we were unable to recover it. 00:36:08.248 [2024-07-14 10:44:53.133436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.248 [2024-07-14 10:44:53.133468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.248 qpair failed and we were unable to recover it. 00:36:08.248 [2024-07-14 10:44:53.133650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.248 [2024-07-14 10:44:53.133680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.248 qpair failed and we were unable to recover it. 00:36:08.248 [2024-07-14 10:44:53.133923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.248 [2024-07-14 10:44:53.133953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.248 qpair failed and we were unable to recover it. 00:36:08.248 [2024-07-14 10:44:53.134141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.248 [2024-07-14 10:44:53.134177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.248 qpair failed and we were unable to recover it. 00:36:08.248 [2024-07-14 10:44:53.134459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.248 [2024-07-14 10:44:53.134490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.248 qpair failed and we were unable to recover it. 00:36:08.248 [2024-07-14 10:44:53.134683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.248 [2024-07-14 10:44:53.134713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.248 qpair failed and we were unable to recover it. 00:36:08.248 [2024-07-14 10:44:53.134888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.248 [2024-07-14 10:44:53.134919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.248 qpair failed and we were unable to recover it. 
00:36:08.248 [2024-07-14 10:44:53.135110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.248 [2024-07-14 10:44:53.135140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.248 qpair failed and we were unable to recover it. 00:36:08.248 [2024-07-14 10:44:53.135313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.248 [2024-07-14 10:44:53.135346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.248 qpair failed and we were unable to recover it. 00:36:08.248 [2024-07-14 10:44:53.135545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.248 [2024-07-14 10:44:53.135576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.248 qpair failed and we were unable to recover it. 00:36:08.248 [2024-07-14 10:44:53.135699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.248 [2024-07-14 10:44:53.135729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.248 qpair failed and we were unable to recover it. 00:36:08.248 [2024-07-14 10:44:53.135905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.248 [2024-07-14 10:44:53.135936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.248 qpair failed and we were unable to recover it. 00:36:08.248 [2024-07-14 10:44:53.136129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.248 [2024-07-14 10:44:53.136160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.248 qpair failed and we were unable to recover it. 00:36:08.248 [2024-07-14 10:44:53.136434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.248 [2024-07-14 10:44:53.136466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.248 qpair failed and we were unable to recover it. 00:36:08.248 [2024-07-14 10:44:53.136745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.248 [2024-07-14 10:44:53.136776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.248 qpair failed and we were unable to recover it. 00:36:08.248 [2024-07-14 10:44:53.137026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.248 [2024-07-14 10:44:53.137056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.248 qpair failed and we were unable to recover it. 00:36:08.248 [2024-07-14 10:44:53.137303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.248 [2024-07-14 10:44:53.137336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.248 qpair failed and we were unable to recover it. 
00:36:08.248 [2024-07-14 10:44:53.137473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.248 [2024-07-14 10:44:53.137504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.248 qpair failed and we were unable to recover it. 00:36:08.248 [2024-07-14 10:44:53.137725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.248 [2024-07-14 10:44:53.137756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.248 qpair failed and we were unable to recover it. 00:36:08.248 [2024-07-14 10:44:53.137894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.248 [2024-07-14 10:44:53.137924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.248 qpair failed and we were unable to recover it. 00:36:08.248 [2024-07-14 10:44:53.138098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.248 [2024-07-14 10:44:53.138128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.248 qpair failed and we were unable to recover it. 00:36:08.248 [2024-07-14 10:44:53.138342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.248 [2024-07-14 10:44:53.138375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.248 qpair failed and we were unable to recover it. 00:36:08.248 [2024-07-14 10:44:53.138569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.248 [2024-07-14 10:44:53.138601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.248 qpair failed and we were unable to recover it. 00:36:08.248 [2024-07-14 10:44:53.138718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.248 [2024-07-14 10:44:53.138750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.248 qpair failed and we were unable to recover it. 00:36:08.248 [2024-07-14 10:44:53.139003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.248 [2024-07-14 10:44:53.139034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.248 qpair failed and we were unable to recover it. 00:36:08.248 [2024-07-14 10:44:53.139266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.248 [2024-07-14 10:44:53.139299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.248 qpair failed and we were unable to recover it. 00:36:08.248 [2024-07-14 10:44:53.139515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.248 [2024-07-14 10:44:53.139547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.248 qpair failed and we were unable to recover it. 
00:36:08.248 [2024-07-14 10:44:53.139688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.248 [2024-07-14 10:44:53.139719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.248 qpair failed and we were unable to recover it. 00:36:08.248 [2024-07-14 10:44:53.139977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.249 [2024-07-14 10:44:53.140008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.249 qpair failed and we were unable to recover it. 00:36:08.249 [2024-07-14 10:44:53.140135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.249 [2024-07-14 10:44:53.140166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.249 qpair failed and we were unable to recover it. 00:36:08.249 [2024-07-14 10:44:53.140357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.249 [2024-07-14 10:44:53.140389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.249 qpair failed and we were unable to recover it. 00:36:08.249 [2024-07-14 10:44:53.140588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.249 [2024-07-14 10:44:53.140620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.249 qpair failed and we were unable to recover it. 00:36:08.249 [2024-07-14 10:44:53.140887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.249 [2024-07-14 10:44:53.140918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.249 qpair failed and we were unable to recover it. 00:36:08.249 [2024-07-14 10:44:53.141095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.249 [2024-07-14 10:44:53.141126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.249 qpair failed and we were unable to recover it. 00:36:08.249 [2024-07-14 10:44:53.141339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.249 [2024-07-14 10:44:53.141371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.249 qpair failed and we were unable to recover it. 00:36:08.249 [2024-07-14 10:44:53.141501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.249 [2024-07-14 10:44:53.141532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.249 qpair failed and we were unable to recover it. 00:36:08.249 [2024-07-14 10:44:53.141662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.249 [2024-07-14 10:44:53.141695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.249 qpair failed and we were unable to recover it. 
00:36:08.249 [2024-07-14 10:44:53.141856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.249 [2024-07-14 10:44:53.141886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.249 qpair failed and we were unable to recover it. 00:36:08.249 [2024-07-14 10:44:53.142072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.249 [2024-07-14 10:44:53.142104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.249 qpair failed and we were unable to recover it. 00:36:08.249 [2024-07-14 10:44:53.142222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.249 [2024-07-14 10:44:53.142260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.249 qpair failed and we were unable to recover it. 00:36:08.249 [2024-07-14 10:44:53.142441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.249 [2024-07-14 10:44:53.142472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.249 qpair failed and we were unable to recover it. 00:36:08.249 [2024-07-14 10:44:53.142659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.249 [2024-07-14 10:44:53.142692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.249 qpair failed and we were unable to recover it. 00:36:08.249 [2024-07-14 10:44:53.142912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.249 [2024-07-14 10:44:53.142942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.249 qpair failed and we were unable to recover it. 00:36:08.249 [2024-07-14 10:44:53.143136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.249 [2024-07-14 10:44:53.143168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.249 qpair failed and we were unable to recover it. 00:36:08.249 [2024-07-14 10:44:53.143390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.249 [2024-07-14 10:44:53.143445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.249 qpair failed and we were unable to recover it. 00:36:08.249 [2024-07-14 10:44:53.143664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.249 [2024-07-14 10:44:53.143693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.249 qpair failed and we were unable to recover it. 00:36:08.249 [2024-07-14 10:44:53.143874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.249 [2024-07-14 10:44:53.143905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.249 qpair failed and we were unable to recover it. 
00:36:08.249 [2024-07-14 10:44:53.144030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.249 [2024-07-14 10:44:53.144060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.249 qpair failed and we were unable to recover it. 00:36:08.249 [2024-07-14 10:44:53.144196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.249 [2024-07-14 10:44:53.144233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.249 qpair failed and we were unable to recover it. 00:36:08.249 [2024-07-14 10:44:53.144534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.249 [2024-07-14 10:44:53.144565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.249 qpair failed and we were unable to recover it. 00:36:08.249 [2024-07-14 10:44:53.144684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.249 [2024-07-14 10:44:53.144715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.249 qpair failed and we were unable to recover it. 00:36:08.249 [2024-07-14 10:44:53.144889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.249 [2024-07-14 10:44:53.144919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.249 qpair failed and we were unable to recover it. 00:36:08.249 [2024-07-14 10:44:53.145110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.249 [2024-07-14 10:44:53.145140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.249 qpair failed and we were unable to recover it. 00:36:08.249 [2024-07-14 10:44:53.145262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.249 [2024-07-14 10:44:53.145294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.249 qpair failed and we were unable to recover it. 00:36:08.249 [2024-07-14 10:44:53.145542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.249 [2024-07-14 10:44:53.145572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.249 qpair failed and we were unable to recover it. 00:36:08.249 [2024-07-14 10:44:53.145751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.249 [2024-07-14 10:44:53.145781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.249 qpair failed and we were unable to recover it. 00:36:08.249 [2024-07-14 10:44:53.145960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.249 [2024-07-14 10:44:53.145989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.249 qpair failed and we were unable to recover it. 
00:36:08.249 [2024-07-14 10:44:53.146119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.249 [2024-07-14 10:44:53.146156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.249 qpair failed and we were unable to recover it. 00:36:08.249 [2024-07-14 10:44:53.146341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.249 [2024-07-14 10:44:53.146373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.249 qpair failed and we were unable to recover it. 00:36:08.249 [2024-07-14 10:44:53.146570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.249 [2024-07-14 10:44:53.146599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.249 qpair failed and we were unable to recover it. 00:36:08.249 [2024-07-14 10:44:53.146843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.249 [2024-07-14 10:44:53.146874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.249 qpair failed and we were unable to recover it. 00:36:08.249 [2024-07-14 10:44:53.147004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.249 [2024-07-14 10:44:53.147034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.249 qpair failed and we were unable to recover it. 00:36:08.249 [2024-07-14 10:44:53.147210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.249 [2024-07-14 10:44:53.147248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.249 qpair failed and we were unable to recover it. 00:36:08.249 [2024-07-14 10:44:53.147513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.249 [2024-07-14 10:44:53.147544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.249 qpair failed and we were unable to recover it. 00:36:08.249 [2024-07-14 10:44:53.147740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.249 [2024-07-14 10:44:53.147770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.249 qpair failed and we were unable to recover it. 00:36:08.249 [2024-07-14 10:44:53.148042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.249 [2024-07-14 10:44:53.148074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.249 qpair failed and we were unable to recover it. 00:36:08.249 [2024-07-14 10:44:53.148316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.249 [2024-07-14 10:44:53.148349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.249 qpair failed and we were unable to recover it. 
00:36:08.249 [2024-07-14 10:44:53.148486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.249 [2024-07-14 10:44:53.148517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.249 qpair failed and we were unable to recover it. 00:36:08.249 [2024-07-14 10:44:53.148711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.249 [2024-07-14 10:44:53.148745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.249 qpair failed and we were unable to recover it. 00:36:08.249 [2024-07-14 10:44:53.148950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.250 [2024-07-14 10:44:53.148982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.250 qpair failed and we were unable to recover it. 00:36:08.250 [2024-07-14 10:44:53.149197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.250 [2024-07-14 10:44:53.149236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.250 qpair failed and we were unable to recover it. 00:36:08.250 [2024-07-14 10:44:53.149446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.250 [2024-07-14 10:44:53.149478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.250 qpair failed and we were unable to recover it. 00:36:08.250 [2024-07-14 10:44:53.149658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.250 [2024-07-14 10:44:53.149690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.250 qpair failed and we were unable to recover it. 00:36:08.250 [2024-07-14 10:44:53.149888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.250 [2024-07-14 10:44:53.149921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.250 qpair failed and we were unable to recover it. 00:36:08.250 [2024-07-14 10:44:53.150131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.250 [2024-07-14 10:44:53.150162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.250 qpair failed and we were unable to recover it. 00:36:08.250 [2024-07-14 10:44:53.150275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.250 [2024-07-14 10:44:53.150306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.250 qpair failed and we were unable to recover it. 00:36:08.250 [2024-07-14 10:44:53.150439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.250 [2024-07-14 10:44:53.150470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.250 qpair failed and we were unable to recover it. 
00:36:08.250 [2024-07-14 10:44:53.150645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.250 [2024-07-14 10:44:53.150676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.250 qpair failed and we were unable to recover it. 00:36:08.250 [2024-07-14 10:44:53.150876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.250 [2024-07-14 10:44:53.150906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.250 qpair failed and we were unable to recover it. 00:36:08.250 [2024-07-14 10:44:53.151101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.250 [2024-07-14 10:44:53.151132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.250 qpair failed and we were unable to recover it. 00:36:08.250 [2024-07-14 10:44:53.151308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.250 [2024-07-14 10:44:53.151342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.250 qpair failed and we were unable to recover it. 00:36:08.250 [2024-07-14 10:44:53.151552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.250 [2024-07-14 10:44:53.151582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.250 qpair failed and we were unable to recover it. 00:36:08.250 [2024-07-14 10:44:53.151770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.250 [2024-07-14 10:44:53.151800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.250 qpair failed and we were unable to recover it. 00:36:08.250 [2024-07-14 10:44:53.152052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.250 [2024-07-14 10:44:53.152082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.250 qpair failed and we were unable to recover it. 00:36:08.250 [2024-07-14 10:44:53.152337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.250 [2024-07-14 10:44:53.152391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.250 qpair failed and we were unable to recover it. 00:36:08.250 [2024-07-14 10:44:53.152572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.250 [2024-07-14 10:44:53.152603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.250 qpair failed and we were unable to recover it. 00:36:08.250 [2024-07-14 10:44:53.152793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.250 [2024-07-14 10:44:53.152824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.250 qpair failed and we were unable to recover it. 
00:36:08.250 [2024-07-14 10:44:53.152970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.250 [2024-07-14 10:44:53.152999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.250 qpair failed and we were unable to recover it. 00:36:08.250 [2024-07-14 10:44:53.153242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.250 [2024-07-14 10:44:53.153274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.250 qpair failed and we were unable to recover it. 00:36:08.250 [2024-07-14 10:44:53.153515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.250 [2024-07-14 10:44:53.153544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.250 qpair failed and we were unable to recover it. 00:36:08.250 [2024-07-14 10:44:53.153713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.250 [2024-07-14 10:44:53.153743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.250 qpair failed and we were unable to recover it. 00:36:08.250 [2024-07-14 10:44:53.153890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.250 [2024-07-14 10:44:53.153921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.250 qpair failed and we were unable to recover it. 00:36:08.250 [2024-07-14 10:44:53.154049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.250 [2024-07-14 10:44:53.154078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.250 qpair failed and we were unable to recover it. 00:36:08.250 [2024-07-14 10:44:53.154201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.250 [2024-07-14 10:44:53.154243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.250 qpair failed and we were unable to recover it. 00:36:08.250 [2024-07-14 10:44:53.154439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.250 [2024-07-14 10:44:53.154469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.250 qpair failed and we were unable to recover it. 00:36:08.250 [2024-07-14 10:44:53.154595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.250 [2024-07-14 10:44:53.154624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.250 qpair failed and we were unable to recover it. 00:36:08.250 [2024-07-14 10:44:53.154863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.250 [2024-07-14 10:44:53.154893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.250 qpair failed and we were unable to recover it. 
00:36:08.250 [2024-07-14 10:44:53.155096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.250 [2024-07-14 10:44:53.155140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.250 qpair failed and we were unable to recover it. 00:36:08.250 [2024-07-14 10:44:53.155435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.250 [2024-07-14 10:44:53.155466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.250 qpair failed and we were unable to recover it. 00:36:08.250 [2024-07-14 10:44:53.155749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.250 [2024-07-14 10:44:53.155779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.250 qpair failed and we were unable to recover it. 00:36:08.250 [2024-07-14 10:44:53.156028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.250 [2024-07-14 10:44:53.156057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.250 qpair failed and we were unable to recover it. 00:36:08.250 [2024-07-14 10:44:53.156367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.250 [2024-07-14 10:44:53.156399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.250 qpair failed and we were unable to recover it. 00:36:08.250 [2024-07-14 10:44:53.156642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.250 [2024-07-14 10:44:53.156673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.250 qpair failed and we were unable to recover it. 00:36:08.250 [2024-07-14 10:44:53.156807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.250 [2024-07-14 10:44:53.156837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.250 qpair failed and we were unable to recover it. 00:36:08.250 [2024-07-14 10:44:53.156973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.250 [2024-07-14 10:44:53.157004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.250 qpair failed and we were unable to recover it. 00:36:08.250 [2024-07-14 10:44:53.157115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.250 [2024-07-14 10:44:53.157145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.250 qpair failed and we were unable to recover it. 00:36:08.250 [2024-07-14 10:44:53.157334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.250 [2024-07-14 10:44:53.157367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.250 qpair failed and we were unable to recover it. 
00:36:08.250 [2024-07-14 10:44:53.157566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.250 [2024-07-14 10:44:53.157597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.250 qpair failed and we were unable to recover it. 00:36:08.250 [2024-07-14 10:44:53.157869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.250 [2024-07-14 10:44:53.157902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.250 qpair failed and we were unable to recover it. 00:36:08.250 [2024-07-14 10:44:53.158051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.250 [2024-07-14 10:44:53.158084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.250 qpair failed and we were unable to recover it. 00:36:08.250 [2024-07-14 10:44:53.158331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.251 [2024-07-14 10:44:53.158365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.251 qpair failed and we were unable to recover it. 00:36:08.251 [2024-07-14 10:44:53.158511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.251 [2024-07-14 10:44:53.158544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.251 qpair failed and we were unable to recover it. 00:36:08.251 [2024-07-14 10:44:53.158810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.251 [2024-07-14 10:44:53.158843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.251 qpair failed and we were unable to recover it. 00:36:08.251 [2024-07-14 10:44:53.159115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.251 [2024-07-14 10:44:53.159149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.251 qpair failed and we were unable to recover it. 00:36:08.251 [2024-07-14 10:44:53.159284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.251 [2024-07-14 10:44:53.159316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.251 qpair failed and we were unable to recover it. 00:36:08.251 [2024-07-14 10:44:53.159521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.251 [2024-07-14 10:44:53.159552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.251 qpair failed and we were unable to recover it. 00:36:08.251 [2024-07-14 10:44:53.159816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.251 [2024-07-14 10:44:53.159847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.251 qpair failed and we were unable to recover it. 
00:36:08.251 [2024-07-14 10:44:53.160107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.251 [2024-07-14 10:44:53.160138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.251 qpair failed and we were unable to recover it. 00:36:08.251 [2024-07-14 10:44:53.160361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.251 [2024-07-14 10:44:53.160393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.251 qpair failed and we were unable to recover it. 00:36:08.251 [2024-07-14 10:44:53.160604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.251 [2024-07-14 10:44:53.160635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.251 qpair failed and we were unable to recover it. 00:36:08.251 [2024-07-14 10:44:53.160756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.251 [2024-07-14 10:44:53.160785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.251 qpair failed and we were unable to recover it. 00:36:08.251 [2024-07-14 10:44:53.160980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.251 [2024-07-14 10:44:53.161011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.251 qpair failed and we were unable to recover it. 00:36:08.251 [2024-07-14 10:44:53.161151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.251 [2024-07-14 10:44:53.161181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.251 qpair failed and we were unable to recover it. 00:36:08.251 [2024-07-14 10:44:53.161314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.251 [2024-07-14 10:44:53.161344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.251 qpair failed and we were unable to recover it. 00:36:08.251 [2024-07-14 10:44:53.161536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.251 [2024-07-14 10:44:53.161574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.251 qpair failed and we were unable to recover it. 00:36:08.251 [2024-07-14 10:44:53.161845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.251 [2024-07-14 10:44:53.161875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.251 qpair failed and we were unable to recover it. 00:36:08.251 [2024-07-14 10:44:53.162060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.251 [2024-07-14 10:44:53.162090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.251 qpair failed and we were unable to recover it. 
00:36:08.251 [2024-07-14 10:44:53.162284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.251 [2024-07-14 10:44:53.162316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.251 qpair failed and we were unable to recover it. 00:36:08.251 [2024-07-14 10:44:53.162505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.251 [2024-07-14 10:44:53.162535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.251 qpair failed and we were unable to recover it. 00:36:08.251 [2024-07-14 10:44:53.162717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.251 [2024-07-14 10:44:53.162747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.251 qpair failed and we were unable to recover it. 00:36:08.251 [2024-07-14 10:44:53.163025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.251 [2024-07-14 10:44:53.163056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.251 qpair failed and we were unable to recover it. 00:36:08.251 [2024-07-14 10:44:53.163183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.251 [2024-07-14 10:44:53.163213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.251 qpair failed and we were unable to recover it. 00:36:08.251 [2024-07-14 10:44:53.163437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.251 [2024-07-14 10:44:53.163468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.251 qpair failed and we were unable to recover it. 00:36:08.251 [2024-07-14 10:44:53.163743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.251 [2024-07-14 10:44:53.163772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.251 qpair failed and we were unable to recover it. 00:36:08.251 [2024-07-14 10:44:53.163888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.251 [2024-07-14 10:44:53.163918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.251 qpair failed and we were unable to recover it. 00:36:08.251 [2024-07-14 10:44:53.164122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.251 [2024-07-14 10:44:53.164151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.251 qpair failed and we were unable to recover it. 00:36:08.251 [2024-07-14 10:44:53.164370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.251 [2024-07-14 10:44:53.164402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.251 qpair failed and we were unable to recover it. 
00:36:08.251 [2024-07-14 10:44:53.164597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.251 [2024-07-14 10:44:53.164627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.251 qpair failed and we were unable to recover it. 00:36:08.251 [2024-07-14 10:44:53.164770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.251 [2024-07-14 10:44:53.164800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.251 qpair failed and we were unable to recover it. 00:36:08.251 [2024-07-14 10:44:53.165047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.251 [2024-07-14 10:44:53.165076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.251 qpair failed and we were unable to recover it. 00:36:08.251 [2024-07-14 10:44:53.165260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.251 [2024-07-14 10:44:53.165290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.251 qpair failed and we were unable to recover it. 00:36:08.251 [2024-07-14 10:44:53.165531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.251 [2024-07-14 10:44:53.165561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.251 qpair failed and we were unable to recover it. 00:36:08.251 [2024-07-14 10:44:53.165759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.251 [2024-07-14 10:44:53.165789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.251 qpair failed and we were unable to recover it. 00:36:08.251 [2024-07-14 10:44:53.165921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.251 [2024-07-14 10:44:53.165951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.251 qpair failed and we were unable to recover it. 00:36:08.251 [2024-07-14 10:44:53.166076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.251 [2024-07-14 10:44:53.166106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.251 qpair failed and we were unable to recover it. 00:36:08.251 [2024-07-14 10:44:53.166376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.251 [2024-07-14 10:44:53.166407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.251 qpair failed and we were unable to recover it. 00:36:08.251 [2024-07-14 10:44:53.166646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.251 [2024-07-14 10:44:53.166675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.251 qpair failed and we were unable to recover it. 
00:36:08.251 [2024-07-14 10:44:53.166805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.251 [2024-07-14 10:44:53.166835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.251 qpair failed and we were unable to recover it. 00:36:08.251 [2024-07-14 10:44:53.167020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.251 [2024-07-14 10:44:53.167049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.251 qpair failed and we were unable to recover it. 00:36:08.251 [2024-07-14 10:44:53.167242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.251 [2024-07-14 10:44:53.167272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.251 qpair failed and we were unable to recover it. 00:36:08.251 [2024-07-14 10:44:53.167561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.251 [2024-07-14 10:44:53.167591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.251 qpair failed and we were unable to recover it. 00:36:08.251 [2024-07-14 10:44:53.167803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.252 [2024-07-14 10:44:53.167834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.252 qpair failed and we were unable to recover it. 00:36:08.252 [2024-07-14 10:44:53.168053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.252 [2024-07-14 10:44:53.168083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.252 qpair failed and we were unable to recover it. 00:36:08.252 [2024-07-14 10:44:53.168262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.252 [2024-07-14 10:44:53.168293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.252 qpair failed and we were unable to recover it. 00:36:08.252 [2024-07-14 10:44:53.168410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.252 [2024-07-14 10:44:53.168440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.252 qpair failed and we were unable to recover it. 00:36:08.252 [2024-07-14 10:44:53.168627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.252 [2024-07-14 10:44:53.168657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.252 qpair failed and we were unable to recover it. 00:36:08.252 [2024-07-14 10:44:53.168861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.252 [2024-07-14 10:44:53.168890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.252 qpair failed and we were unable to recover it. 
00:36:08.252 [2024-07-14 10:44:53.169070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.252 [2024-07-14 10:44:53.169100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.252 qpair failed and we were unable to recover it. 00:36:08.252 [2024-07-14 10:44:53.169275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.252 [2024-07-14 10:44:53.169306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.252 qpair failed and we were unable to recover it. 00:36:08.252 [2024-07-14 10:44:53.169520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.252 [2024-07-14 10:44:53.169551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.252 qpair failed and we were unable to recover it. 00:36:08.252 [2024-07-14 10:44:53.169693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.252 [2024-07-14 10:44:53.169724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.252 qpair failed and we were unable to recover it. 00:36:08.252 [2024-07-14 10:44:53.169897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.252 [2024-07-14 10:44:53.169928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.252 qpair failed and we were unable to recover it. 00:36:08.252 [2024-07-14 10:44:53.170116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.252 [2024-07-14 10:44:53.170147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.252 qpair failed and we were unable to recover it. 00:36:08.252 [2024-07-14 10:44:53.170283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.252 [2024-07-14 10:44:53.170315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.252 qpair failed and we were unable to recover it. 00:36:08.252 [2024-07-14 10:44:53.170502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.252 [2024-07-14 10:44:53.170539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.252 qpair failed and we were unable to recover it. 00:36:08.252 [2024-07-14 10:44:53.170709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.252 [2024-07-14 10:44:53.170740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.252 qpair failed and we were unable to recover it. 00:36:08.252 [2024-07-14 10:44:53.170861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.252 [2024-07-14 10:44:53.170893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.252 qpair failed and we were unable to recover it. 
00:36:08.252 [2024-07-14 10:44:53.171167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.252 [2024-07-14 10:44:53.171198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.252 qpair failed and we were unable to recover it. 00:36:08.252 [2024-07-14 10:44:53.171353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.252 [2024-07-14 10:44:53.171405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.252 qpair failed and we were unable to recover it. 00:36:08.252 [2024-07-14 10:44:53.171555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.252 [2024-07-14 10:44:53.171586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.252 qpair failed and we were unable to recover it. 00:36:08.252 [2024-07-14 10:44:53.171803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.252 [2024-07-14 10:44:53.171835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.252 qpair failed and we were unable to recover it. 00:36:08.252 [2024-07-14 10:44:53.172027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.252 [2024-07-14 10:44:53.172057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.252 qpair failed and we were unable to recover it. 00:36:08.252 [2024-07-14 10:44:53.172303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.252 [2024-07-14 10:44:53.172335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.252 qpair failed and we were unable to recover it. 00:36:08.252 [2024-07-14 10:44:53.172547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.252 [2024-07-14 10:44:53.172577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.252 qpair failed and we were unable to recover it. 00:36:08.252 [2024-07-14 10:44:53.172708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.252 [2024-07-14 10:44:53.172738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.252 qpair failed and we were unable to recover it. 00:36:08.252 [2024-07-14 10:44:53.172944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.252 [2024-07-14 10:44:53.172975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.252 qpair failed and we were unable to recover it. 00:36:08.252 [2024-07-14 10:44:53.173158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.252 [2024-07-14 10:44:53.173189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.252 qpair failed and we were unable to recover it. 
00:36:08.252 [2024-07-14 10:44:53.173384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.252 [2024-07-14 10:44:53.173416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.252 qpair failed and we were unable to recover it. 00:36:08.252 [2024-07-14 10:44:53.173691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.252 [2024-07-14 10:44:53.173722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.252 qpair failed and we were unable to recover it. 00:36:08.252 [2024-07-14 10:44:53.173915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.252 [2024-07-14 10:44:53.173946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.252 qpair failed and we were unable to recover it. 00:36:08.252 [2024-07-14 10:44:53.174122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.252 [2024-07-14 10:44:53.174152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.252 qpair failed and we were unable to recover it. 00:36:08.252 [2024-07-14 10:44:53.174288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.252 [2024-07-14 10:44:53.174319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.252 qpair failed and we were unable to recover it. 00:36:08.252 [2024-07-14 10:44:53.174561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.252 [2024-07-14 10:44:53.174591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.252 qpair failed and we were unable to recover it. 00:36:08.252 [2024-07-14 10:44:53.174763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.252 [2024-07-14 10:44:53.174793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.252 qpair failed and we were unable to recover it. 00:36:08.252 [2024-07-14 10:44:53.174979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.252 [2024-07-14 10:44:53.175009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.252 qpair failed and we were unable to recover it. 00:36:08.252 [2024-07-14 10:44:53.175258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.252 [2024-07-14 10:44:53.175290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.252 qpair failed and we were unable to recover it. 00:36:08.252 [2024-07-14 10:44:53.175576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.252 [2024-07-14 10:44:53.175607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.252 qpair failed and we were unable to recover it. 
00:36:08.253 [2024-07-14 10:44:53.175733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.253 [2024-07-14 10:44:53.175764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.253 qpair failed and we were unable to recover it. 00:36:08.253 [2024-07-14 10:44:53.175882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.253 [2024-07-14 10:44:53.175913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.253 qpair failed and we were unable to recover it. 00:36:08.253 [2024-07-14 10:44:53.176206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.253 [2024-07-14 10:44:53.176252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.253 qpair failed and we were unable to recover it. 00:36:08.253 [2024-07-14 10:44:53.176477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.253 [2024-07-14 10:44:53.176511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.253 qpair failed and we were unable to recover it. 00:36:08.253 [2024-07-14 10:44:53.176660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.253 [2024-07-14 10:44:53.176694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.253 qpair failed and we were unable to recover it. 00:36:08.253 [2024-07-14 10:44:53.176938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.253 [2024-07-14 10:44:53.176972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.253 qpair failed and we were unable to recover it. 00:36:08.253 [2024-07-14 10:44:53.177147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.253 [2024-07-14 10:44:53.177178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.253 qpair failed and we were unable to recover it. 00:36:08.253 [2024-07-14 10:44:53.177309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.253 [2024-07-14 10:44:53.177343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.253 qpair failed and we were unable to recover it. 00:36:08.253 [2024-07-14 10:44:53.177593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.253 [2024-07-14 10:44:53.177625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.253 qpair failed and we were unable to recover it. 00:36:08.253 [2024-07-14 10:44:53.177835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.253 [2024-07-14 10:44:53.177869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.253 qpair failed and we were unable to recover it. 
00:36:08.253 [2024-07-14 10:44:53.178116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.253 [2024-07-14 10:44:53.178148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.253 qpair failed and we were unable to recover it. 00:36:08.253 [2024-07-14 10:44:53.178341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.253 [2024-07-14 10:44:53.178377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.253 qpair failed and we were unable to recover it. 00:36:08.253 [2024-07-14 10:44:53.178510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.253 [2024-07-14 10:44:53.178541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.253 qpair failed and we were unable to recover it. 00:36:08.253 [2024-07-14 10:44:53.178795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.253 [2024-07-14 10:44:53.178827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.253 qpair failed and we were unable to recover it. 00:36:08.253 [2024-07-14 10:44:53.179008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.253 [2024-07-14 10:44:53.179040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.253 qpair failed and we were unable to recover it. 00:36:08.253 [2024-07-14 10:44:53.179243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.253 [2024-07-14 10:44:53.179275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.253 qpair failed and we were unable to recover it. 00:36:08.253 [2024-07-14 10:44:53.179533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.253 [2024-07-14 10:44:53.179563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.253 qpair failed and we were unable to recover it. 00:36:08.253 [2024-07-14 10:44:53.179841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.253 [2024-07-14 10:44:53.179878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.253 qpair failed and we were unable to recover it. 00:36:08.253 [2024-07-14 10:44:53.180010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.253 [2024-07-14 10:44:53.180039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.253 qpair failed and we were unable to recover it. 00:36:08.253 [2024-07-14 10:44:53.180262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.253 [2024-07-14 10:44:53.180293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.253 qpair failed and we were unable to recover it. 
00:36:08.253 [2024-07-14 10:44:53.180492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.253 [2024-07-14 10:44:53.180522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.253 qpair failed and we were unable to recover it. 00:36:08.253 [2024-07-14 10:44:53.180717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.253 [2024-07-14 10:44:53.180747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.253 qpair failed and we were unable to recover it. 00:36:08.253 [2024-07-14 10:44:53.180980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.253 [2024-07-14 10:44:53.181010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.253 qpair failed and we were unable to recover it. 00:36:08.253 [2024-07-14 10:44:53.181117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.253 [2024-07-14 10:44:53.181150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.253 qpair failed and we were unable to recover it. 00:36:08.253 [2024-07-14 10:44:53.181409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.253 [2024-07-14 10:44:53.181440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.253 qpair failed and we were unable to recover it. 00:36:08.253 [2024-07-14 10:44:53.181566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.253 [2024-07-14 10:44:53.181596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.253 qpair failed and we were unable to recover it. 00:36:08.253 [2024-07-14 10:44:53.181787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.253 [2024-07-14 10:44:53.181817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.253 qpair failed and we were unable to recover it. 00:36:08.253 [2024-07-14 10:44:53.182101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.253 [2024-07-14 10:44:53.182131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.253 qpair failed and we were unable to recover it. 00:36:08.253 [2024-07-14 10:44:53.182326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.253 [2024-07-14 10:44:53.182357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.253 qpair failed and we were unable to recover it. 00:36:08.253 [2024-07-14 10:44:53.182601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.253 [2024-07-14 10:44:53.182632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.253 qpair failed and we were unable to recover it. 
00:36:08.253 [2024-07-14 10:44:53.182833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.253 [2024-07-14 10:44:53.182863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.253 qpair failed and we were unable to recover it. 00:36:08.253 [2024-07-14 10:44:53.183065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.253 [2024-07-14 10:44:53.183095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.253 qpair failed and we were unable to recover it. 00:36:08.253 [2024-07-14 10:44:53.183246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.253 [2024-07-14 10:44:53.183276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.253 qpair failed and we were unable to recover it. 00:36:08.253 [2024-07-14 10:44:53.183500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.253 [2024-07-14 10:44:53.183530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.253 qpair failed and we were unable to recover it. 00:36:08.253 [2024-07-14 10:44:53.183644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.253 [2024-07-14 10:44:53.183673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.253 qpair failed and we were unable to recover it. 00:36:08.253 [2024-07-14 10:44:53.183798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.253 [2024-07-14 10:44:53.183828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.253 qpair failed and we were unable to recover it. 00:36:08.253 [2024-07-14 10:44:53.184038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.184068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 00:36:08.522 [2024-07-14 10:44:53.184193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.184223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 00:36:08.522 [2024-07-14 10:44:53.184353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.184384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 00:36:08.522 [2024-07-14 10:44:53.184508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.184537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 
00:36:08.522 [2024-07-14 10:44:53.184732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.184761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 00:36:08.522 [2024-07-14 10:44:53.184892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.184922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 00:36:08.522 [2024-07-14 10:44:53.185045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.185075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 00:36:08.522 [2024-07-14 10:44:53.185219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.185258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 00:36:08.522 [2024-07-14 10:44:53.185479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.185509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 00:36:08.522 [2024-07-14 10:44:53.185638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.185668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 00:36:08.522 [2024-07-14 10:44:53.185861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.185891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 00:36:08.522 [2024-07-14 10:44:53.186158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.186187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 00:36:08.522 [2024-07-14 10:44:53.186383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.186415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 00:36:08.522 [2024-07-14 10:44:53.186524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.186554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 
00:36:08.522 [2024-07-14 10:44:53.186730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.186760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 00:36:08.522 [2024-07-14 10:44:53.186899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.186928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 00:36:08.522 [2024-07-14 10:44:53.187110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.187139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 00:36:08.522 [2024-07-14 10:44:53.187258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.187289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 00:36:08.522 [2024-07-14 10:44:53.187504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.187533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 00:36:08.522 [2024-07-14 10:44:53.187709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.187738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 00:36:08.522 [2024-07-14 10:44:53.187857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.187887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 00:36:08.522 [2024-07-14 10:44:53.188158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.188196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 00:36:08.522 [2024-07-14 10:44:53.188425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.188458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 00:36:08.522 [2024-07-14 10:44:53.188668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.188698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 
00:36:08.522 [2024-07-14 10:44:53.188890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.188920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 00:36:08.522 [2024-07-14 10:44:53.189106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.189136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 00:36:08.522 [2024-07-14 10:44:53.189404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.189435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 00:36:08.522 [2024-07-14 10:44:53.189635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.189665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 00:36:08.522 [2024-07-14 10:44:53.189793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.189822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 00:36:08.522 [2024-07-14 10:44:53.190093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.190123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 00:36:08.522 [2024-07-14 10:44:53.190318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.190348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 00:36:08.522 [2024-07-14 10:44:53.190540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.190569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 00:36:08.522 [2024-07-14 10:44:53.190837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.190867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 00:36:08.522 [2024-07-14 10:44:53.191121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.191150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 
00:36:08.522 [2024-07-14 10:44:53.191281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.191313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 00:36:08.522 [2024-07-14 10:44:53.191528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.191558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 00:36:08.522 [2024-07-14 10:44:53.191804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.191833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 00:36:08.522 [2024-07-14 10:44:53.192073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.192103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 00:36:08.522 [2024-07-14 10:44:53.192300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.192330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 00:36:08.522 [2024-07-14 10:44:53.192519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.192548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 00:36:08.522 [2024-07-14 10:44:53.192786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.192816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 00:36:08.522 [2024-07-14 10:44:53.192953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.192983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 00:36:08.522 [2024-07-14 10:44:53.193113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.193142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 00:36:08.522 [2024-07-14 10:44:53.193344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.193374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 
00:36:08.522 [2024-07-14 10:44:53.193505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.193534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 00:36:08.522 [2024-07-14 10:44:53.193683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.193712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 00:36:08.522 [2024-07-14 10:44:53.193831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.193862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 00:36:08.522 [2024-07-14 10:44:53.193982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.194011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 00:36:08.522 [2024-07-14 10:44:53.194235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.194266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 00:36:08.522 [2024-07-14 10:44:53.194448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.194477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 00:36:08.522 [2024-07-14 10:44:53.194745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.194774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 00:36:08.522 [2024-07-14 10:44:53.194896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.194926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 00:36:08.522 [2024-07-14 10:44:53.195130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.195159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 00:36:08.522 [2024-07-14 10:44:53.195356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.195387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 
00:36:08.522 [2024-07-14 10:44:53.195634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.195664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 00:36:08.522 [2024-07-14 10:44:53.195840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.195870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 00:36:08.522 [2024-07-14 10:44:53.196001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.196031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 00:36:08.522 [2024-07-14 10:44:53.196159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.196188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 00:36:08.522 [2024-07-14 10:44:53.196458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.196488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 00:36:08.522 [2024-07-14 10:44:53.196622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.196652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 00:36:08.522 [2024-07-14 10:44:53.196898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.196928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 00:36:08.522 [2024-07-14 10:44:53.197047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.197082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 00:36:08.522 [2024-07-14 10:44:53.197357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.197388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 00:36:08.522 [2024-07-14 10:44:53.197587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.197617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 
00:36:08.522 [2024-07-14 10:44:53.197816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.197845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 00:36:08.522 [2024-07-14 10:44:53.198135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.198164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 00:36:08.522 [2024-07-14 10:44:53.198486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.198517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 00:36:08.522 [2024-07-14 10:44:53.198781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.198810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 00:36:08.522 [2024-07-14 10:44:53.199003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.199032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 00:36:08.522 [2024-07-14 10:44:53.199282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.199313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 00:36:08.522 [2024-07-14 10:44:53.199495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.199524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 00:36:08.522 [2024-07-14 10:44:53.199713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.199743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 00:36:08.522 [2024-07-14 10:44:53.199969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.199999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 00:36:08.522 [2024-07-14 10:44:53.200192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.200222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 
00:36:08.522 [2024-07-14 10:44:53.200438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.200468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 00:36:08.522 [2024-07-14 10:44:53.200673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.200703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 00:36:08.522 [2024-07-14 10:44:53.200957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.200987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 00:36:08.522 [2024-07-14 10:44:53.201177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.201206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 00:36:08.522 [2024-07-14 10:44:53.201505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.201535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 00:36:08.522 [2024-07-14 10:44:53.201677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.201706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 00:36:08.522 [2024-07-14 10:44:53.201917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.201946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 00:36:08.522 [2024-07-14 10:44:53.202086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.522 [2024-07-14 10:44:53.202115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.522 qpair failed and we were unable to recover it. 00:36:08.522 [2024-07-14 10:44:53.202242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.523 [2024-07-14 10:44:53.202272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.523 qpair failed and we were unable to recover it. 00:36:08.523 [2024-07-14 10:44:53.202528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.523 [2024-07-14 10:44:53.202558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.523 qpair failed and we were unable to recover it. 
00:36:08.523 [2024-07-14 10:44:53.202823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.523 [2024-07-14 10:44:53.202852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.523 qpair failed and we were unable to recover it. 00:36:08.523 [2024-07-14 10:44:53.203119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.523 [2024-07-14 10:44:53.203149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.523 qpair failed and we were unable to recover it. 00:36:08.523 [2024-07-14 10:44:53.203325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.523 [2024-07-14 10:44:53.203355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.523 qpair failed and we were unable to recover it. 00:36:08.523 [2024-07-14 10:44:53.203546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.523 [2024-07-14 10:44:53.203575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.523 qpair failed and we were unable to recover it. 00:36:08.523 [2024-07-14 10:44:53.203768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.523 [2024-07-14 10:44:53.203799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.523 qpair failed and we were unable to recover it. 00:36:08.523 [2024-07-14 10:44:53.204010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.523 [2024-07-14 10:44:53.204040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.523 qpair failed and we were unable to recover it. 00:36:08.523 [2024-07-14 10:44:53.204223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.523 [2024-07-14 10:44:53.204276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.523 qpair failed and we were unable to recover it. 00:36:08.523 [2024-07-14 10:44:53.204542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.523 [2024-07-14 10:44:53.204572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.523 qpair failed and we were unable to recover it. 00:36:08.523 [2024-07-14 10:44:53.204779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.523 [2024-07-14 10:44:53.204809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.523 qpair failed and we were unable to recover it. 00:36:08.523 [2024-07-14 10:44:53.205037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.523 [2024-07-14 10:44:53.205067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.523 qpair failed and we were unable to recover it. 
00:36:08.523 [2024-07-14 10:44:53.205195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.523 [2024-07-14 10:44:53.205233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.523 qpair failed and we were unable to recover it. 00:36:08.523 [2024-07-14 10:44:53.205474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.523 [2024-07-14 10:44:53.205503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.523 qpair failed and we were unable to recover it. 00:36:08.523 [2024-07-14 10:44:53.205621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.523 [2024-07-14 10:44:53.205650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.523 qpair failed and we were unable to recover it. 00:36:08.523 [2024-07-14 10:44:53.205855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.523 [2024-07-14 10:44:53.205885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.523 qpair failed and we were unable to recover it. 00:36:08.523 [2024-07-14 10:44:53.206068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.523 [2024-07-14 10:44:53.206099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.523 qpair failed and we were unable to recover it. 00:36:08.523 [2024-07-14 10:44:53.206317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.523 [2024-07-14 10:44:53.206348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.523 qpair failed and we were unable to recover it. 00:36:08.523 [2024-07-14 10:44:53.206586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.523 [2024-07-14 10:44:53.206616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.523 qpair failed and we were unable to recover it. 00:36:08.523 [2024-07-14 10:44:53.206751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.523 [2024-07-14 10:44:53.206791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.523 qpair failed and we were unable to recover it. 00:36:08.523 [2024-07-14 10:44:53.207037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.523 [2024-07-14 10:44:53.207066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.523 qpair failed and we were unable to recover it. 00:36:08.523 [2024-07-14 10:44:53.207283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.523 [2024-07-14 10:44:53.207315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.523 qpair failed and we were unable to recover it. 
00:36:08.523 [2024-07-14 10:44:53.207446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.523 [2024-07-14 10:44:53.207475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.523 qpair failed and we were unable to recover it. 00:36:08.523 [2024-07-14 10:44:53.207740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.523 [2024-07-14 10:44:53.207770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.523 qpair failed and we were unable to recover it. 00:36:08.523 [2024-07-14 10:44:53.207942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.523 [2024-07-14 10:44:53.207972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.523 qpair failed and we were unable to recover it. 00:36:08.523 [2024-07-14 10:44:53.208094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.523 [2024-07-14 10:44:53.208124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.523 qpair failed and we were unable to recover it. 00:36:08.523 [2024-07-14 10:44:53.208369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.523 [2024-07-14 10:44:53.208400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.523 qpair failed and we were unable to recover it. 00:36:08.523 [2024-07-14 10:44:53.208513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.523 [2024-07-14 10:44:53.208543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.523 qpair failed and we were unable to recover it. 00:36:08.523 [2024-07-14 10:44:53.208785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.523 [2024-07-14 10:44:53.208815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.523 qpair failed and we were unable to recover it. 00:36:08.523 [2024-07-14 10:44:53.209011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.523 [2024-07-14 10:44:53.209040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.523 qpair failed and we were unable to recover it. 00:36:08.523 [2024-07-14 10:44:53.209232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.523 [2024-07-14 10:44:53.209263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.523 qpair failed and we were unable to recover it. 00:36:08.523 [2024-07-14 10:44:53.209551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.523 [2024-07-14 10:44:53.209580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.523 qpair failed and we were unable to recover it. 
00:36:08.523 [2024-07-14 10:44:53.209764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.523 [2024-07-14 10:44:53.209794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.523 qpair failed and we were unable to recover it. 00:36:08.523 [2024-07-14 10:44:53.210065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.523 [2024-07-14 10:44:53.210095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.523 qpair failed and we were unable to recover it. 00:36:08.523 [2024-07-14 10:44:53.210365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.523 [2024-07-14 10:44:53.210395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.523 qpair failed and we were unable to recover it. 00:36:08.523 [2024-07-14 10:44:53.210595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.523 [2024-07-14 10:44:53.210625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.523 qpair failed and we were unable to recover it. 00:36:08.523 [2024-07-14 10:44:53.210741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.523 [2024-07-14 10:44:53.210771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.523 qpair failed and we were unable to recover it. 00:36:08.523 [2024-07-14 10:44:53.210958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.523 [2024-07-14 10:44:53.210987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.523 qpair failed and we were unable to recover it. 00:36:08.523 [2024-07-14 10:44:53.211254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.523 [2024-07-14 10:44:53.211285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.523 qpair failed and we were unable to recover it. 00:36:08.523 [2024-07-14 10:44:53.211488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.523 [2024-07-14 10:44:53.211517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.523 qpair failed and we were unable to recover it. 00:36:08.523 [2024-07-14 10:44:53.211642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.523 [2024-07-14 10:44:53.211671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.523 qpair failed and we were unable to recover it. 00:36:08.523 [2024-07-14 10:44:53.211894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.523 [2024-07-14 10:44:53.211924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.523 qpair failed and we were unable to recover it. 
00:36:08.523 [2024-07-14 10:44:53.212105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.523 [2024-07-14 10:44:53.212134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.523 qpair failed and we were unable to recover it. 00:36:08.523 [2024-07-14 10:44:53.212306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.523 [2024-07-14 10:44:53.212337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.523 qpair failed and we were unable to recover it. 00:36:08.523 [2024-07-14 10:44:53.212518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.523 [2024-07-14 10:44:53.212548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.523 qpair failed and we were unable to recover it. 00:36:08.523 [2024-07-14 10:44:53.212812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.523 [2024-07-14 10:44:53.212842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.523 qpair failed and we were unable to recover it. 00:36:08.523 [2024-07-14 10:44:53.213039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.523 [2024-07-14 10:44:53.213068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.523 qpair failed and we were unable to recover it. 00:36:08.523 [2024-07-14 10:44:53.213266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.523 [2024-07-14 10:44:53.213296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.523 qpair failed and we were unable to recover it. 00:36:08.523 [2024-07-14 10:44:53.213422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.523 [2024-07-14 10:44:53.213452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.523 qpair failed and we were unable to recover it. 00:36:08.523 [2024-07-14 10:44:53.213589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.523 [2024-07-14 10:44:53.213619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.523 qpair failed and we were unable to recover it. 00:36:08.523 [2024-07-14 10:44:53.213793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.523 [2024-07-14 10:44:53.213822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.523 qpair failed and we were unable to recover it. 00:36:08.523 [2024-07-14 10:44:53.214099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.523 [2024-07-14 10:44:53.214130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.523 qpair failed and we were unable to recover it. 
00:36:08.523 [2024-07-14 10:44:53.214264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.523 [2024-07-14 10:44:53.214295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.523 qpair failed and we were unable to recover it. 00:36:08.523 [2024-07-14 10:44:53.214500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.523 [2024-07-14 10:44:53.214530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.523 qpair failed and we were unable to recover it. 00:36:08.523 [2024-07-14 10:44:53.214773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.523 [2024-07-14 10:44:53.214802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.523 qpair failed and we were unable to recover it. 00:36:08.523 [2024-07-14 10:44:53.214979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.523 [2024-07-14 10:44:53.215009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.523 qpair failed and we were unable to recover it. 00:36:08.523 [2024-07-14 10:44:53.215199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.523 [2024-07-14 10:44:53.215239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.523 qpair failed and we were unable to recover it. 00:36:08.523 [2024-07-14 10:44:53.215377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.523 [2024-07-14 10:44:53.215408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.523 qpair failed and we were unable to recover it. 00:36:08.523 [2024-07-14 10:44:53.215646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.523 [2024-07-14 10:44:53.215675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.523 qpair failed and we were unable to recover it. 00:36:08.523 [2024-07-14 10:44:53.215930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.523 [2024-07-14 10:44:53.215964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.523 qpair failed and we were unable to recover it. 00:36:08.523 [2024-07-14 10:44:53.216164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.523 [2024-07-14 10:44:53.216194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.523 qpair failed and we were unable to recover it. 00:36:08.523 [2024-07-14 10:44:53.216468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.523 [2024-07-14 10:44:53.216499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.523 qpair failed and we were unable to recover it. 
00:36:08.523 [2024-07-14 10:44:53.216674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.523 [2024-07-14 10:44:53.216704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.523 qpair failed and we were unable to recover it. 00:36:08.523 [2024-07-14 10:44:53.216822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.523 [2024-07-14 10:44:53.216852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.523 qpair failed and we were unable to recover it. 00:36:08.523 [2024-07-14 10:44:53.216976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.523 [2024-07-14 10:44:53.217006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.523 qpair failed and we were unable to recover it. 00:36:08.523 [2024-07-14 10:44:53.217247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.523 [2024-07-14 10:44:53.217278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.523 qpair failed and we were unable to recover it. 00:36:08.523 [2024-07-14 10:44:53.217416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.523 [2024-07-14 10:44:53.217445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.523 qpair failed and we were unable to recover it. 00:36:08.523 [2024-07-14 10:44:53.217655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.523 [2024-07-14 10:44:53.217685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.523 qpair failed and we were unable to recover it. 00:36:08.524 [2024-07-14 10:44:53.217895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.524 [2024-07-14 10:44:53.217925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.524 qpair failed and we were unable to recover it. 00:36:08.524 [2024-07-14 10:44:53.218061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.524 [2024-07-14 10:44:53.218090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.524 qpair failed and we were unable to recover it. 00:36:08.524 [2024-07-14 10:44:53.218289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.524 [2024-07-14 10:44:53.218319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.524 qpair failed and we were unable to recover it. 00:36:08.524 [2024-07-14 10:44:53.218432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.524 [2024-07-14 10:44:53.218462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.524 qpair failed and we were unable to recover it. 
00:36:08.524 [2024-07-14 10:44:53.218703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.524 [2024-07-14 10:44:53.218733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.524 qpair failed and we were unable to recover it. 00:36:08.524 [2024-07-14 10:44:53.218976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.524 [2024-07-14 10:44:53.219005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.524 qpair failed and we were unable to recover it. 00:36:08.524 [2024-07-14 10:44:53.219193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.524 [2024-07-14 10:44:53.219222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.524 qpair failed and we were unable to recover it. 00:36:08.524 [2024-07-14 10:44:53.219356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.524 [2024-07-14 10:44:53.219386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.524 qpair failed and we were unable to recover it. 00:36:08.524 [2024-07-14 10:44:53.219518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.524 [2024-07-14 10:44:53.219548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.524 qpair failed and we were unable to recover it. 00:36:08.524 [2024-07-14 10:44:53.219792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.524 [2024-07-14 10:44:53.219821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.524 qpair failed and we were unable to recover it. 00:36:08.524 [2024-07-14 10:44:53.219930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.524 [2024-07-14 10:44:53.219959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.524 qpair failed and we were unable to recover it. 00:36:08.524 [2024-07-14 10:44:53.220157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.524 [2024-07-14 10:44:53.220186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.524 qpair failed and we were unable to recover it. 00:36:08.524 [2024-07-14 10:44:53.220382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.524 [2024-07-14 10:44:53.220412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.524 qpair failed and we were unable to recover it. 00:36:08.524 [2024-07-14 10:44:53.220589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.524 [2024-07-14 10:44:53.220619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.524 qpair failed and we were unable to recover it. 
00:36:08.524 [2024-07-14 10:44:53.220805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.524 [2024-07-14 10:44:53.220834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.524 qpair failed and we were unable to recover it. 00:36:08.524 [2024-07-14 10:44:53.220948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.524 [2024-07-14 10:44:53.220978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.524 qpair failed and we were unable to recover it. 00:36:08.524 [2024-07-14 10:44:53.221112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.524 [2024-07-14 10:44:53.221141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.524 qpair failed and we were unable to recover it. 00:36:08.524 [2024-07-14 10:44:53.221275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.524 [2024-07-14 10:44:53.221306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.524 qpair failed and we were unable to recover it. 00:36:08.524 [2024-07-14 10:44:53.221507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.524 [2024-07-14 10:44:53.221537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.524 qpair failed and we were unable to recover it. 00:36:08.524 [2024-07-14 10:44:53.221736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.524 [2024-07-14 10:44:53.221766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.524 qpair failed and we were unable to recover it. 00:36:08.524 [2024-07-14 10:44:53.221944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.524 [2024-07-14 10:44:53.221973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.524 qpair failed and we were unable to recover it. 00:36:08.524 [2024-07-14 10:44:53.222170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.524 [2024-07-14 10:44:53.222199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.524 qpair failed and we were unable to recover it. 00:36:08.524 [2024-07-14 10:44:53.222413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.524 [2024-07-14 10:44:53.222444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.524 qpair failed and we were unable to recover it. 00:36:08.524 [2024-07-14 10:44:53.222739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.524 [2024-07-14 10:44:53.222769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.524 qpair failed and we were unable to recover it. 
00:36:08.524 [2024-07-14 10:44:53.222901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.524 [2024-07-14 10:44:53.222930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.524 qpair failed and we were unable to recover it. 00:36:08.524 [2024-07-14 10:44:53.223117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.524 [2024-07-14 10:44:53.223147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.524 qpair failed and we were unable to recover it. 00:36:08.524 [2024-07-14 10:44:53.223321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.524 [2024-07-14 10:44:53.223352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.524 qpair failed and we were unable to recover it. 00:36:08.524 [2024-07-14 10:44:53.223475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.524 [2024-07-14 10:44:53.223504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.524 qpair failed and we were unable to recover it. 00:36:08.524 [2024-07-14 10:44:53.223707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.524 [2024-07-14 10:44:53.223737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.524 qpair failed and we were unable to recover it. 00:36:08.524 [2024-07-14 10:44:53.223911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.524 [2024-07-14 10:44:53.223941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.524 qpair failed and we were unable to recover it. 00:36:08.524 [2024-07-14 10:44:53.224082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.524 [2024-07-14 10:44:53.224112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.524 qpair failed and we were unable to recover it. 00:36:08.524 [2024-07-14 10:44:53.224239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.524 [2024-07-14 10:44:53.224275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.524 qpair failed and we were unable to recover it. 00:36:08.524 [2024-07-14 10:44:53.224383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.524 [2024-07-14 10:44:53.224413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.524 qpair failed and we were unable to recover it. 00:36:08.524 [2024-07-14 10:44:53.224595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.525 [2024-07-14 10:44:53.224624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.525 qpair failed and we were unable to recover it. 
00:36:08.525 [2024-07-14 10:44:53.224887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.525 [2024-07-14 10:44:53.224917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.525 qpair failed and we were unable to recover it. 00:36:08.525 [2024-07-14 10:44:53.225109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.525 [2024-07-14 10:44:53.225139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.525 qpair failed and we were unable to recover it. 00:36:08.525 [2024-07-14 10:44:53.225435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.525 [2024-07-14 10:44:53.225466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.525 qpair failed and we were unable to recover it. 00:36:08.525 [2024-07-14 10:44:53.225730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.525 [2024-07-14 10:44:53.225759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.525 qpair failed and we were unable to recover it. 00:36:08.525 [2024-07-14 10:44:53.225890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.525 [2024-07-14 10:44:53.225919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.525 qpair failed and we were unable to recover it. 00:36:08.525 [2024-07-14 10:44:53.226112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.525 [2024-07-14 10:44:53.226142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.525 qpair failed and we were unable to recover it. 00:36:08.525 [2024-07-14 10:44:53.226355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.525 [2024-07-14 10:44:53.226385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.525 qpair failed and we were unable to recover it. 00:36:08.525 [2024-07-14 10:44:53.226516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.525 [2024-07-14 10:44:53.226546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.525 qpair failed and we were unable to recover it. 00:36:08.525 [2024-07-14 10:44:53.226728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.525 [2024-07-14 10:44:53.226757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.525 qpair failed and we were unable to recover it. 00:36:08.525 [2024-07-14 10:44:53.226932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.525 [2024-07-14 10:44:53.226962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.525 qpair failed and we were unable to recover it. 
00:36:08.525 [2024-07-14 10:44:53.227099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.525 [2024-07-14 10:44:53.227128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.525 qpair failed and we were unable to recover it. 00:36:08.525 [2024-07-14 10:44:53.227382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.525 [2024-07-14 10:44:53.227413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.525 qpair failed and we were unable to recover it. 00:36:08.525 [2024-07-14 10:44:53.227598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.525 [2024-07-14 10:44:53.227628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.525 qpair failed and we were unable to recover it. 00:36:08.525 [2024-07-14 10:44:53.227821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.525 [2024-07-14 10:44:53.227851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.525 qpair failed and we were unable to recover it. 00:36:08.525 [2024-07-14 10:44:53.228029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.525 [2024-07-14 10:44:53.228059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.525 qpair failed and we were unable to recover it. 00:36:08.525 [2024-07-14 10:44:53.228178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.525 [2024-07-14 10:44:53.228207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.525 qpair failed and we were unable to recover it. 00:36:08.525 [2024-07-14 10:44:53.228476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.525 [2024-07-14 10:44:53.228506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.525 qpair failed and we were unable to recover it. 00:36:08.525 [2024-07-14 10:44:53.228702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.525 [2024-07-14 10:44:53.228731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.525 qpair failed and we were unable to recover it. 00:36:08.525 [2024-07-14 10:44:53.228981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.525 [2024-07-14 10:44:53.229010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.525 qpair failed and we were unable to recover it. 00:36:08.525 [2024-07-14 10:44:53.229183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.525 [2024-07-14 10:44:53.229212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.525 qpair failed and we were unable to recover it. 
00:36:08.525 [2024-07-14 10:44:53.229366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.525 [2024-07-14 10:44:53.229397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.525 qpair failed and we were unable to recover it. 00:36:08.525 [2024-07-14 10:44:53.229591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.525 [2024-07-14 10:44:53.229621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.525 qpair failed and we were unable to recover it. 00:36:08.525 [2024-07-14 10:44:53.229812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.525 [2024-07-14 10:44:53.229841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.525 qpair failed and we were unable to recover it. 00:36:08.525 [2024-07-14 10:44:53.230033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.525 [2024-07-14 10:44:53.230063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.525 qpair failed and we were unable to recover it. 00:36:08.525 [2024-07-14 10:44:53.230345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.525 [2024-07-14 10:44:53.230403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.525 qpair failed and we were unable to recover it. 00:36:08.525 [2024-07-14 10:44:53.230624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.525 [2024-07-14 10:44:53.230655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.525 qpair failed and we were unable to recover it. 00:36:08.525 [2024-07-14 10:44:53.230789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.525 [2024-07-14 10:44:53.230819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.525 qpair failed and we were unable to recover it. 00:36:08.525 [2024-07-14 10:44:53.230959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.525 [2024-07-14 10:44:53.230989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.525 qpair failed and we were unable to recover it. 00:36:08.525 [2024-07-14 10:44:53.231186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.525 [2024-07-14 10:44:53.231215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.525 qpair failed and we were unable to recover it. 00:36:08.525 [2024-07-14 10:44:53.231439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.525 [2024-07-14 10:44:53.231470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.525 qpair failed and we were unable to recover it. 
00:36:08.525 [2024-07-14 10:44:53.231650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.525 [2024-07-14 10:44:53.231679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.525 qpair failed and we were unable to recover it. 00:36:08.525 [2024-07-14 10:44:53.231862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.525 [2024-07-14 10:44:53.231891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.525 qpair failed and we were unable to recover it. 00:36:08.525 [2024-07-14 10:44:53.232136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.525 [2024-07-14 10:44:53.232166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.525 qpair failed and we were unable to recover it. 00:36:08.525 [2024-07-14 10:44:53.232433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.525 [2024-07-14 10:44:53.232464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.525 qpair failed and we were unable to recover it. 00:36:08.525 [2024-07-14 10:44:53.232659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.525 [2024-07-14 10:44:53.232688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.525 qpair failed and we were unable to recover it. 00:36:08.525 [2024-07-14 10:44:53.232872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.525 [2024-07-14 10:44:53.232901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.525 qpair failed and we were unable to recover it. 00:36:08.525 [2024-07-14 10:44:53.233190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.525 [2024-07-14 10:44:53.233220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.525 qpair failed and we were unable to recover it. 00:36:08.525 [2024-07-14 10:44:53.233496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.525 [2024-07-14 10:44:53.233526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.525 qpair failed and we were unable to recover it. 00:36:08.525 [2024-07-14 10:44:53.233735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.525 [2024-07-14 10:44:53.233766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.525 qpair failed and we were unable to recover it. 00:36:08.525 [2024-07-14 10:44:53.234045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.525 [2024-07-14 10:44:53.234075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.525 qpair failed and we were unable to recover it. 
00:36:08.525 [2024-07-14 10:44:53.234281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.526 [2024-07-14 10:44:53.234312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.526 qpair failed and we were unable to recover it. 00:36:08.526 [2024-07-14 10:44:53.234587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.526 [2024-07-14 10:44:53.234617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.526 qpair failed and we were unable to recover it. 00:36:08.526 [2024-07-14 10:44:53.234860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.526 [2024-07-14 10:44:53.234889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.526 qpair failed and we were unable to recover it. 00:36:08.526 [2024-07-14 10:44:53.235018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.526 [2024-07-14 10:44:53.235048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.526 qpair failed and we were unable to recover it. 00:36:08.526 [2024-07-14 10:44:53.235241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.526 [2024-07-14 10:44:53.235273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.526 qpair failed and we were unable to recover it. 00:36:08.526 [2024-07-14 10:44:53.235470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.526 [2024-07-14 10:44:53.235499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.526 qpair failed and we were unable to recover it. 00:36:08.526 [2024-07-14 10:44:53.235616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.526 [2024-07-14 10:44:53.235646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.526 qpair failed and we were unable to recover it. 00:36:08.526 [2024-07-14 10:44:53.235834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.526 [2024-07-14 10:44:53.235863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.526 qpair failed and we were unable to recover it. 00:36:08.526 [2024-07-14 10:44:53.235994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.526 [2024-07-14 10:44:53.236023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.526 qpair failed and we were unable to recover it. 00:36:08.526 [2024-07-14 10:44:53.236133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.526 [2024-07-14 10:44:53.236162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.526 qpair failed and we were unable to recover it. 
00:36:08.526 [2024-07-14 10:44:53.236316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.526 [2024-07-14 10:44:53.236346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.526 qpair failed and we were unable to recover it. 00:36:08.526 [2024-07-14 10:44:53.236600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.526 [2024-07-14 10:44:53.236633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.526 qpair failed and we were unable to recover it. 00:36:08.526 [2024-07-14 10:44:53.236811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.526 [2024-07-14 10:44:53.236841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.526 qpair failed and we were unable to recover it. 00:36:08.526 [2024-07-14 10:44:53.236970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.526 [2024-07-14 10:44:53.237000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.526 qpair failed and we were unable to recover it. 00:36:08.526 [2024-07-14 10:44:53.237101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.526 [2024-07-14 10:44:53.237130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.526 qpair failed and we were unable to recover it. 00:36:08.526 [2024-07-14 10:44:53.237399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.526 [2024-07-14 10:44:53.237429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.526 qpair failed and we were unable to recover it. 00:36:08.526 [2024-07-14 10:44:53.237544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.526 [2024-07-14 10:44:53.237574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.526 qpair failed and we were unable to recover it. 00:36:08.526 [2024-07-14 10:44:53.237786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.526 [2024-07-14 10:44:53.237816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.526 qpair failed and we were unable to recover it. 00:36:08.526 [2024-07-14 10:44:53.237959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.526 [2024-07-14 10:44:53.237989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.526 qpair failed and we were unable to recover it. 00:36:08.526 [2024-07-14 10:44:53.238167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.526 [2024-07-14 10:44:53.238196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.526 qpair failed and we were unable to recover it. 
00:36:08.526 [2024-07-14 10:44:53.238475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.526 [2024-07-14 10:44:53.238509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.526 qpair failed and we were unable to recover it. 00:36:08.526 [2024-07-14 10:44:53.238685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.526 [2024-07-14 10:44:53.238716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.526 qpair failed and we were unable to recover it. 00:36:08.526 [2024-07-14 10:44:53.238861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.526 [2024-07-14 10:44:53.238891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.526 qpair failed and we were unable to recover it. 00:36:08.526 [2024-07-14 10:44:53.239069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.526 [2024-07-14 10:44:53.239099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.526 qpair failed and we were unable to recover it. 00:36:08.526 [2024-07-14 10:44:53.239220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.526 [2024-07-14 10:44:53.239262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.526 qpair failed and we were unable to recover it. 00:36:08.526 [2024-07-14 10:44:53.239463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.526 [2024-07-14 10:44:53.239493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.526 qpair failed and we were unable to recover it. 00:36:08.526 [2024-07-14 10:44:53.239754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.526 [2024-07-14 10:44:53.239784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.526 qpair failed and we were unable to recover it. 00:36:08.526 [2024-07-14 10:44:53.240051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.526 [2024-07-14 10:44:53.240080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.526 qpair failed and we were unable to recover it. 00:36:08.526 [2024-07-14 10:44:53.240292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.526 [2024-07-14 10:44:53.240323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.526 qpair failed and we were unable to recover it. 00:36:08.526 [2024-07-14 10:44:53.240492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.526 [2024-07-14 10:44:53.240522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.526 qpair failed and we were unable to recover it. 
00:36:08.526 [2024-07-14 10:44:53.240726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.526 [2024-07-14 10:44:53.240756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.526 qpair failed and we were unable to recover it. 00:36:08.526 [2024-07-14 10:44:53.241001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.526 [2024-07-14 10:44:53.241030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.526 qpair failed and we were unable to recover it. 00:36:08.526 [2024-07-14 10:44:53.241297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.526 [2024-07-14 10:44:53.241327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.526 qpair failed and we were unable to recover it. 00:36:08.526 [2024-07-14 10:44:53.241593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.526 [2024-07-14 10:44:53.241623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.526 qpair failed and we were unable to recover it. 00:36:08.526 [2024-07-14 10:44:53.241812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.526 [2024-07-14 10:44:53.241842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.526 qpair failed and we were unable to recover it. 00:36:08.526 [2024-07-14 10:44:53.242034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.526 [2024-07-14 10:44:53.242064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.526 qpair failed and we were unable to recover it. 00:36:08.526 [2024-07-14 10:44:53.242340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.526 [2024-07-14 10:44:53.242370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.526 qpair failed and we were unable to recover it. 00:36:08.526 [2024-07-14 10:44:53.242518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.526 [2024-07-14 10:44:53.242548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.526 qpair failed and we were unable to recover it. 00:36:08.526 [2024-07-14 10:44:53.242679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.526 [2024-07-14 10:44:53.242713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.526 qpair failed and we were unable to recover it. 00:36:08.526 [2024-07-14 10:44:53.242832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.526 [2024-07-14 10:44:53.242861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.526 qpair failed and we were unable to recover it. 
00:36:08.526 [2024-07-14 10:44:53.243058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.526 [2024-07-14 10:44:53.243088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.526 qpair failed and we were unable to recover it. 00:36:08.526 [2024-07-14 10:44:53.243222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.527 [2024-07-14 10:44:53.243262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.527 qpair failed and we were unable to recover it. 00:36:08.527 [2024-07-14 10:44:53.243505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.527 [2024-07-14 10:44:53.243534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.527 qpair failed and we were unable to recover it. 00:36:08.527 [2024-07-14 10:44:53.243730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.527 [2024-07-14 10:44:53.243758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.527 qpair failed and we were unable to recover it. 00:36:08.527 [2024-07-14 10:44:53.243936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.527 [2024-07-14 10:44:53.243966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.527 qpair failed and we were unable to recover it. 00:36:08.527 [2024-07-14 10:44:53.244204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.527 [2024-07-14 10:44:53.244243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.527 qpair failed and we were unable to recover it. 00:36:08.527 [2024-07-14 10:44:53.244468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.527 [2024-07-14 10:44:53.244497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.527 qpair failed and we were unable to recover it. 00:36:08.527 [2024-07-14 10:44:53.244624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.527 [2024-07-14 10:44:53.244654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.527 qpair failed and we were unable to recover it. 00:36:08.527 [2024-07-14 10:44:53.244844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.527 [2024-07-14 10:44:53.244873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.527 qpair failed and we were unable to recover it. 00:36:08.527 [2024-07-14 10:44:53.245067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.527 [2024-07-14 10:44:53.245096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.527 qpair failed and we were unable to recover it. 
00:36:08.527 [2024-07-14 10:44:53.245275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.527 [2024-07-14 10:44:53.245306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.527 qpair failed and we were unable to recover it. 00:36:08.527 [2024-07-14 10:44:53.245509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.527 [2024-07-14 10:44:53.245543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.527 qpair failed and we were unable to recover it. 00:36:08.527 [2024-07-14 10:44:53.245668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.527 [2024-07-14 10:44:53.245698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.527 qpair failed and we were unable to recover it. 00:36:08.527 [2024-07-14 10:44:53.245811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.527 [2024-07-14 10:44:53.245840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.527 qpair failed and we were unable to recover it. 00:36:08.527 [2024-07-14 10:44:53.246031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.527 [2024-07-14 10:44:53.246061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.527 qpair failed and we were unable to recover it. 00:36:08.527 [2024-07-14 10:44:53.246331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.527 [2024-07-14 10:44:53.246362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.527 qpair failed and we were unable to recover it. 00:36:08.527 [2024-07-14 10:44:53.246537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.527 [2024-07-14 10:44:53.246566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.527 qpair failed and we were unable to recover it. 00:36:08.527 [2024-07-14 10:44:53.246758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.527 [2024-07-14 10:44:53.246787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.527 qpair failed and we were unable to recover it. 00:36:08.527 [2024-07-14 10:44:53.247040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.527 [2024-07-14 10:44:53.247070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.527 qpair failed and we were unable to recover it. 00:36:08.527 [2024-07-14 10:44:53.247336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.527 [2024-07-14 10:44:53.247366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.527 qpair failed and we were unable to recover it. 
00:36:08.527 [2024-07-14 10:44:53.247486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.527 [2024-07-14 10:44:53.247515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.527 qpair failed and we were unable to recover it. 00:36:08.527 [2024-07-14 10:44:53.247700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.527 [2024-07-14 10:44:53.247730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.527 qpair failed and we were unable to recover it. 00:36:08.527 [2024-07-14 10:44:53.247862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.527 [2024-07-14 10:44:53.247892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.527 qpair failed and we were unable to recover it. 00:36:08.527 [2024-07-14 10:44:53.248131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.527 [2024-07-14 10:44:53.248160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.527 qpair failed and we were unable to recover it. 00:36:08.527 [2024-07-14 10:44:53.248430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.527 [2024-07-14 10:44:53.248460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.527 qpair failed and we were unable to recover it. 00:36:08.527 [2024-07-14 10:44:53.248589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.527 [2024-07-14 10:44:53.248618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.527 qpair failed and we were unable to recover it. 00:36:08.527 [2024-07-14 10:44:53.248907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.527 [2024-07-14 10:44:53.248937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.527 qpair failed and we were unable to recover it. 00:36:08.527 [2024-07-14 10:44:53.249052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.527 [2024-07-14 10:44:53.249082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.527 qpair failed and we were unable to recover it. 00:36:08.527 [2024-07-14 10:44:53.249257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.527 [2024-07-14 10:44:53.249287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.527 qpair failed and we were unable to recover it. 00:36:08.527 [2024-07-14 10:44:53.249461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.527 [2024-07-14 10:44:53.249490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.527 qpair failed and we were unable to recover it. 
00:36:08.527 [2024-07-14 10:44:53.249700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.527 [2024-07-14 10:44:53.249730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.527 qpair failed and we were unable to recover it. 00:36:08.527 [2024-07-14 10:44:53.249938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.527 [2024-07-14 10:44:53.249967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.527 qpair failed and we were unable to recover it. 00:36:08.527 [2024-07-14 10:44:53.250236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.527 [2024-07-14 10:44:53.250267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.527 qpair failed and we were unable to recover it. 00:36:08.527 [2024-07-14 10:44:53.250535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.527 [2024-07-14 10:44:53.250564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.527 qpair failed and we were unable to recover it. 00:36:08.527 [2024-07-14 10:44:53.250756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.527 [2024-07-14 10:44:53.250786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.527 qpair failed and we were unable to recover it. 00:36:08.527 [2024-07-14 10:44:53.250976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.527 [2024-07-14 10:44:53.251005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.527 qpair failed and we were unable to recover it. 00:36:08.527 [2024-07-14 10:44:53.251267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.527 [2024-07-14 10:44:53.251297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.527 qpair failed and we were unable to recover it. 00:36:08.527 [2024-07-14 10:44:53.251560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.527 [2024-07-14 10:44:53.251589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.527 qpair failed and we were unable to recover it. 00:36:08.527 [2024-07-14 10:44:53.251725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.527 [2024-07-14 10:44:53.251758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.527 qpair failed and we were unable to recover it. 00:36:08.527 [2024-07-14 10:44:53.252021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.527 [2024-07-14 10:44:53.252051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.527 qpair failed and we were unable to recover it. 
00:36:08.527 [2024-07-14 10:44:53.252262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.527 [2024-07-14 10:44:53.252292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.527 qpair failed and we were unable to recover it. 00:36:08.527 [2024-07-14 10:44:53.252434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.527 [2024-07-14 10:44:53.252465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.527 qpair failed and we were unable to recover it. 00:36:08.528 [2024-07-14 10:44:53.252658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.528 [2024-07-14 10:44:53.252688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.528 qpair failed and we were unable to recover it. 00:36:08.528 [2024-07-14 10:44:53.252933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.528 [2024-07-14 10:44:53.252963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.528 qpair failed and we were unable to recover it. 00:36:08.528 [2024-07-14 10:44:53.253208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.528 [2024-07-14 10:44:53.253249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.528 qpair failed and we were unable to recover it. 00:36:08.528 [2024-07-14 10:44:53.253441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.528 [2024-07-14 10:44:53.253471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.528 qpair failed and we were unable to recover it. 00:36:08.528 [2024-07-14 10:44:53.253657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.528 [2024-07-14 10:44:53.253686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.528 qpair failed and we were unable to recover it. 00:36:08.528 [2024-07-14 10:44:53.253799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.528 [2024-07-14 10:44:53.253829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.528 qpair failed and we were unable to recover it. 00:36:08.528 [2024-07-14 10:44:53.254068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.528 [2024-07-14 10:44:53.254098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.528 qpair failed and we were unable to recover it. 00:36:08.528 [2024-07-14 10:44:53.254363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.528 [2024-07-14 10:44:53.254393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.528 qpair failed and we were unable to recover it. 
00:36:08.528 [2024-07-14 10:44:53.254666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.528 [2024-07-14 10:44:53.254696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.528 qpair failed and we were unable to recover it. 00:36:08.528 [2024-07-14 10:44:53.254889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.528 [2024-07-14 10:44:53.254918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.528 qpair failed and we were unable to recover it. 00:36:08.528 [2024-07-14 10:44:53.255167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.528 [2024-07-14 10:44:53.255197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.528 qpair failed and we were unable to recover it. 00:36:08.528 [2024-07-14 10:44:53.255323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.528 [2024-07-14 10:44:53.255354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.528 qpair failed and we were unable to recover it. 00:36:08.528 [2024-07-14 10:44:53.255611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.528 [2024-07-14 10:44:53.255641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.528 qpair failed and we were unable to recover it. 00:36:08.528 [2024-07-14 10:44:53.255892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.528 [2024-07-14 10:44:53.255922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.528 qpair failed and we were unable to recover it. 00:36:08.528 [2024-07-14 10:44:53.256113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.528 [2024-07-14 10:44:53.256142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.528 qpair failed and we were unable to recover it. 00:36:08.528 [2024-07-14 10:44:53.256344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.528 [2024-07-14 10:44:53.256374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.528 qpair failed and we were unable to recover it. 00:36:08.528 [2024-07-14 10:44:53.256511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.528 [2024-07-14 10:44:53.256541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.528 qpair failed and we were unable to recover it. 00:36:08.528 [2024-07-14 10:44:53.256714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.528 [2024-07-14 10:44:53.256743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.528 qpair failed and we were unable to recover it. 
00:36:08.528 [2024-07-14 10:44:53.257030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.528 [2024-07-14 10:44:53.257059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.528 qpair failed and we were unable to recover it. 00:36:08.528 [2024-07-14 10:44:53.257192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.528 [2024-07-14 10:44:53.257221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.528 qpair failed and we were unable to recover it. 00:36:08.528 [2024-07-14 10:44:53.257352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.528 [2024-07-14 10:44:53.257382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.528 qpair failed and we were unable to recover it. 00:36:08.528 [2024-07-14 10:44:53.257623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.528 [2024-07-14 10:44:53.257653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.528 qpair failed and we were unable to recover it. 00:36:08.528 [2024-07-14 10:44:53.257855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.528 [2024-07-14 10:44:53.257885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.528 qpair failed and we were unable to recover it. 00:36:08.528 [2024-07-14 10:44:53.258191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.528 [2024-07-14 10:44:53.258221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.528 qpair failed and we were unable to recover it. 00:36:08.528 [2024-07-14 10:44:53.258425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.528 [2024-07-14 10:44:53.258455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.528 qpair failed and we were unable to recover it. 00:36:08.528 [2024-07-14 10:44:53.258587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.528 [2024-07-14 10:44:53.258617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.528 qpair failed and we were unable to recover it. 00:36:08.528 [2024-07-14 10:44:53.258792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.528 [2024-07-14 10:44:53.258821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.528 qpair failed and we were unable to recover it. 00:36:08.528 [2024-07-14 10:44:53.258959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.528 [2024-07-14 10:44:53.258989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.528 qpair failed and we were unable to recover it. 
00:36:08.528 [2024-07-14 10:44:53.259177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.528 [2024-07-14 10:44:53.259207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.528 qpair failed and we were unable to recover it. 00:36:08.528 [2024-07-14 10:44:53.259438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.528 [2024-07-14 10:44:53.259469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.528 qpair failed and we were unable to recover it. 00:36:08.528 [2024-07-14 10:44:53.259644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.528 [2024-07-14 10:44:53.259674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.528 qpair failed and we were unable to recover it. 00:36:08.528 [2024-07-14 10:44:53.259862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.528 [2024-07-14 10:44:53.259891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.528 qpair failed and we were unable to recover it. 00:36:08.528 [2024-07-14 10:44:53.259992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.528 [2024-07-14 10:44:53.260022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.528 qpair failed and we were unable to recover it. 00:36:08.528 [2024-07-14 10:44:53.260118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.528 [2024-07-14 10:44:53.260148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.528 qpair failed and we were unable to recover it. 00:36:08.528 [2024-07-14 10:44:53.260352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.528 [2024-07-14 10:44:53.260384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.528 qpair failed and we were unable to recover it. 00:36:08.528 [2024-07-14 10:44:53.260589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.528 [2024-07-14 10:44:53.260619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.528 qpair failed and we were unable to recover it. 00:36:08.529 [2024-07-14 10:44:53.260805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.529 [2024-07-14 10:44:53.260839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.529 qpair failed and we were unable to recover it. 00:36:08.529 [2024-07-14 10:44:53.260958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.529 [2024-07-14 10:44:53.260987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.529 qpair failed and we were unable to recover it. 
00:36:08.529 [2024-07-14 10:44:53.261282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.529 [2024-07-14 10:44:53.261312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.529 qpair failed and we were unable to recover it. 00:36:08.529 [2024-07-14 10:44:53.261575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.529 [2024-07-14 10:44:53.261604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.529 qpair failed and we were unable to recover it. 00:36:08.529 [2024-07-14 10:44:53.261730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.529 [2024-07-14 10:44:53.261760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.529 qpair failed and we were unable to recover it. 00:36:08.529 [2024-07-14 10:44:53.261888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.529 [2024-07-14 10:44:53.261918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.529 qpair failed and we were unable to recover it. 00:36:08.529 [2024-07-14 10:44:53.262034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.529 [2024-07-14 10:44:53.262063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.529 qpair failed and we were unable to recover it. 00:36:08.529 [2024-07-14 10:44:53.262258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.529 [2024-07-14 10:44:53.262289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.529 qpair failed and we were unable to recover it. 00:36:08.529 [2024-07-14 10:44:53.262502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.529 [2024-07-14 10:44:53.262532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.529 qpair failed and we were unable to recover it. 00:36:08.529 [2024-07-14 10:44:53.262823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.529 [2024-07-14 10:44:53.262852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.529 qpair failed and we were unable to recover it. 00:36:08.529 [2024-07-14 10:44:53.263111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.529 [2024-07-14 10:44:53.263141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.529 qpair failed and we were unable to recover it. 00:36:08.529 [2024-07-14 10:44:53.263280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.529 [2024-07-14 10:44:53.263310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.529 qpair failed and we were unable to recover it. 
00:36:08.529 [2024-07-14 10:44:53.263551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.529 [2024-07-14 10:44:53.263580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.529 qpair failed and we were unable to recover it. 00:36:08.529 [2024-07-14 10:44:53.263832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.529 [2024-07-14 10:44:53.263861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.529 qpair failed and we were unable to recover it. 00:36:08.529 [2024-07-14 10:44:53.264070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.529 [2024-07-14 10:44:53.264100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.529 qpair failed and we were unable to recover it. 00:36:08.529 [2024-07-14 10:44:53.264294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.529 [2024-07-14 10:44:53.264324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.529 qpair failed and we were unable to recover it. 00:36:08.529 [2024-07-14 10:44:53.264527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.529 [2024-07-14 10:44:53.264556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.529 qpair failed and we were unable to recover it. 00:36:08.529 [2024-07-14 10:44:53.264840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.529 [2024-07-14 10:44:53.264869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.529 qpair failed and we were unable to recover it. 00:36:08.529 [2024-07-14 10:44:53.265086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.529 [2024-07-14 10:44:53.265116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.529 qpair failed and we were unable to recover it. 00:36:08.529 [2024-07-14 10:44:53.265317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.529 [2024-07-14 10:44:53.265347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.529 qpair failed and we were unable to recover it. 00:36:08.529 [2024-07-14 10:44:53.265470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.529 [2024-07-14 10:44:53.265500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.529 qpair failed and we were unable to recover it. 00:36:08.529 [2024-07-14 10:44:53.265696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.529 [2024-07-14 10:44:53.265726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.529 qpair failed and we were unable to recover it. 
00:36:08.529 [2024-07-14 10:44:53.265980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.529 [2024-07-14 10:44:53.266009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.529 qpair failed and we were unable to recover it. 00:36:08.529 [2024-07-14 10:44:53.266208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.529 [2024-07-14 10:44:53.266245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.529 qpair failed and we were unable to recover it. 00:36:08.529 [2024-07-14 10:44:53.266461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.529 [2024-07-14 10:44:53.266490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.529 qpair failed and we were unable to recover it. 00:36:08.529 [2024-07-14 10:44:53.266720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.529 [2024-07-14 10:44:53.266750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.529 qpair failed and we were unable to recover it. 00:36:08.529 [2024-07-14 10:44:53.266945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.529 [2024-07-14 10:44:53.266975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.529 qpair failed and we were unable to recover it. 00:36:08.529 [2024-07-14 10:44:53.267182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.529 [2024-07-14 10:44:53.267211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.529 qpair failed and we were unable to recover it. 00:36:08.529 [2024-07-14 10:44:53.267364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.529 [2024-07-14 10:44:53.267394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.529 qpair failed and we were unable to recover it. 00:36:08.529 [2024-07-14 10:44:53.267598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.529 [2024-07-14 10:44:53.267628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.529 qpair failed and we were unable to recover it. 00:36:08.529 [2024-07-14 10:44:53.267808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.529 [2024-07-14 10:44:53.267837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.529 qpair failed and we were unable to recover it. 00:36:08.529 [2024-07-14 10:44:53.268027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.529 [2024-07-14 10:44:53.268057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.529 qpair failed and we were unable to recover it. 
00:36:08.529 [2024-07-14 10:44:53.268248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.529 [2024-07-14 10:44:53.268278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.529 qpair failed and we were unable to recover it. 00:36:08.529 [2024-07-14 10:44:53.268412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.529 [2024-07-14 10:44:53.268442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.529 qpair failed and we were unable to recover it. 00:36:08.529 [2024-07-14 10:44:53.268620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.529 [2024-07-14 10:44:53.268651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.529 qpair failed and we were unable to recover it. 00:36:08.529 [2024-07-14 10:44:53.268766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.529 [2024-07-14 10:44:53.268795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.529 qpair failed and we were unable to recover it. 00:36:08.529 [2024-07-14 10:44:53.268986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.529 [2024-07-14 10:44:53.269016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.529 qpair failed and we were unable to recover it. 00:36:08.529 [2024-07-14 10:44:53.269126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.529 [2024-07-14 10:44:53.269155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.529 qpair failed and we were unable to recover it. 00:36:08.529 [2024-07-14 10:44:53.269388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.529 [2024-07-14 10:44:53.269418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.529 qpair failed and we were unable to recover it. 00:36:08.529 [2024-07-14 10:44:53.269601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.529 [2024-07-14 10:44:53.269630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.529 qpair failed and we were unable to recover it. 00:36:08.529 [2024-07-14 10:44:53.269874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.529 [2024-07-14 10:44:53.269908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.529 qpair failed and we were unable to recover it. 00:36:08.530 [2024-07-14 10:44:53.270111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.530 [2024-07-14 10:44:53.270141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.530 qpair failed and we were unable to recover it. 
00:36:08.530 [2024-07-14 10:44:53.270337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:08.530 [2024-07-14 10:44:53.270367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420
00:36:08.530 qpair failed and we were unable to recover it.
[the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence for tqpair=0x7fbe7c000b90 (addr=10.0.0.2, port=4420) repeats continuously here with successive timestamps]
00:36:08.535 [2024-07-14 10:44:53.313707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:08.535 [2024-07-14 10:44:53.313737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420
00:36:08.535 qpair failed and we were unable to recover it.
00:36:08.535 [2024-07-14 10:44:53.313932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.535 [2024-07-14 10:44:53.313962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.535 qpair failed and we were unable to recover it. 00:36:08.535 [2024-07-14 10:44:53.314210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.535 [2024-07-14 10:44:53.314252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.535 qpair failed and we were unable to recover it. 00:36:08.535 [2024-07-14 10:44:53.314456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.535 [2024-07-14 10:44:53.314486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.535 qpair failed and we were unable to recover it. 00:36:08.535 [2024-07-14 10:44:53.314603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.535 [2024-07-14 10:44:53.314632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.535 qpair failed and we were unable to recover it. 00:36:08.535 [2024-07-14 10:44:53.314811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.535 [2024-07-14 10:44:53.314841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.535 qpair failed and we were unable to recover it. 00:36:08.535 [2024-07-14 10:44:53.315019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.535 [2024-07-14 10:44:53.315049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.535 qpair failed and we were unable to recover it. 00:36:08.535 [2024-07-14 10:44:53.315297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.535 [2024-07-14 10:44:53.315328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.535 qpair failed and we were unable to recover it. 00:36:08.535 [2024-07-14 10:44:53.315518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.535 [2024-07-14 10:44:53.315547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.535 qpair failed and we were unable to recover it. 00:36:08.535 [2024-07-14 10:44:53.315722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.535 [2024-07-14 10:44:53.315752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.535 qpair failed and we were unable to recover it. 00:36:08.535 [2024-07-14 10:44:53.316014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.535 [2024-07-14 10:44:53.316044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.535 qpair failed and we were unable to recover it. 
00:36:08.535 [2024-07-14 10:44:53.316181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.535 [2024-07-14 10:44:53.316211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.535 qpair failed and we were unable to recover it. 00:36:08.535 [2024-07-14 10:44:53.316405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.535 [2024-07-14 10:44:53.316435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.535 qpair failed and we were unable to recover it. 00:36:08.535 [2024-07-14 10:44:53.316632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.535 [2024-07-14 10:44:53.316662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.535 qpair failed and we were unable to recover it. 00:36:08.535 [2024-07-14 10:44:53.316807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.535 [2024-07-14 10:44:53.316837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.535 qpair failed and we were unable to recover it. 00:36:08.535 [2024-07-14 10:44:53.316957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.535 [2024-07-14 10:44:53.316987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.535 qpair failed and we were unable to recover it. 00:36:08.535 [2024-07-14 10:44:53.317123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.535 [2024-07-14 10:44:53.317170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.535 qpair failed and we were unable to recover it. 00:36:08.535 [2024-07-14 10:44:53.317392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.535 [2024-07-14 10:44:53.317423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.535 qpair failed and we were unable to recover it. 00:36:08.535 [2024-07-14 10:44:53.317608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.535 [2024-07-14 10:44:53.317639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.535 qpair failed and we were unable to recover it. 00:36:08.535 [2024-07-14 10:44:53.317882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.535 [2024-07-14 10:44:53.317912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.535 qpair failed and we were unable to recover it. 00:36:08.535 [2024-07-14 10:44:53.318161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.535 [2024-07-14 10:44:53.318190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.535 qpair failed and we were unable to recover it. 
00:36:08.535 [2024-07-14 10:44:53.318326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.535 [2024-07-14 10:44:53.318357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.535 qpair failed and we were unable to recover it. 00:36:08.535 [2024-07-14 10:44:53.318478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.535 [2024-07-14 10:44:53.318508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.535 qpair failed and we were unable to recover it. 00:36:08.535 [2024-07-14 10:44:53.318634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.535 [2024-07-14 10:44:53.318664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.535 qpair failed and we were unable to recover it. 00:36:08.535 [2024-07-14 10:44:53.318870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.535 [2024-07-14 10:44:53.318900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.535 qpair failed and we were unable to recover it. 00:36:08.535 [2024-07-14 10:44:53.319076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.535 [2024-07-14 10:44:53.319105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.535 qpair failed and we were unable to recover it. 00:36:08.535 [2024-07-14 10:44:53.319297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.535 [2024-07-14 10:44:53.319328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.535 qpair failed and we were unable to recover it. 00:36:08.535 [2024-07-14 10:44:53.319503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.535 [2024-07-14 10:44:53.319533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.535 qpair failed and we were unable to recover it. 00:36:08.535 [2024-07-14 10:44:53.319724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.535 [2024-07-14 10:44:53.319754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.536 qpair failed and we were unable to recover it. 00:36:08.536 [2024-07-14 10:44:53.319977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.536 [2024-07-14 10:44:53.320015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.536 qpair failed and we were unable to recover it. 00:36:08.536 [2024-07-14 10:44:53.320127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.536 [2024-07-14 10:44:53.320157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.536 qpair failed and we were unable to recover it. 
00:36:08.536 [2024-07-14 10:44:53.320330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.536 [2024-07-14 10:44:53.320359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.536 qpair failed and we were unable to recover it. 00:36:08.536 [2024-07-14 10:44:53.320502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.536 [2024-07-14 10:44:53.320532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.536 qpair failed and we were unable to recover it. 00:36:08.536 [2024-07-14 10:44:53.320640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.536 [2024-07-14 10:44:53.320669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.536 qpair failed and we were unable to recover it. 00:36:08.536 [2024-07-14 10:44:53.320859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.536 [2024-07-14 10:44:53.320889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.536 qpair failed and we were unable to recover it. 00:36:08.536 [2024-07-14 10:44:53.321019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.536 [2024-07-14 10:44:53.321050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.536 qpair failed and we were unable to recover it. 00:36:08.536 [2024-07-14 10:44:53.321233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.536 [2024-07-14 10:44:53.321263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.536 qpair failed and we were unable to recover it. 00:36:08.536 [2024-07-14 10:44:53.321386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.536 [2024-07-14 10:44:53.321416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.536 qpair failed and we were unable to recover it. 00:36:08.536 [2024-07-14 10:44:53.321598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.536 [2024-07-14 10:44:53.321628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.536 qpair failed and we were unable to recover it. 00:36:08.536 [2024-07-14 10:44:53.321826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.536 [2024-07-14 10:44:53.321855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.536 qpair failed and we were unable to recover it. 00:36:08.536 [2024-07-14 10:44:53.322045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.536 [2024-07-14 10:44:53.322075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.536 qpair failed and we were unable to recover it. 
00:36:08.536 [2024-07-14 10:44:53.322293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.536 [2024-07-14 10:44:53.322323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.536 qpair failed and we were unable to recover it. 00:36:08.536 [2024-07-14 10:44:53.322510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.536 [2024-07-14 10:44:53.322540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.536 qpair failed and we were unable to recover it. 00:36:08.536 [2024-07-14 10:44:53.322662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.536 [2024-07-14 10:44:53.322692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.536 qpair failed and we were unable to recover it. 00:36:08.536 [2024-07-14 10:44:53.322896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.536 [2024-07-14 10:44:53.322926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.536 qpair failed and we were unable to recover it. 00:36:08.536 [2024-07-14 10:44:53.323188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.536 [2024-07-14 10:44:53.323217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.536 qpair failed and we were unable to recover it. 00:36:08.536 [2024-07-14 10:44:53.323403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.536 [2024-07-14 10:44:53.323433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.536 qpair failed and we were unable to recover it. 00:36:08.536 [2024-07-14 10:44:53.323649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.536 [2024-07-14 10:44:53.323679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.536 qpair failed and we were unable to recover it. 00:36:08.536 [2024-07-14 10:44:53.323892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.536 [2024-07-14 10:44:53.323922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.536 qpair failed and we were unable to recover it. 00:36:08.536 [2024-07-14 10:44:53.324044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.536 [2024-07-14 10:44:53.324074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.536 qpair failed and we were unable to recover it. 00:36:08.536 [2024-07-14 10:44:53.324273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.536 [2024-07-14 10:44:53.324304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.536 qpair failed and we were unable to recover it. 
00:36:08.536 [2024-07-14 10:44:53.324495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.536 [2024-07-14 10:44:53.324525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.536 qpair failed and we were unable to recover it. 00:36:08.536 [2024-07-14 10:44:53.324711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.536 [2024-07-14 10:44:53.324741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.536 qpair failed and we were unable to recover it. 00:36:08.536 [2024-07-14 10:44:53.324869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.536 [2024-07-14 10:44:53.324899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.536 qpair failed and we were unable to recover it. 00:36:08.536 [2024-07-14 10:44:53.325163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.536 [2024-07-14 10:44:53.325193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.536 qpair failed and we were unable to recover it. 00:36:08.536 [2024-07-14 10:44:53.325447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.536 [2024-07-14 10:44:53.325479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.536 qpair failed and we were unable to recover it. 00:36:08.536 [2024-07-14 10:44:53.325744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.536 [2024-07-14 10:44:53.325774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.536 qpair failed and we were unable to recover it. 00:36:08.536 [2024-07-14 10:44:53.325968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.536 [2024-07-14 10:44:53.325998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.536 qpair failed and we were unable to recover it. 00:36:08.536 [2024-07-14 10:44:53.326182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.536 [2024-07-14 10:44:53.326212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.536 qpair failed and we were unable to recover it. 00:36:08.536 [2024-07-14 10:44:53.326337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.536 [2024-07-14 10:44:53.326367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.536 qpair failed and we were unable to recover it. 00:36:08.536 [2024-07-14 10:44:53.326544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.536 [2024-07-14 10:44:53.326574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.536 qpair failed and we were unable to recover it. 
00:36:08.536 [2024-07-14 10:44:53.326817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.536 [2024-07-14 10:44:53.326846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.536 qpair failed and we were unable to recover it. 00:36:08.536 [2024-07-14 10:44:53.327092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.536 [2024-07-14 10:44:53.327122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.536 qpair failed and we were unable to recover it. 00:36:08.536 [2024-07-14 10:44:53.327414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.536 [2024-07-14 10:44:53.327444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.536 qpair failed and we were unable to recover it. 00:36:08.536 [2024-07-14 10:44:53.327583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.536 [2024-07-14 10:44:53.327613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.536 qpair failed and we were unable to recover it. 00:36:08.536 [2024-07-14 10:44:53.327810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.536 [2024-07-14 10:44:53.327841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.536 qpair failed and we were unable to recover it. 00:36:08.536 [2024-07-14 10:44:53.328053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.536 [2024-07-14 10:44:53.328083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.536 qpair failed and we were unable to recover it. 00:36:08.536 [2024-07-14 10:44:53.328207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.536 [2024-07-14 10:44:53.328247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.536 qpair failed and we were unable to recover it. 00:36:08.536 [2024-07-14 10:44:53.328464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.536 [2024-07-14 10:44:53.328494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.536 qpair failed and we were unable to recover it. 00:36:08.536 [2024-07-14 10:44:53.328694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.537 [2024-07-14 10:44:53.328729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.537 qpair failed and we were unable to recover it. 00:36:08.537 [2024-07-14 10:44:53.328837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.537 [2024-07-14 10:44:53.328867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.537 qpair failed and we were unable to recover it. 
00:36:08.537 [2024-07-14 10:44:53.328979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.537 [2024-07-14 10:44:53.329008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.537 qpair failed and we were unable to recover it. 00:36:08.537 [2024-07-14 10:44:53.329197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.537 [2024-07-14 10:44:53.329235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.537 qpair failed and we were unable to recover it. 00:36:08.537 [2024-07-14 10:44:53.329421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.537 [2024-07-14 10:44:53.329451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.537 qpair failed and we were unable to recover it. 00:36:08.537 [2024-07-14 10:44:53.329701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.537 [2024-07-14 10:44:53.329731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.537 qpair failed and we were unable to recover it. 00:36:08.537 [2024-07-14 10:44:53.329864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.537 [2024-07-14 10:44:53.329894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.537 qpair failed and we were unable to recover it. 00:36:08.537 [2024-07-14 10:44:53.330143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.537 [2024-07-14 10:44:53.330173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.537 qpair failed and we were unable to recover it. 00:36:08.537 [2024-07-14 10:44:53.330333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.537 [2024-07-14 10:44:53.330364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.537 qpair failed and we were unable to recover it. 00:36:08.537 [2024-07-14 10:44:53.330607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.537 [2024-07-14 10:44:53.330637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.537 qpair failed and we were unable to recover it. 00:36:08.537 [2024-07-14 10:44:53.330811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.537 [2024-07-14 10:44:53.330841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.537 qpair failed and we were unable to recover it. 00:36:08.537 [2024-07-14 10:44:53.331112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.537 [2024-07-14 10:44:53.331143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.537 qpair failed and we were unable to recover it. 
00:36:08.537 [2024-07-14 10:44:53.331389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.537 [2024-07-14 10:44:53.331419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.537 qpair failed and we were unable to recover it. 00:36:08.537 [2024-07-14 10:44:53.331631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.537 [2024-07-14 10:44:53.331661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.537 qpair failed and we were unable to recover it. 00:36:08.537 [2024-07-14 10:44:53.331930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.537 [2024-07-14 10:44:53.331960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.537 qpair failed and we were unable to recover it. 00:36:08.537 [2024-07-14 10:44:53.332203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.537 [2024-07-14 10:44:53.332242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.537 qpair failed and we were unable to recover it. 00:36:08.537 [2024-07-14 10:44:53.332415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.537 [2024-07-14 10:44:53.332445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.537 qpair failed and we were unable to recover it. 00:36:08.537 [2024-07-14 10:44:53.332595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.537 [2024-07-14 10:44:53.332625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.537 qpair failed and we were unable to recover it. 00:36:08.537 [2024-07-14 10:44:53.332837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.537 [2024-07-14 10:44:53.332867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.537 qpair failed and we were unable to recover it. 00:36:08.537 [2024-07-14 10:44:53.333046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.537 [2024-07-14 10:44:53.333076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.537 qpair failed and we were unable to recover it. 00:36:08.537 [2024-07-14 10:44:53.333287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.537 [2024-07-14 10:44:53.333318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.537 qpair failed and we were unable to recover it. 00:36:08.537 [2024-07-14 10:44:53.333511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.537 [2024-07-14 10:44:53.333541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.537 qpair failed and we were unable to recover it. 
00:36:08.537 [2024-07-14 10:44:53.333665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.537 [2024-07-14 10:44:53.333695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.537 qpair failed and we were unable to recover it. 00:36:08.537 [2024-07-14 10:44:53.333877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.537 [2024-07-14 10:44:53.333908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.537 qpair failed and we were unable to recover it. 00:36:08.537 [2024-07-14 10:44:53.334167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.537 [2024-07-14 10:44:53.334197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.537 qpair failed and we were unable to recover it. 00:36:08.537 [2024-07-14 10:44:53.334421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.537 [2024-07-14 10:44:53.334456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.537 qpair failed and we were unable to recover it. 00:36:08.537 [2024-07-14 10:44:53.334702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.537 [2024-07-14 10:44:53.334732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.537 qpair failed and we were unable to recover it. 00:36:08.537 [2024-07-14 10:44:53.334996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.537 [2024-07-14 10:44:53.335046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.537 qpair failed and we were unable to recover it. 00:36:08.537 [2024-07-14 10:44:53.335322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.537 [2024-07-14 10:44:53.335357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.537 qpair failed and we were unable to recover it. 00:36:08.537 [2024-07-14 10:44:53.335496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.537 [2024-07-14 10:44:53.335527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.537 qpair failed and we were unable to recover it. 00:36:08.537 [2024-07-14 10:44:53.335646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.537 [2024-07-14 10:44:53.335676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.537 qpair failed and we were unable to recover it. 00:36:08.537 [2024-07-14 10:44:53.335887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.537 [2024-07-14 10:44:53.335918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.537 qpair failed and we were unable to recover it. 
00:36:08.537 [2024-07-14 10:44:53.336062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.537 [2024-07-14 10:44:53.336091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.537 qpair failed and we were unable to recover it. 00:36:08.537 [2024-07-14 10:44:53.336335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.537 [2024-07-14 10:44:53.336366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.537 qpair failed and we were unable to recover it. 00:36:08.537 [2024-07-14 10:44:53.336502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.537 [2024-07-14 10:44:53.336532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.537 qpair failed and we were unable to recover it. 00:36:08.537 [2024-07-14 10:44:53.336737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.537 [2024-07-14 10:44:53.336767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.538 qpair failed and we were unable to recover it. 00:36:08.538 [2024-07-14 10:44:53.336893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.538 [2024-07-14 10:44:53.336923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.538 qpair failed and we were unable to recover it. 00:36:08.538 [2024-07-14 10:44:53.337106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.538 [2024-07-14 10:44:53.337136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.538 qpair failed and we were unable to recover it. 00:36:08.538 [2024-07-14 10:44:53.337259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.538 [2024-07-14 10:44:53.337290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.538 qpair failed and we were unable to recover it. 00:36:08.538 [2024-07-14 10:44:53.337491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.538 [2024-07-14 10:44:53.337521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.538 qpair failed and we were unable to recover it. 00:36:08.538 [2024-07-14 10:44:53.337643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.538 [2024-07-14 10:44:53.337673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.538 qpair failed and we were unable to recover it. 00:36:08.538 [2024-07-14 10:44:53.337867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.538 [2024-07-14 10:44:53.337897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.538 qpair failed and we were unable to recover it. 
00:36:08.538 [2024-07-14 10:44:53.338087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.538 [2024-07-14 10:44:53.338117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.538 qpair failed and we were unable to recover it. 00:36:08.538 [2024-07-14 10:44:53.338248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.538 [2024-07-14 10:44:53.338280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.538 qpair failed and we were unable to recover it. 00:36:08.538 [2024-07-14 10:44:53.338459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.538 [2024-07-14 10:44:53.338490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.538 qpair failed and we were unable to recover it. 00:36:08.538 [2024-07-14 10:44:53.338677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.538 [2024-07-14 10:44:53.338707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.538 qpair failed and we were unable to recover it. 00:36:08.538 [2024-07-14 10:44:53.338958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.538 [2024-07-14 10:44:53.338988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.538 qpair failed and we were unable to recover it. 00:36:08.538 [2024-07-14 10:44:53.339186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.538 [2024-07-14 10:44:53.339216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.538 qpair failed and we were unable to recover it. 00:36:08.538 [2024-07-14 10:44:53.339414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.538 [2024-07-14 10:44:53.339444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.538 qpair failed and we were unable to recover it. 00:36:08.538 [2024-07-14 10:44:53.339709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.538 [2024-07-14 10:44:53.339739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.538 qpair failed and we were unable to recover it. 00:36:08.538 [2024-07-14 10:44:53.339873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.538 [2024-07-14 10:44:53.339903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.538 qpair failed and we were unable to recover it. 00:36:08.538 [2024-07-14 10:44:53.340077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.538 [2024-07-14 10:44:53.340107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.538 qpair failed and we were unable to recover it. 
00:36:08.538 [2024-07-14 10:44:53.340300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.538 [2024-07-14 10:44:53.340331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.538 qpair failed and we were unable to recover it. 00:36:08.538 [2024-07-14 10:44:53.340465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.538 [2024-07-14 10:44:53.340495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.538 qpair failed and we were unable to recover it. 00:36:08.538 [2024-07-14 10:44:53.340678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.538 [2024-07-14 10:44:53.340713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.538 qpair failed and we were unable to recover it. 00:36:08.538 [2024-07-14 10:44:53.340889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.538 [2024-07-14 10:44:53.340918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.538 qpair failed and we were unable to recover it. 00:36:08.538 [2024-07-14 10:44:53.341163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.538 [2024-07-14 10:44:53.341193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.538 qpair failed and we were unable to recover it. 00:36:08.538 [2024-07-14 10:44:53.341390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.538 [2024-07-14 10:44:53.341421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.538 qpair failed and we were unable to recover it. 00:36:08.538 [2024-07-14 10:44:53.341663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.538 [2024-07-14 10:44:53.341693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.538 qpair failed and we were unable to recover it. 00:36:08.538 [2024-07-14 10:44:53.341866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.538 [2024-07-14 10:44:53.341896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.538 qpair failed and we were unable to recover it. 00:36:08.538 [2024-07-14 10:44:53.342126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.538 [2024-07-14 10:44:53.342155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.538 qpair failed and we were unable to recover it. 00:36:08.538 [2024-07-14 10:44:53.342410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.538 [2024-07-14 10:44:53.342442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.538 qpair failed and we were unable to recover it. 
00:36:08.538 [2024-07-14 10:44:53.342665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.538 [2024-07-14 10:44:53.342704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.538 qpair failed and we were unable to recover it. 00:36:08.538 [2024-07-14 10:44:53.342822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.538 [2024-07-14 10:44:53.342852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.538 qpair failed and we were unable to recover it. 00:36:08.538 [2024-07-14 10:44:53.343025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.538 [2024-07-14 10:44:53.343055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.538 qpair failed and we were unable to recover it. 00:36:08.538 [2024-07-14 10:44:53.343183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.538 [2024-07-14 10:44:53.343213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.538 qpair failed and we were unable to recover it. 00:36:08.538 [2024-07-14 10:44:53.343490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.538 [2024-07-14 10:44:53.343519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.538 qpair failed and we were unable to recover it. 00:36:08.538 [2024-07-14 10:44:53.343693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.538 [2024-07-14 10:44:53.343722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.538 qpair failed and we were unable to recover it. 00:36:08.538 [2024-07-14 10:44:53.343939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.538 [2024-07-14 10:44:53.343969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.538 qpair failed and we were unable to recover it. 00:36:08.538 [2024-07-14 10:44:53.344144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.538 [2024-07-14 10:44:53.344174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.538 qpair failed and we were unable to recover it. 00:36:08.538 [2024-07-14 10:44:53.344436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.538 [2024-07-14 10:44:53.344469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.538 qpair failed and we were unable to recover it. 00:36:08.538 [2024-07-14 10:44:53.344735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.538 [2024-07-14 10:44:53.344765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.538 qpair failed and we were unable to recover it. 
00:36:08.538 [2024-07-14 10:44:53.345017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.538 [2024-07-14 10:44:53.345047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.538 qpair failed and we were unable to recover it. 00:36:08.538 [2024-07-14 10:44:53.345182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.538 [2024-07-14 10:44:53.345211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.538 qpair failed and we were unable to recover it. 00:36:08.538 [2024-07-14 10:44:53.345410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.538 [2024-07-14 10:44:53.345442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.538 qpair failed and we were unable to recover it. 00:36:08.538 [2024-07-14 10:44:53.345655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.538 [2024-07-14 10:44:53.345685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.538 qpair failed and we were unable to recover it. 00:36:08.538 [2024-07-14 10:44:53.345927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.538 [2024-07-14 10:44:53.345958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.538 qpair failed and we were unable to recover it. 00:36:08.539 [2024-07-14 10:44:53.346091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.539 [2024-07-14 10:44:53.346122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.539 qpair failed and we were unable to recover it. 00:36:08.539 [2024-07-14 10:44:53.346311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.539 [2024-07-14 10:44:53.346343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.539 qpair failed and we were unable to recover it. 00:36:08.539 [2024-07-14 10:44:53.346560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.539 [2024-07-14 10:44:53.346589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.539 qpair failed and we were unable to recover it. 00:36:08.539 [2024-07-14 10:44:53.346764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.539 [2024-07-14 10:44:53.346794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.539 qpair failed and we were unable to recover it. 00:36:08.539 [2024-07-14 10:44:53.347011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.539 [2024-07-14 10:44:53.347046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.539 qpair failed and we were unable to recover it. 
00:36:08.539 [2024-07-14 10:44:53.347234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.539 [2024-07-14 10:44:53.347264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.539 qpair failed and we were unable to recover it. 00:36:08.539 [2024-07-14 10:44:53.347389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.539 [2024-07-14 10:44:53.347419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.539 qpair failed and we were unable to recover it. 00:36:08.539 [2024-07-14 10:44:53.347629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.539 [2024-07-14 10:44:53.347658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.539 qpair failed and we were unable to recover it. 00:36:08.539 [2024-07-14 10:44:53.347862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.539 [2024-07-14 10:44:53.347892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.539 qpair failed and we were unable to recover it. 00:36:08.539 [2024-07-14 10:44:53.348135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.539 [2024-07-14 10:44:53.348166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.539 qpair failed and we were unable to recover it. 00:36:08.539 [2024-07-14 10:44:53.348299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.539 [2024-07-14 10:44:53.348330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.539 qpair failed and we were unable to recover it. 00:36:08.539 [2024-07-14 10:44:53.348508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.539 [2024-07-14 10:44:53.348538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.539 qpair failed and we were unable to recover it. 00:36:08.539 [2024-07-14 10:44:53.348730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.539 [2024-07-14 10:44:53.348760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.539 qpair failed and we were unable to recover it. 00:36:08.539 [2024-07-14 10:44:53.348965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.539 [2024-07-14 10:44:53.348995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.539 qpair failed and we were unable to recover it. 00:36:08.539 [2024-07-14 10:44:53.349240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.539 [2024-07-14 10:44:53.349271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.539 qpair failed and we were unable to recover it. 
00:36:08.539 [2024-07-14 10:44:53.349456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.539 [2024-07-14 10:44:53.349487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.539 qpair failed and we were unable to recover it. 00:36:08.539 [2024-07-14 10:44:53.349758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.539 [2024-07-14 10:44:53.349787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.539 qpair failed and we were unable to recover it. 00:36:08.539 [2024-07-14 10:44:53.349978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.539 [2024-07-14 10:44:53.350008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.539 qpair failed and we were unable to recover it. 00:36:08.539 [2024-07-14 10:44:53.350187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.539 [2024-07-14 10:44:53.350217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.539 qpair failed and we were unable to recover it. 00:36:08.539 [2024-07-14 10:44:53.350502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.539 [2024-07-14 10:44:53.350533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.539 qpair failed and we were unable to recover it. 00:36:08.539 [2024-07-14 10:44:53.350800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.539 [2024-07-14 10:44:53.350830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.539 qpair failed and we were unable to recover it. 00:36:08.539 [2024-07-14 10:44:53.351010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.539 [2024-07-14 10:44:53.351040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.539 qpair failed and we were unable to recover it. 00:36:08.539 [2024-07-14 10:44:53.351249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.539 [2024-07-14 10:44:53.351280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.539 qpair failed and we were unable to recover it. 00:36:08.539 [2024-07-14 10:44:53.351454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.539 [2024-07-14 10:44:53.351484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.539 qpair failed and we were unable to recover it. 00:36:08.539 [2024-07-14 10:44:53.351624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.539 [2024-07-14 10:44:53.351654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.539 qpair failed and we were unable to recover it. 
00:36:08.539 [2024-07-14 10:44:53.351845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.539 [2024-07-14 10:44:53.351875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.539 qpair failed and we were unable to recover it. 00:36:08.539 [2024-07-14 10:44:53.352007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.539 [2024-07-14 10:44:53.352037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.539 qpair failed and we were unable to recover it. 00:36:08.539 [2024-07-14 10:44:53.352236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.539 [2024-07-14 10:44:53.352267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.539 qpair failed and we were unable to recover it. 00:36:08.539 [2024-07-14 10:44:53.352390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.539 [2024-07-14 10:44:53.352420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.539 qpair failed and we were unable to recover it. 00:36:08.539 [2024-07-14 10:44:53.352618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.539 [2024-07-14 10:44:53.352647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.539 qpair failed and we were unable to recover it. 00:36:08.539 [2024-07-14 10:44:53.352826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.539 [2024-07-14 10:44:53.352856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.539 qpair failed and we were unable to recover it. 00:36:08.539 [2024-07-14 10:44:53.352979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.539 [2024-07-14 10:44:53.353015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.539 qpair failed and we were unable to recover it. 00:36:08.539 [2024-07-14 10:44:53.353120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.539 [2024-07-14 10:44:53.353150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.539 qpair failed and we were unable to recover it. 00:36:08.539 [2024-07-14 10:44:53.353397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.539 [2024-07-14 10:44:53.353428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.539 qpair failed and we were unable to recover it. 00:36:08.539 [2024-07-14 10:44:53.353544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.539 [2024-07-14 10:44:53.353574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.539 qpair failed and we were unable to recover it. 
00:36:08.539 [2024-07-14 10:44:53.353701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.539 [2024-07-14 10:44:53.353730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.539 qpair failed and we were unable to recover it. 00:36:08.539 [2024-07-14 10:44:53.354002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.539 [2024-07-14 10:44:53.354031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.539 qpair failed and we were unable to recover it. 00:36:08.539 [2024-07-14 10:44:53.354153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.539 [2024-07-14 10:44:53.354184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.539 qpair failed and we were unable to recover it. 00:36:08.539 [2024-07-14 10:44:53.354444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.539 [2024-07-14 10:44:53.354475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.539 qpair failed and we were unable to recover it. 00:36:08.539 [2024-07-14 10:44:53.354662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.539 [2024-07-14 10:44:53.354692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.539 qpair failed and we were unable to recover it. 00:36:08.539 [2024-07-14 10:44:53.354983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.539 [2024-07-14 10:44:53.355013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.539 qpair failed and we were unable to recover it. 00:36:08.540 [2024-07-14 10:44:53.355266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.540 [2024-07-14 10:44:53.355297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.540 qpair failed and we were unable to recover it. 00:36:08.540 [2024-07-14 10:44:53.355476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.540 [2024-07-14 10:44:53.355506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.540 qpair failed and we were unable to recover it. 00:36:08.540 [2024-07-14 10:44:53.355750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.540 [2024-07-14 10:44:53.355780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.540 qpair failed and we were unable to recover it. 00:36:08.540 [2024-07-14 10:44:53.355967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.540 [2024-07-14 10:44:53.355997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.540 qpair failed and we were unable to recover it. 
00:36:08.540 [2024-07-14 10:44:53.356144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.540 [2024-07-14 10:44:53.356174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.540 qpair failed and we were unable to recover it. 00:36:08.540 [2024-07-14 10:44:53.356335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.540 [2024-07-14 10:44:53.356367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.540 qpair failed and we were unable to recover it. 00:36:08.540 [2024-07-14 10:44:53.356575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.540 [2024-07-14 10:44:53.356605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.540 qpair failed and we were unable to recover it. 00:36:08.540 [2024-07-14 10:44:53.356796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.540 [2024-07-14 10:44:53.356826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.540 qpair failed and we were unable to recover it. 00:36:08.540 [2024-07-14 10:44:53.356933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.540 [2024-07-14 10:44:53.356964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.540 qpair failed and we were unable to recover it. 00:36:08.540 [2024-07-14 10:44:53.357177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.540 [2024-07-14 10:44:53.357207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.540 qpair failed and we were unable to recover it. 00:36:08.540 [2024-07-14 10:44:53.357415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.540 [2024-07-14 10:44:53.357445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.540 qpair failed and we were unable to recover it. 00:36:08.540 [2024-07-14 10:44:53.357633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.540 [2024-07-14 10:44:53.357662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.540 qpair failed and we were unable to recover it. 00:36:08.540 [2024-07-14 10:44:53.357926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.540 [2024-07-14 10:44:53.357956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.540 qpair failed and we were unable to recover it. 00:36:08.540 [2024-07-14 10:44:53.358068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.540 [2024-07-14 10:44:53.358099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.540 qpair failed and we were unable to recover it. 
00:36:08.540 [2024-07-14 10:44:53.358222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.540 [2024-07-14 10:44:53.358274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.540 qpair failed and we were unable to recover it. 00:36:08.540 [2024-07-14 10:44:53.358451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.540 [2024-07-14 10:44:53.358480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.540 qpair failed and we were unable to recover it. 00:36:08.540 [2024-07-14 10:44:53.358681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.540 [2024-07-14 10:44:53.358711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.540 qpair failed and we were unable to recover it. 00:36:08.540 [2024-07-14 10:44:53.358953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.540 [2024-07-14 10:44:53.358983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.540 qpair failed and we were unable to recover it. 00:36:08.540 [2024-07-14 10:44:53.359178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.540 [2024-07-14 10:44:53.359208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.540 qpair failed and we were unable to recover it. 00:36:08.540 [2024-07-14 10:44:53.359463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.540 [2024-07-14 10:44:53.359494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.540 qpair failed and we were unable to recover it. 00:36:08.540 [2024-07-14 10:44:53.359756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.540 [2024-07-14 10:44:53.359786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.540 qpair failed and we were unable to recover it. 00:36:08.540 [2024-07-14 10:44:53.359999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.540 [2024-07-14 10:44:53.360029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.540 qpair failed and we were unable to recover it. 00:36:08.540 [2024-07-14 10:44:53.360280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.540 [2024-07-14 10:44:53.360311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.540 qpair failed and we were unable to recover it. 00:36:08.540 [2024-07-14 10:44:53.360487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.540 [2024-07-14 10:44:53.360517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.540 qpair failed and we were unable to recover it. 
00:36:08.540 [2024-07-14 10:44:53.360783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.540 [2024-07-14 10:44:53.360813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.540 qpair failed and we were unable to recover it. 00:36:08.540 [2024-07-14 10:44:53.361060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.540 [2024-07-14 10:44:53.361090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.540 qpair failed and we were unable to recover it. 00:36:08.540 [2024-07-14 10:44:53.361337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.540 [2024-07-14 10:44:53.361368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.540 qpair failed and we were unable to recover it. 00:36:08.540 [2024-07-14 10:44:53.361486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.540 [2024-07-14 10:44:53.361516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.540 qpair failed and we were unable to recover it. 00:36:08.540 [2024-07-14 10:44:53.361733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.540 [2024-07-14 10:44:53.361763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.540 qpair failed and we were unable to recover it. 00:36:08.540 [2024-07-14 10:44:53.361941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.540 [2024-07-14 10:44:53.361971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.540 qpair failed and we were unable to recover it. 00:36:08.540 [2024-07-14 10:44:53.362162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.540 [2024-07-14 10:44:53.362192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.540 qpair failed and we were unable to recover it. 00:36:08.540 [2024-07-14 10:44:53.362398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.540 [2024-07-14 10:44:53.362429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.540 qpair failed and we were unable to recover it. 00:36:08.540 [2024-07-14 10:44:53.362633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.540 [2024-07-14 10:44:53.362663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.540 qpair failed and we were unable to recover it. 00:36:08.540 [2024-07-14 10:44:53.362902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.540 [2024-07-14 10:44:53.362932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.540 qpair failed and we were unable to recover it. 
00:36:08.540 [2024-07-14 10:44:53.363143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.540 [2024-07-14 10:44:53.363173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.540 qpair failed and we were unable to recover it. 00:36:08.540 [2024-07-14 10:44:53.363396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.540 [2024-07-14 10:44:53.363427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.540 qpair failed and we were unable to recover it. 00:36:08.540 [2024-07-14 10:44:53.363544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.540 [2024-07-14 10:44:53.363574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.540 qpair failed and we were unable to recover it. 00:36:08.540 [2024-07-14 10:44:53.363763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.540 [2024-07-14 10:44:53.363793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.540 qpair failed and we were unable to recover it. 00:36:08.540 [2024-07-14 10:44:53.363912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.540 [2024-07-14 10:44:53.363942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.540 qpair failed and we were unable to recover it. 00:36:08.540 [2024-07-14 10:44:53.364141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.540 [2024-07-14 10:44:53.364175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.540 qpair failed and we were unable to recover it. 00:36:08.540 [2024-07-14 10:44:53.364385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.540 [2024-07-14 10:44:53.364416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.540 qpair failed and we were unable to recover it. 00:36:08.541 [2024-07-14 10:44:53.364610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.541 [2024-07-14 10:44:53.364640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.541 qpair failed and we were unable to recover it. 00:36:08.541 [2024-07-14 10:44:53.364819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.541 [2024-07-14 10:44:53.364848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.541 qpair failed and we were unable to recover it. 00:36:08.541 [2024-07-14 10:44:53.365042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.541 [2024-07-14 10:44:53.365072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.541 qpair failed and we were unable to recover it. 
00:36:08.541 [2024-07-14 10:44:53.365258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.541 [2024-07-14 10:44:53.365289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.541 qpair failed and we were unable to recover it. 00:36:08.541 [2024-07-14 10:44:53.365493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.541 [2024-07-14 10:44:53.365523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.541 qpair failed and we were unable to recover it. 00:36:08.541 [2024-07-14 10:44:53.365658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.541 [2024-07-14 10:44:53.365688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.541 qpair failed and we were unable to recover it. 00:36:08.541 [2024-07-14 10:44:53.365882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.541 [2024-07-14 10:44:53.365913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.541 qpair failed and we were unable to recover it. 00:36:08.541 [2024-07-14 10:44:53.366086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.541 [2024-07-14 10:44:53.366116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.541 qpair failed and we were unable to recover it. 00:36:08.541 [2024-07-14 10:44:53.366312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.541 [2024-07-14 10:44:53.366343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.541 qpair failed and we were unable to recover it. 00:36:08.541 [2024-07-14 10:44:53.366459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.541 [2024-07-14 10:44:53.366489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.541 qpair failed and we were unable to recover it. 00:36:08.541 [2024-07-14 10:44:53.366742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.541 [2024-07-14 10:44:53.366771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.541 qpair failed and we were unable to recover it. 00:36:08.541 [2024-07-14 10:44:53.366902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.541 [2024-07-14 10:44:53.366932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.541 qpair failed and we were unable to recover it. 00:36:08.541 [2024-07-14 10:44:53.367129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.541 [2024-07-14 10:44:53.367159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.541 qpair failed and we were unable to recover it. 
00:36:08.541 [2024-07-14 10:44:53.367398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.541 [2024-07-14 10:44:53.367428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.541 qpair failed and we were unable to recover it. 00:36:08.541 [2024-07-14 10:44:53.367610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.541 [2024-07-14 10:44:53.367640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.541 qpair failed and we were unable to recover it. 00:36:08.541 [2024-07-14 10:44:53.367912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.541 [2024-07-14 10:44:53.367942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.541 qpair failed and we were unable to recover it. 00:36:08.541 [2024-07-14 10:44:53.368145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.541 [2024-07-14 10:44:53.368174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.541 qpair failed and we were unable to recover it. 00:36:08.541 [2024-07-14 10:44:53.368455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.541 [2024-07-14 10:44:53.368491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.541 qpair failed and we were unable to recover it. 00:36:08.541 [2024-07-14 10:44:53.368701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.541 [2024-07-14 10:44:53.368730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.541 qpair failed and we were unable to recover it. 00:36:08.541 [2024-07-14 10:44:53.368991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.541 [2024-07-14 10:44:53.369021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.541 qpair failed and we were unable to recover it. 00:36:08.541 [2024-07-14 10:44:53.369267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.541 [2024-07-14 10:44:53.369299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.541 qpair failed and we were unable to recover it. 00:36:08.541 [2024-07-14 10:44:53.369480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.541 [2024-07-14 10:44:53.369509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.541 qpair failed and we were unable to recover it. 00:36:08.541 [2024-07-14 10:44:53.369728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.541 [2024-07-14 10:44:53.369757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.541 qpair failed and we were unable to recover it. 
00:36:08.541 [2024-07-14 10:44:53.369946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.541 [2024-07-14 10:44:53.369976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.541 qpair failed and we were unable to recover it. 00:36:08.541 [2024-07-14 10:44:53.370170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.541 [2024-07-14 10:44:53.370199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.541 qpair failed and we were unable to recover it. 00:36:08.541 [2024-07-14 10:44:53.370427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.541 [2024-07-14 10:44:53.370468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.541 qpair failed and we were unable to recover it. 00:36:08.541 [2024-07-14 10:44:53.370657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.541 [2024-07-14 10:44:53.370687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.541 qpair failed and we were unable to recover it. 00:36:08.541 [2024-07-14 10:44:53.370892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.541 [2024-07-14 10:44:53.370922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.541 qpair failed and we were unable to recover it. 00:36:08.541 [2024-07-14 10:44:53.371095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.541 [2024-07-14 10:44:53.371126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.541 qpair failed and we were unable to recover it. 00:36:08.541 [2024-07-14 10:44:53.371335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.541 [2024-07-14 10:44:53.371368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.541 qpair failed and we were unable to recover it. 00:36:08.541 [2024-07-14 10:44:53.371552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.541 [2024-07-14 10:44:53.371583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.541 qpair failed and we were unable to recover it. 00:36:08.541 [2024-07-14 10:44:53.371783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.541 [2024-07-14 10:44:53.371813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.541 qpair failed and we were unable to recover it. 00:36:08.541 [2024-07-14 10:44:53.372009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.541 [2024-07-14 10:44:53.372038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.541 qpair failed and we were unable to recover it. 
00:36:08.541 [2024-07-14 10:44:53.372153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.541 [2024-07-14 10:44:53.372183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.541 qpair failed and we were unable to recover it. 00:36:08.541 [2024-07-14 10:44:53.372381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.541 [2024-07-14 10:44:53.372412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.541 qpair failed and we were unable to recover it. 00:36:08.541 [2024-07-14 10:44:53.372551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.541 [2024-07-14 10:44:53.372581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.541 qpair failed and we were unable to recover it. 00:36:08.542 [2024-07-14 10:44:53.372797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.542 [2024-07-14 10:44:53.372826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.542 qpair failed and we were unable to recover it. 00:36:08.542 [2024-07-14 10:44:53.373017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.542 [2024-07-14 10:44:53.373047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.542 qpair failed and we were unable to recover it. 00:36:08.542 [2024-07-14 10:44:53.373276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.542 [2024-07-14 10:44:53.373306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.542 qpair failed and we were unable to recover it. 00:36:08.542 [2024-07-14 10:44:53.373519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.542 [2024-07-14 10:44:53.373549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.542 qpair failed and we were unable to recover it. 00:36:08.542 [2024-07-14 10:44:53.373751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.542 [2024-07-14 10:44:53.373781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.542 qpair failed and we were unable to recover it. 00:36:08.542 [2024-07-14 10:44:53.374037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.542 [2024-07-14 10:44:53.374067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.542 qpair failed and we were unable to recover it. 00:36:08.542 [2024-07-14 10:44:53.374214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.542 [2024-07-14 10:44:53.374253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.542 qpair failed and we were unable to recover it. 
00:36:08.542 [2024-07-14 10:44:53.374495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.542 [2024-07-14 10:44:53.374524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.542 qpair failed and we were unable to recover it. 00:36:08.542 [2024-07-14 10:44:53.374729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.542 [2024-07-14 10:44:53.374760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.542 qpair failed and we were unable to recover it. 00:36:08.542 [2024-07-14 10:44:53.374946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.542 [2024-07-14 10:44:53.374976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.542 qpair failed and we were unable to recover it. 00:36:08.542 [2024-07-14 10:44:53.375098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.542 [2024-07-14 10:44:53.375127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.542 qpair failed and we were unable to recover it. 00:36:08.542 [2024-07-14 10:44:53.375298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.542 [2024-07-14 10:44:53.375329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.542 qpair failed and we were unable to recover it. 00:36:08.542 [2024-07-14 10:44:53.375460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.542 [2024-07-14 10:44:53.375489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.542 qpair failed and we were unable to recover it. 00:36:08.542 [2024-07-14 10:44:53.375758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.542 [2024-07-14 10:44:53.375788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.542 qpair failed and we were unable to recover it. 00:36:08.542 [2024-07-14 10:44:53.375908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.542 [2024-07-14 10:44:53.375938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.542 qpair failed and we were unable to recover it. 00:36:08.542 [2024-07-14 10:44:53.376149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.542 [2024-07-14 10:44:53.376179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.542 qpair failed and we were unable to recover it. 00:36:08.542 [2024-07-14 10:44:53.376301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.542 [2024-07-14 10:44:53.376332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.542 qpair failed and we were unable to recover it. 
00:36:08.542 [2024-07-14 10:44:53.376524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.542 [2024-07-14 10:44:53.376554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.542 qpair failed and we were unable to recover it. 00:36:08.542 [2024-07-14 10:44:53.376763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.542 [2024-07-14 10:44:53.376793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.542 qpair failed and we were unable to recover it. 00:36:08.542 [2024-07-14 10:44:53.376918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.542 [2024-07-14 10:44:53.376949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.542 qpair failed and we were unable to recover it. 00:36:08.542 [2024-07-14 10:44:53.377196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.542 [2024-07-14 10:44:53.377233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.542 qpair failed and we were unable to recover it. 00:36:08.542 [2024-07-14 10:44:53.377415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.542 [2024-07-14 10:44:53.377450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.542 qpair failed and we were unable to recover it. 00:36:08.542 [2024-07-14 10:44:53.377627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.542 [2024-07-14 10:44:53.377656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.542 qpair failed and we were unable to recover it. 00:36:08.542 [2024-07-14 10:44:53.377861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.542 [2024-07-14 10:44:53.377891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.542 qpair failed and we were unable to recover it. 00:36:08.542 [2024-07-14 10:44:53.378084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.542 [2024-07-14 10:44:53.378114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.542 qpair failed and we were unable to recover it. 00:36:08.542 [2024-07-14 10:44:53.378353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.542 [2024-07-14 10:44:53.378384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.542 qpair failed and we were unable to recover it. 00:36:08.542 [2024-07-14 10:44:53.378584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.542 [2024-07-14 10:44:53.378614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.542 qpair failed and we were unable to recover it. 
00:36:08.542 [2024-07-14 10:44:53.378804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.542 [2024-07-14 10:44:53.378834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.542 qpair failed and we were unable to recover it. 00:36:08.542 [2024-07-14 10:44:53.379075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.542 [2024-07-14 10:44:53.379105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.542 qpair failed and we were unable to recover it. 00:36:08.542 [2024-07-14 10:44:53.379299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.542 [2024-07-14 10:44:53.379331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.542 qpair failed and we were unable to recover it. 00:36:08.542 [2024-07-14 10:44:53.379604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.542 [2024-07-14 10:44:53.379635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.542 qpair failed and we were unable to recover it. 00:36:08.542 [2024-07-14 10:44:53.379769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.542 [2024-07-14 10:44:53.379799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.542 qpair failed and we were unable to recover it. 00:36:08.542 [2024-07-14 10:44:53.380056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.542 [2024-07-14 10:44:53.380085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.542 qpair failed and we were unable to recover it. 00:36:08.542 [2024-07-14 10:44:53.380260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.542 [2024-07-14 10:44:53.380291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.542 qpair failed and we were unable to recover it. 00:36:08.542 [2024-07-14 10:44:53.380429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.542 [2024-07-14 10:44:53.380460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.542 qpair failed and we were unable to recover it. 00:36:08.542 [2024-07-14 10:44:53.380603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.542 [2024-07-14 10:44:53.380633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.542 qpair failed and we were unable to recover it. 00:36:08.542 [2024-07-14 10:44:53.380757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.542 [2024-07-14 10:44:53.380788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.542 qpair failed and we were unable to recover it. 
00:36:08.542 [2024-07-14 10:44:53.380965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.542 [2024-07-14 10:44:53.380994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.542 qpair failed and we were unable to recover it. 00:36:08.542 [2024-07-14 10:44:53.381240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.542 [2024-07-14 10:44:53.381271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.542 qpair failed and we were unable to recover it. 00:36:08.542 [2024-07-14 10:44:53.381438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.542 [2024-07-14 10:44:53.381469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.542 qpair failed and we were unable to recover it. 00:36:08.543 [2024-07-14 10:44:53.381701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.543 [2024-07-14 10:44:53.381730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.543 qpair failed and we were unable to recover it. 00:36:08.543 [2024-07-14 10:44:53.381953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.543 [2024-07-14 10:44:53.381983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.543 qpair failed and we were unable to recover it. 00:36:08.543 [2024-07-14 10:44:53.382124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.543 [2024-07-14 10:44:53.382154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.543 qpair failed and we were unable to recover it. 00:36:08.543 [2024-07-14 10:44:53.382290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.543 [2024-07-14 10:44:53.382321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.543 qpair failed and we were unable to recover it. 00:36:08.543 [2024-07-14 10:44:53.382566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.543 [2024-07-14 10:44:53.382596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.543 qpair failed and we were unable to recover it. 00:36:08.543 [2024-07-14 10:44:53.382775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.543 [2024-07-14 10:44:53.382805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.543 qpair failed and we were unable to recover it. 00:36:08.543 [2024-07-14 10:44:53.382991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.543 [2024-07-14 10:44:53.383021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.543 qpair failed and we were unable to recover it. 
00:36:08.543 [2024-07-14 10:44:53.383129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.543 [2024-07-14 10:44:53.383158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.543 qpair failed and we were unable to recover it. 00:36:08.543 [2024-07-14 10:44:53.383392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.543 [2024-07-14 10:44:53.383430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.543 qpair failed and we were unable to recover it. 00:36:08.543 [2024-07-14 10:44:53.383545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.543 [2024-07-14 10:44:53.383575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.543 qpair failed and we were unable to recover it. 00:36:08.543 [2024-07-14 10:44:53.383718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.543 [2024-07-14 10:44:53.383748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.543 qpair failed and we were unable to recover it. 00:36:08.543 [2024-07-14 10:44:53.383925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.543 [2024-07-14 10:44:53.383955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.543 qpair failed and we were unable to recover it. 00:36:08.543 [2024-07-14 10:44:53.384204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.543 [2024-07-14 10:44:53.384246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.543 qpair failed and we were unable to recover it. 00:36:08.543 [2024-07-14 10:44:53.384519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.543 [2024-07-14 10:44:53.384549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.543 qpair failed and we were unable to recover it. 00:36:08.543 [2024-07-14 10:44:53.384761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.543 [2024-07-14 10:44:53.384791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.543 qpair failed and we were unable to recover it. 00:36:08.543 [2024-07-14 10:44:53.385032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.543 [2024-07-14 10:44:53.385062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.543 qpair failed and we were unable to recover it. 00:36:08.543 [2024-07-14 10:44:53.385258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.543 [2024-07-14 10:44:53.385289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.543 qpair failed and we were unable to recover it. 
00:36:08.543 [2024-07-14 10:44:53.385563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.543 [2024-07-14 10:44:53.385592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.543 qpair failed and we were unable to recover it. 00:36:08.543 [2024-07-14 10:44:53.385848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.543 [2024-07-14 10:44:53.385878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.543 qpair failed and we were unable to recover it. 00:36:08.543 [2024-07-14 10:44:53.386068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.543 [2024-07-14 10:44:53.386098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.543 qpair failed and we were unable to recover it. 00:36:08.543 [2024-07-14 10:44:53.386221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.543 [2024-07-14 10:44:53.386259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.543 qpair failed and we were unable to recover it. 00:36:08.543 [2024-07-14 10:44:53.386534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.543 [2024-07-14 10:44:53.386570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.543 qpair failed and we were unable to recover it. 00:36:08.543 [2024-07-14 10:44:53.386782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.543 [2024-07-14 10:44:53.386811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.543 qpair failed and we were unable to recover it. 00:36:08.543 [2024-07-14 10:44:53.387053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.543 [2024-07-14 10:44:53.387082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.543 qpair failed and we were unable to recover it. 00:36:08.543 [2024-07-14 10:44:53.387254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.543 [2024-07-14 10:44:53.387285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.543 qpair failed and we were unable to recover it. 00:36:08.543 [2024-07-14 10:44:53.387501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.543 [2024-07-14 10:44:53.387531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.543 qpair failed and we were unable to recover it. 00:36:08.543 [2024-07-14 10:44:53.387715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.543 [2024-07-14 10:44:53.387744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.543 qpair failed and we were unable to recover it. 
00:36:08.543 [2024-07-14 10:44:53.387929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.543 [2024-07-14 10:44:53.387959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.543 qpair failed and we were unable to recover it. 00:36:08.543 [2024-07-14 10:44:53.388155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.543 [2024-07-14 10:44:53.388185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.543 qpair failed and we were unable to recover it. 00:36:08.543 [2024-07-14 10:44:53.388319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.543 [2024-07-14 10:44:53.388350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.543 qpair failed and we were unable to recover it. 00:36:08.543 [2024-07-14 10:44:53.388525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.543 [2024-07-14 10:44:53.388555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.543 qpair failed and we were unable to recover it. 00:36:08.543 [2024-07-14 10:44:53.388760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.543 [2024-07-14 10:44:53.388790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.543 qpair failed and we were unable to recover it. 00:36:08.543 [2024-07-14 10:44:53.388913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.543 [2024-07-14 10:44:53.388943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.543 qpair failed and we were unable to recover it. 00:36:08.543 [2024-07-14 10:44:53.389063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.543 [2024-07-14 10:44:53.389093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.543 qpair failed and we were unable to recover it. 00:36:08.543 [2024-07-14 10:44:53.389306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.543 [2024-07-14 10:44:53.389338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.543 qpair failed and we were unable to recover it. 00:36:08.543 [2024-07-14 10:44:53.389539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.543 [2024-07-14 10:44:53.389570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.543 qpair failed and we were unable to recover it. 00:36:08.543 [2024-07-14 10:44:53.389764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.543 [2024-07-14 10:44:53.389792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.543 qpair failed and we were unable to recover it. 
00:36:08.543 [2024-07-14 10:44:53.389979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.543 [2024-07-14 10:44:53.390009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.543 qpair failed and we were unable to recover it. 00:36:08.543 [2024-07-14 10:44:53.390142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.543 [2024-07-14 10:44:53.390171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.543 qpair failed and we were unable to recover it. 00:36:08.543 [2024-07-14 10:44:53.390306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.543 [2024-07-14 10:44:53.390336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.543 qpair failed and we were unable to recover it. 00:36:08.543 [2024-07-14 10:44:53.390553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.543 [2024-07-14 10:44:53.390583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.544 qpair failed and we were unable to recover it. 00:36:08.544 [2024-07-14 10:44:53.390784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.544 [2024-07-14 10:44:53.390814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.544 qpair failed and we were unable to recover it. 00:36:08.544 [2024-07-14 10:44:53.391010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.544 [2024-07-14 10:44:53.391040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.544 qpair failed and we were unable to recover it. 00:36:08.544 [2024-07-14 10:44:53.391240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.544 [2024-07-14 10:44:53.391271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.544 qpair failed and we were unable to recover it. 00:36:08.544 [2024-07-14 10:44:53.391551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.544 [2024-07-14 10:44:53.391580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.544 qpair failed and we were unable to recover it. 00:36:08.544 [2024-07-14 10:44:53.391769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.544 [2024-07-14 10:44:53.391799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.544 qpair failed and we were unable to recover it. 00:36:08.544 [2024-07-14 10:44:53.392032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.544 [2024-07-14 10:44:53.392062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.544 qpair failed and we were unable to recover it. 
00:36:08.544 [2024-07-14 10:44:53.392357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.544 [2024-07-14 10:44:53.392387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.544 qpair failed and we were unable to recover it. 00:36:08.544 [2024-07-14 10:44:53.392592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.544 [2024-07-14 10:44:53.392628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.544 qpair failed and we were unable to recover it. 00:36:08.544 [2024-07-14 10:44:53.392821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.544 [2024-07-14 10:44:53.392851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.544 qpair failed and we were unable to recover it. 00:36:08.544 [2024-07-14 10:44:53.393046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.544 [2024-07-14 10:44:53.393076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.544 qpair failed and we were unable to recover it. 00:36:08.544 [2024-07-14 10:44:53.393299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.544 [2024-07-14 10:44:53.393331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.544 qpair failed and we were unable to recover it. 00:36:08.544 [2024-07-14 10:44:53.393480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.544 [2024-07-14 10:44:53.393511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.544 qpair failed and we were unable to recover it. 00:36:08.544 [2024-07-14 10:44:53.393699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.544 [2024-07-14 10:44:53.393729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.544 qpair failed and we were unable to recover it. 00:36:08.544 [2024-07-14 10:44:53.393920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.544 [2024-07-14 10:44:53.393949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.544 qpair failed and we were unable to recover it. 00:36:08.544 [2024-07-14 10:44:53.394197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.544 [2024-07-14 10:44:53.394235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.544 qpair failed and we were unable to recover it. 00:36:08.544 [2024-07-14 10:44:53.394427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.544 [2024-07-14 10:44:53.394457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.544 qpair failed and we were unable to recover it. 
00:36:08.544 [2024-07-14 10:44:53.394644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.544 [2024-07-14 10:44:53.394674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.544 qpair failed and we were unable to recover it. 00:36:08.544 [2024-07-14 10:44:53.394811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.544 [2024-07-14 10:44:53.394842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.544 qpair failed and we were unable to recover it. 00:36:08.544 [2024-07-14 10:44:53.395014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.544 [2024-07-14 10:44:53.395043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.544 qpair failed and we were unable to recover it. 00:36:08.544 [2024-07-14 10:44:53.395187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.544 [2024-07-14 10:44:53.395216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.544 qpair failed and we were unable to recover it. 00:36:08.544 [2024-07-14 10:44:53.395449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.544 [2024-07-14 10:44:53.395479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.544 qpair failed and we were unable to recover it. 00:36:08.544 [2024-07-14 10:44:53.395611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.544 [2024-07-14 10:44:53.395641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.544 qpair failed and we were unable to recover it. 00:36:08.544 [2024-07-14 10:44:53.395834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.544 [2024-07-14 10:44:53.395864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.544 qpair failed and we were unable to recover it. 00:36:08.544 [2024-07-14 10:44:53.396056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.544 [2024-07-14 10:44:53.396086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.544 qpair failed and we were unable to recover it. 00:36:08.544 [2024-07-14 10:44:53.396391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.544 [2024-07-14 10:44:53.396421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.544 qpair failed and we were unable to recover it. 00:36:08.544 [2024-07-14 10:44:53.396667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.544 [2024-07-14 10:44:53.396697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.544 qpair failed and we were unable to recover it. 
00:36:08.544 [2024-07-14 10:44:53.396889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.544 [2024-07-14 10:44:53.396920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.544 qpair failed and we were unable to recover it. 00:36:08.544 [2024-07-14 10:44:53.397038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.544 [2024-07-14 10:44:53.397068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.544 qpair failed and we were unable to recover it. 00:36:08.544 [2024-07-14 10:44:53.397305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.544 [2024-07-14 10:44:53.397335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.544 qpair failed and we were unable to recover it. 00:36:08.544 [2024-07-14 10:44:53.397581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.544 [2024-07-14 10:44:53.397611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.544 qpair failed and we were unable to recover it. 00:36:08.544 [2024-07-14 10:44:53.397876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.544 [2024-07-14 10:44:53.397906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.544 qpair failed and we were unable to recover it. 00:36:08.544 [2024-07-14 10:44:53.398025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.544 [2024-07-14 10:44:53.398054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.544 qpair failed and we were unable to recover it. 00:36:08.544 [2024-07-14 10:44:53.398185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.544 [2024-07-14 10:44:53.398215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.544 qpair failed and we were unable to recover it. 00:36:08.544 [2024-07-14 10:44:53.398461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.544 [2024-07-14 10:44:53.398491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.544 qpair failed and we were unable to recover it. 00:36:08.544 [2024-07-14 10:44:53.398692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.544 [2024-07-14 10:44:53.398727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.544 qpair failed and we were unable to recover it. 00:36:08.544 [2024-07-14 10:44:53.398929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.544 [2024-07-14 10:44:53.398958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.544 qpair failed and we were unable to recover it. 
00:36:08.544 [2024-07-14 10:44:53.399166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.544 [2024-07-14 10:44:53.399195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.544 qpair failed and we were unable to recover it. 00:36:08.544 [2024-07-14 10:44:53.399496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.544 [2024-07-14 10:44:53.399530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.544 qpair failed and we were unable to recover it. 00:36:08.544 [2024-07-14 10:44:53.399737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.544 [2024-07-14 10:44:53.399767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.544 qpair failed and we were unable to recover it. 00:36:08.544 [2024-07-14 10:44:53.399965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.544 [2024-07-14 10:44:53.399995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.544 qpair failed and we were unable to recover it. 00:36:08.544 [2024-07-14 10:44:53.400302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.544 [2024-07-14 10:44:53.400332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.544 qpair failed and we were unable to recover it. 00:36:08.545 [2024-07-14 10:44:53.400519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.545 [2024-07-14 10:44:53.400549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.545 qpair failed and we were unable to recover it. 00:36:08.545 [2024-07-14 10:44:53.400729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.545 [2024-07-14 10:44:53.400758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.545 qpair failed and we were unable to recover it. 00:36:08.545 [2024-07-14 10:44:53.401010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.545 [2024-07-14 10:44:53.401040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.545 qpair failed and we were unable to recover it. 00:36:08.545 [2024-07-14 10:44:53.401256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.545 [2024-07-14 10:44:53.401287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.545 qpair failed and we were unable to recover it. 00:36:08.545 [2024-07-14 10:44:53.401548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.545 [2024-07-14 10:44:53.401579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.545 qpair failed and we were unable to recover it. 
00:36:08.545 [2024-07-14 10:44:53.401819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.545 [2024-07-14 10:44:53.401848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.545 qpair failed and we were unable to recover it. 00:36:08.545 [2024-07-14 10:44:53.402039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.545 [2024-07-14 10:44:53.402069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.545 qpair failed and we were unable to recover it. 00:36:08.545 [2024-07-14 10:44:53.402283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.545 [2024-07-14 10:44:53.402314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.545 qpair failed and we were unable to recover it. 00:36:08.545 [2024-07-14 10:44:53.402486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.545 [2024-07-14 10:44:53.402516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.545 qpair failed and we were unable to recover it. 00:36:08.545 [2024-07-14 10:44:53.402707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.545 [2024-07-14 10:44:53.402737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.545 qpair failed and we were unable to recover it. 00:36:08.545 [2024-07-14 10:44:53.402875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.545 [2024-07-14 10:44:53.402905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.545 qpair failed and we were unable to recover it. 00:36:08.545 [2024-07-14 10:44:53.403016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.545 [2024-07-14 10:44:53.403045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.545 qpair failed and we were unable to recover it. 00:36:08.545 [2024-07-14 10:44:53.403177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.545 [2024-07-14 10:44:53.403207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.545 qpair failed and we were unable to recover it. 00:36:08.545 [2024-07-14 10:44:53.403387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.545 [2024-07-14 10:44:53.403417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.545 qpair failed and we were unable to recover it. 00:36:08.545 [2024-07-14 10:44:53.403545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.545 [2024-07-14 10:44:53.403574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.545 qpair failed and we were unable to recover it. 
00:36:08.545 [2024-07-14 10:44:53.403706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.545 [2024-07-14 10:44:53.403735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.545 qpair failed and we were unable to recover it. 00:36:08.545 [2024-07-14 10:44:53.403906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.545 [2024-07-14 10:44:53.403935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.545 qpair failed and we were unable to recover it. 00:36:08.545 [2024-07-14 10:44:53.404120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.545 [2024-07-14 10:44:53.404150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.545 qpair failed and we were unable to recover it. 00:36:08.545 [2024-07-14 10:44:53.404347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.545 [2024-07-14 10:44:53.404378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.545 qpair failed and we were unable to recover it. 00:36:08.545 [2024-07-14 10:44:53.404573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.545 [2024-07-14 10:44:53.404603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.545 qpair failed and we were unable to recover it. 00:36:08.545 [2024-07-14 10:44:53.404806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.545 [2024-07-14 10:44:53.404835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.545 qpair failed and we were unable to recover it. 00:36:08.545 [2024-07-14 10:44:53.404978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.545 [2024-07-14 10:44:53.405007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.545 qpair failed and we were unable to recover it. 00:36:08.545 [2024-07-14 10:44:53.405221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.545 [2024-07-14 10:44:53.405268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.545 qpair failed and we were unable to recover it. 00:36:08.545 [2024-07-14 10:44:53.405405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.545 [2024-07-14 10:44:53.405435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.545 qpair failed and we were unable to recover it. 00:36:08.545 [2024-07-14 10:44:53.405631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.545 [2024-07-14 10:44:53.405661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.545 qpair failed and we were unable to recover it. 
00:36:08.545 [2024-07-14 10:44:53.405800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.545 [2024-07-14 10:44:53.405829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.545 qpair failed and we were unable to recover it. 00:36:08.545 [2024-07-14 10:44:53.406093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.545 [2024-07-14 10:44:53.406123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.545 qpair failed and we were unable to recover it. 00:36:08.545 [2024-07-14 10:44:53.406371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.545 [2024-07-14 10:44:53.406402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.545 qpair failed and we were unable to recover it. 00:36:08.545 [2024-07-14 10:44:53.406547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.545 [2024-07-14 10:44:53.406577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.545 qpair failed and we were unable to recover it. 00:36:08.545 [2024-07-14 10:44:53.406816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.545 [2024-07-14 10:44:53.406846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.545 qpair failed and we were unable to recover it. 00:36:08.545 [2024-07-14 10:44:53.407112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.545 [2024-07-14 10:44:53.407142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.545 qpair failed and we were unable to recover it. 00:36:08.545 [2024-07-14 10:44:53.407279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.545 [2024-07-14 10:44:53.407309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.545 qpair failed and we were unable to recover it. 00:36:08.545 [2024-07-14 10:44:53.407431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.545 [2024-07-14 10:44:53.407461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.545 qpair failed and we were unable to recover it. 00:36:08.545 [2024-07-14 10:44:53.407674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.545 [2024-07-14 10:44:53.407709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.545 qpair failed and we were unable to recover it. 00:36:08.545 [2024-07-14 10:44:53.407976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.545 [2024-07-14 10:44:53.408005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.545 qpair failed and we were unable to recover it. 
00:36:08.545 [2024-07-14 10:44:53.408250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.545 [2024-07-14 10:44:53.408280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.545 qpair failed and we were unable to recover it. 00:36:08.545 [2024-07-14 10:44:53.408516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.545 [2024-07-14 10:44:53.408546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.546 qpair failed and we were unable to recover it. 00:36:08.546 [2024-07-14 10:44:53.408735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.546 [2024-07-14 10:44:53.408765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.546 qpair failed and we were unable to recover it. 00:36:08.546 [2024-07-14 10:44:53.409008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.546 [2024-07-14 10:44:53.409037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.546 qpair failed and we were unable to recover it. 00:36:08.546 [2024-07-14 10:44:53.409252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.546 [2024-07-14 10:44:53.409283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.546 qpair failed and we were unable to recover it. 00:36:08.546 [2024-07-14 10:44:53.409544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.546 [2024-07-14 10:44:53.409573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.546 qpair failed and we were unable to recover it. 00:36:08.546 [2024-07-14 10:44:53.409692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.546 [2024-07-14 10:44:53.409722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.546 qpair failed and we were unable to recover it. 00:36:08.546 [2024-07-14 10:44:53.409900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.546 [2024-07-14 10:44:53.409930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.546 qpair failed and we were unable to recover it. 00:36:08.546 [2024-07-14 10:44:53.410105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.546 [2024-07-14 10:44:53.410134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.546 qpair failed and we were unable to recover it. 00:36:08.546 [2024-07-14 10:44:53.410374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.546 [2024-07-14 10:44:53.410405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.546 qpair failed and we were unable to recover it. 
00:36:08.546 [2024-07-14 10:44:53.410537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.546 [2024-07-14 10:44:53.410566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.546 qpair failed and we were unable to recover it. 00:36:08.546 [2024-07-14 10:44:53.410754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.546 [2024-07-14 10:44:53.410784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.546 qpair failed and we were unable to recover it. 00:36:08.546 [2024-07-14 10:44:53.411055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.546 [2024-07-14 10:44:53.411085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.546 qpair failed and we were unable to recover it. 00:36:08.546 [2024-07-14 10:44:53.411208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.546 [2024-07-14 10:44:53.411245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.546 qpair failed and we were unable to recover it. 00:36:08.546 [2024-07-14 10:44:53.411382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.546 [2024-07-14 10:44:53.411412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.546 qpair failed and we were unable to recover it. 00:36:08.546 [2024-07-14 10:44:53.411621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.546 [2024-07-14 10:44:53.411650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.546 qpair failed and we were unable to recover it. 00:36:08.546 [2024-07-14 10:44:53.411869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.546 [2024-07-14 10:44:53.411899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.546 qpair failed and we were unable to recover it. 00:36:08.546 [2024-07-14 10:44:53.412097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.546 [2024-07-14 10:44:53.412128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.546 qpair failed and we were unable to recover it. 00:36:08.546 [2024-07-14 10:44:53.412330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.546 [2024-07-14 10:44:53.412360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.546 qpair failed and we were unable to recover it. 00:36:08.546 [2024-07-14 10:44:53.412491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.546 [2024-07-14 10:44:53.412521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.546 qpair failed and we were unable to recover it. 
00:36:08.546 [2024-07-14 10:44:53.412645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.546 [2024-07-14 10:44:53.412674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.546 qpair failed and we were unable to recover it. 00:36:08.546 [2024-07-14 10:44:53.412856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.546 [2024-07-14 10:44:53.412886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.546 qpair failed and we were unable to recover it. 00:36:08.546 [2024-07-14 10:44:53.413136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.546 [2024-07-14 10:44:53.413166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.546 qpair failed and we were unable to recover it. 00:36:08.546 [2024-07-14 10:44:53.413350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.546 [2024-07-14 10:44:53.413380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.546 qpair failed and we were unable to recover it. 00:36:08.546 [2024-07-14 10:44:53.413646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.546 [2024-07-14 10:44:53.413676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.546 qpair failed and we were unable to recover it. 00:36:08.546 [2024-07-14 10:44:53.413864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.546 [2024-07-14 10:44:53.413894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.546 qpair failed and we were unable to recover it. 00:36:08.546 [2024-07-14 10:44:53.414013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.546 [2024-07-14 10:44:53.414042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.546 qpair failed and we were unable to recover it. 00:36:08.546 [2024-07-14 10:44:53.414183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.546 [2024-07-14 10:44:53.414213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.546 qpair failed and we were unable to recover it. 00:36:08.546 [2024-07-14 10:44:53.414414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.546 [2024-07-14 10:44:53.414445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.546 qpair failed and we were unable to recover it. 00:36:08.546 [2024-07-14 10:44:53.414616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.546 [2024-07-14 10:44:53.414646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.546 qpair failed and we were unable to recover it. 
00:36:08.546 [2024-07-14 10:44:53.414829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.546 [2024-07-14 10:44:53.414859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.546 qpair failed and we were unable to recover it. 00:36:08.546 [2024-07-14 10:44:53.415037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.546 [2024-07-14 10:44:53.415066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.546 qpair failed and we were unable to recover it. 00:36:08.546 [2024-07-14 10:44:53.415263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.546 [2024-07-14 10:44:53.415294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.546 qpair failed and we were unable to recover it. 00:36:08.546 [2024-07-14 10:44:53.415417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.546 [2024-07-14 10:44:53.415446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.546 qpair failed and we were unable to recover it. 00:36:08.546 [2024-07-14 10:44:53.415634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.546 [2024-07-14 10:44:53.415664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.546 qpair failed and we were unable to recover it. 00:36:08.546 [2024-07-14 10:44:53.415857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.546 [2024-07-14 10:44:53.415886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.546 qpair failed and we were unable to recover it. 00:36:08.546 [2024-07-14 10:44:53.416124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.546 [2024-07-14 10:44:53.416154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.546 qpair failed and we were unable to recover it. 00:36:08.546 [2024-07-14 10:44:53.416342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.546 [2024-07-14 10:44:53.416372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.546 qpair failed and we were unable to recover it. 00:36:08.546 [2024-07-14 10:44:53.416621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.546 [2024-07-14 10:44:53.416655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.546 qpair failed and we were unable to recover it. 00:36:08.546 [2024-07-14 10:44:53.416790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.546 [2024-07-14 10:44:53.416820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.546 qpair failed and we were unable to recover it. 
00:36:08.546 [2024-07-14 10:44:53.417013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.547 [2024-07-14 10:44:53.417042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.547 qpair failed and we were unable to recover it. 00:36:08.547 [2024-07-14 10:44:53.417164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.547 [2024-07-14 10:44:53.417193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.547 qpair failed and we were unable to recover it. 00:36:08.547 [2024-07-14 10:44:53.417385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.547 [2024-07-14 10:44:53.417415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.547 qpair failed and we were unable to recover it. 00:36:08.547 [2024-07-14 10:44:53.417661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.547 [2024-07-14 10:44:53.417691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.547 qpair failed and we were unable to recover it. 00:36:08.547 [2024-07-14 10:44:53.417913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.547 [2024-07-14 10:44:53.417943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.547 qpair failed and we were unable to recover it. 00:36:08.547 [2024-07-14 10:44:53.418184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.547 [2024-07-14 10:44:53.418213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.547 qpair failed and we were unable to recover it. 00:36:08.547 [2024-07-14 10:44:53.418432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.547 [2024-07-14 10:44:53.418463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.547 qpair failed and we were unable to recover it. 00:36:08.547 [2024-07-14 10:44:53.418579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.547 [2024-07-14 10:44:53.418608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.547 qpair failed and we were unable to recover it. 00:36:08.547 [2024-07-14 10:44:53.418745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.547 [2024-07-14 10:44:53.418775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.547 qpair failed and we were unable to recover it. 00:36:08.547 [2024-07-14 10:44:53.418892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.547 [2024-07-14 10:44:53.418922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.547 qpair failed and we were unable to recover it. 
00:36:08.547 [2024-07-14 10:44:53.419096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.547 [2024-07-14 10:44:53.419125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.547 qpair failed and we were unable to recover it. 00:36:08.547 [2024-07-14 10:44:53.419309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.547 [2024-07-14 10:44:53.419340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.547 qpair failed and we were unable to recover it. 00:36:08.547 [2024-07-14 10:44:53.419478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.547 [2024-07-14 10:44:53.419507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.547 qpair failed and we were unable to recover it. 00:36:08.547 [2024-07-14 10:44:53.419685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.547 [2024-07-14 10:44:53.419714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.547 qpair failed and we were unable to recover it. 00:36:08.547 [2024-07-14 10:44:53.419957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.547 [2024-07-14 10:44:53.419987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.547 qpair failed and we were unable to recover it. 00:36:08.547 [2024-07-14 10:44:53.420110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.547 [2024-07-14 10:44:53.420139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.547 qpair failed and we were unable to recover it. 00:36:08.547 [2024-07-14 10:44:53.420323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.547 [2024-07-14 10:44:53.420353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.547 qpair failed and we were unable to recover it. 00:36:08.547 [2024-07-14 10:44:53.420479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.547 [2024-07-14 10:44:53.420508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.547 qpair failed and we were unable to recover it. 00:36:08.547 [2024-07-14 10:44:53.420754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.547 [2024-07-14 10:44:53.420783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.547 qpair failed and we were unable to recover it. 00:36:08.547 [2024-07-14 10:44:53.421066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.547 [2024-07-14 10:44:53.421096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.547 qpair failed and we were unable to recover it. 
00:36:08.547 [2024-07-14 10:44:53.421377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.547 [2024-07-14 10:44:53.421407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.547 qpair failed and we were unable to recover it. 00:36:08.547 [2024-07-14 10:44:53.421601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.547 [2024-07-14 10:44:53.421631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.547 qpair failed and we were unable to recover it. 00:36:08.547 [2024-07-14 10:44:53.421770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.547 [2024-07-14 10:44:53.421798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.547 qpair failed and we were unable to recover it. 00:36:08.547 [2024-07-14 10:44:53.422051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.547 [2024-07-14 10:44:53.422080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.547 qpair failed and we were unable to recover it. 00:36:08.547 [2024-07-14 10:44:53.422271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.547 [2024-07-14 10:44:53.422302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.547 qpair failed and we were unable to recover it. 00:36:08.547 [2024-07-14 10:44:53.422514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.547 [2024-07-14 10:44:53.422552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.547 qpair failed and we were unable to recover it. 00:36:08.547 [2024-07-14 10:44:53.422744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.547 [2024-07-14 10:44:53.422774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.547 qpair failed and we were unable to recover it. 00:36:08.547 [2024-07-14 10:44:53.423023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.547 [2024-07-14 10:44:53.423053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.547 qpair failed and we were unable to recover it. 00:36:08.547 [2024-07-14 10:44:53.423275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.547 [2024-07-14 10:44:53.423307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.547 qpair failed and we were unable to recover it. 00:36:08.547 [2024-07-14 10:44:53.423504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.547 [2024-07-14 10:44:53.423534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.547 qpair failed and we were unable to recover it. 
00:36:08.547 [2024-07-14 10:44:53.423735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.547 [2024-07-14 10:44:53.423765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.547 qpair failed and we were unable to recover it. 00:36:08.547 [2024-07-14 10:44:53.423874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.547 [2024-07-14 10:44:53.423904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.547 qpair failed and we were unable to recover it. 00:36:08.547 [2024-07-14 10:44:53.424103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.547 [2024-07-14 10:44:53.424133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.547 qpair failed and we were unable to recover it. 00:36:08.547 [2024-07-14 10:44:53.424326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.547 [2024-07-14 10:44:53.424356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.547 qpair failed and we were unable to recover it. 00:36:08.547 [2024-07-14 10:44:53.424503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.547 [2024-07-14 10:44:53.424533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.547 qpair failed and we were unable to recover it. 00:36:08.547 [2024-07-14 10:44:53.424798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.547 [2024-07-14 10:44:53.424828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.547 qpair failed and we were unable to recover it. 00:36:08.547 [2024-07-14 10:44:53.424972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.547 [2024-07-14 10:44:53.425002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.547 qpair failed and we were unable to recover it. 00:36:08.547 [2024-07-14 10:44:53.425247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.547 [2024-07-14 10:44:53.425277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.547 qpair failed and we were unable to recover it. 00:36:08.547 [2024-07-14 10:44:53.425403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.547 [2024-07-14 10:44:53.425438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.547 qpair failed and we were unable to recover it. 00:36:08.547 [2024-07-14 10:44:53.425622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.547 [2024-07-14 10:44:53.425652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.547 qpair failed and we were unable to recover it. 
00:36:08.547 [2024-07-14 10:44:53.425814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.547 [2024-07-14 10:44:53.425845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.547 qpair failed and we were unable to recover it. 00:36:08.548 [2024-07-14 10:44:53.426097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.548 [2024-07-14 10:44:53.426127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.548 qpair failed and we were unable to recover it. 00:36:08.548 [2024-07-14 10:44:53.426266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.548 [2024-07-14 10:44:53.426297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.548 qpair failed and we were unable to recover it. 00:36:08.548 [2024-07-14 10:44:53.426410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.548 [2024-07-14 10:44:53.426440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.548 qpair failed and we were unable to recover it. 00:36:08.548 [2024-07-14 10:44:53.426576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.548 [2024-07-14 10:44:53.426606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.548 qpair failed and we were unable to recover it. 00:36:08.548 [2024-07-14 10:44:53.426824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.548 [2024-07-14 10:44:53.426854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.548 qpair failed and we were unable to recover it. 00:36:08.548 [2024-07-14 10:44:53.426977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.548 [2024-07-14 10:44:53.427006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.548 qpair failed and we were unable to recover it. 00:36:08.548 [2024-07-14 10:44:53.427246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.548 [2024-07-14 10:44:53.427277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.548 qpair failed and we were unable to recover it. 00:36:08.548 [2024-07-14 10:44:53.427533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.548 [2024-07-14 10:44:53.427563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.548 qpair failed and we were unable to recover it. 00:36:08.548 [2024-07-14 10:44:53.427753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.548 [2024-07-14 10:44:53.427782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.548 qpair failed and we were unable to recover it. 
00:36:08.548 [2024-07-14 10:44:53.427975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.548 [2024-07-14 10:44:53.428005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.548 qpair failed and we were unable to recover it. 00:36:08.548 [2024-07-14 10:44:53.428278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.548 [2024-07-14 10:44:53.428309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.548 qpair failed and we were unable to recover it. 00:36:08.548 [2024-07-14 10:44:53.428577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.548 [2024-07-14 10:44:53.428607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.548 qpair failed and we were unable to recover it. 00:36:08.548 [2024-07-14 10:44:53.428863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.548 [2024-07-14 10:44:53.428892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.548 qpair failed and we were unable to recover it. 00:36:08.548 [2024-07-14 10:44:53.429043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.548 [2024-07-14 10:44:53.429074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.548 qpair failed and we were unable to recover it. 00:36:08.548 [2024-07-14 10:44:53.429285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.548 [2024-07-14 10:44:53.429316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.548 qpair failed and we were unable to recover it. 00:36:08.548 [2024-07-14 10:44:53.429559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.548 [2024-07-14 10:44:53.429589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.548 qpair failed and we were unable to recover it. 00:36:08.548 [2024-07-14 10:44:53.429709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.548 [2024-07-14 10:44:53.429739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.548 qpair failed and we were unable to recover it. 00:36:08.548 [2024-07-14 10:44:53.429927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.548 [2024-07-14 10:44:53.429957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.548 qpair failed and we were unable to recover it. 00:36:08.548 [2024-07-14 10:44:53.430067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.548 [2024-07-14 10:44:53.430097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.548 qpair failed and we were unable to recover it. 
00:36:08.548 [2024-07-14 10:44:53.430344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.548 [2024-07-14 10:44:53.430375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.548 qpair failed and we were unable to recover it. 00:36:08.548 [2024-07-14 10:44:53.430557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.548 [2024-07-14 10:44:53.430587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.548 qpair failed and we were unable to recover it. 00:36:08.548 [2024-07-14 10:44:53.430849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.548 [2024-07-14 10:44:53.430879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.548 qpair failed and we were unable to recover it. 00:36:08.548 [2024-07-14 10:44:53.431039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.548 [2024-07-14 10:44:53.431069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.548 qpair failed and we were unable to recover it. 00:36:08.548 [2024-07-14 10:44:53.431254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.548 [2024-07-14 10:44:53.431284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.548 qpair failed and we were unable to recover it. 00:36:08.548 [2024-07-14 10:44:53.431501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.548 [2024-07-14 10:44:53.431532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.548 qpair failed and we were unable to recover it. 00:36:08.548 [2024-07-14 10:44:53.431683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.548 [2024-07-14 10:44:53.431713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.548 qpair failed and we were unable to recover it. 00:36:08.548 [2024-07-14 10:44:53.431954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.548 [2024-07-14 10:44:53.431984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.548 qpair failed and we were unable to recover it. 00:36:08.548 [2024-07-14 10:44:53.432108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.548 [2024-07-14 10:44:53.432137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.548 qpair failed and we were unable to recover it. 00:36:08.548 [2024-07-14 10:44:53.432350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.548 [2024-07-14 10:44:53.432381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.548 qpair failed and we were unable to recover it. 
00:36:08.548 [2024-07-14 10:44:53.432560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.548 [2024-07-14 10:44:53.432589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.548 qpair failed and we were unable to recover it. 00:36:08.548 [2024-07-14 10:44:53.432835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.548 [2024-07-14 10:44:53.432864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.548 qpair failed and we were unable to recover it. 00:36:08.548 [2024-07-14 10:44:53.432997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.548 [2024-07-14 10:44:53.433027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.548 qpair failed and we were unable to recover it. 00:36:08.548 [2024-07-14 10:44:53.433288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.548 [2024-07-14 10:44:53.433318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.548 qpair failed and we were unable to recover it. 00:36:08.548 [2024-07-14 10:44:53.433509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.548 [2024-07-14 10:44:53.433539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.548 qpair failed and we were unable to recover it. 00:36:08.548 [2024-07-14 10:44:53.433756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.548 [2024-07-14 10:44:53.433786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.548 qpair failed and we were unable to recover it. 00:36:08.548 [2024-07-14 10:44:53.433959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.548 [2024-07-14 10:44:53.433990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.548 qpair failed and we were unable to recover it. 00:36:08.548 [2024-07-14 10:44:53.434208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.548 [2024-07-14 10:44:53.434343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.548 qpair failed and we were unable to recover it. 00:36:08.548 [2024-07-14 10:44:53.434469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.548 [2024-07-14 10:44:53.434504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.548 qpair failed and we were unable to recover it. 00:36:08.548 [2024-07-14 10:44:53.434675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.548 [2024-07-14 10:44:53.434705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.548 qpair failed and we were unable to recover it. 
00:36:08.548 [2024-07-14 10:44:53.434827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.548 [2024-07-14 10:44:53.434857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.548 qpair failed and we were unable to recover it. 00:36:08.549 [2024-07-14 10:44:53.435042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.549 [2024-07-14 10:44:53.435072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.549 qpair failed and we were unable to recover it. 00:36:08.549 [2024-07-14 10:44:53.435255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.549 [2024-07-14 10:44:53.435286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.549 qpair failed and we were unable to recover it. 00:36:08.549 [2024-07-14 10:44:53.435460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.549 [2024-07-14 10:44:53.435490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.549 qpair failed and we were unable to recover it. 00:36:08.549 [2024-07-14 10:44:53.435688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.549 [2024-07-14 10:44:53.435718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.549 qpair failed and we were unable to recover it. 00:36:08.549 [2024-07-14 10:44:53.435911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.549 [2024-07-14 10:44:53.435941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.549 qpair failed and we were unable to recover it. 00:36:08.549 [2024-07-14 10:44:53.436154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.549 [2024-07-14 10:44:53.436184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.549 qpair failed and we were unable to recover it. 00:36:08.549 [2024-07-14 10:44:53.436437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.549 [2024-07-14 10:44:53.436469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.549 qpair failed and we were unable to recover it. 00:36:08.549 [2024-07-14 10:44:53.436647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.549 [2024-07-14 10:44:53.436676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.549 qpair failed and we were unable to recover it. 00:36:08.549 [2024-07-14 10:44:53.436807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.549 [2024-07-14 10:44:53.436836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.549 qpair failed and we were unable to recover it. 
00:36:08.549 [2024-07-14 10:44:53.437081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.549 [2024-07-14 10:44:53.437111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.549 qpair failed and we were unable to recover it. 00:36:08.549 [2024-07-14 10:44:53.437331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.549 [2024-07-14 10:44:53.437362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.549 qpair failed and we were unable to recover it. 00:36:08.549 [2024-07-14 10:44:53.437485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.549 [2024-07-14 10:44:53.437515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.549 qpair failed and we were unable to recover it. 00:36:08.549 [2024-07-14 10:44:53.437712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.549 [2024-07-14 10:44:53.437742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.549 qpair failed and we were unable to recover it. 00:36:08.549 [2024-07-14 10:44:53.437918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.549 [2024-07-14 10:44:53.437948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.549 qpair failed and we were unable to recover it. 00:36:08.549 [2024-07-14 10:44:53.438072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.549 [2024-07-14 10:44:53.438102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.549 qpair failed and we were unable to recover it. 00:36:08.549 [2024-07-14 10:44:53.438245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.549 [2024-07-14 10:44:53.438275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.549 qpair failed and we were unable to recover it. 00:36:08.549 [2024-07-14 10:44:53.438471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.549 [2024-07-14 10:44:53.438501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.549 qpair failed and we were unable to recover it. 00:36:08.549 [2024-07-14 10:44:53.438676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.549 [2024-07-14 10:44:53.438706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.549 qpair failed and we were unable to recover it. 00:36:08.549 [2024-07-14 10:44:53.438850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.549 [2024-07-14 10:44:53.438880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.549 qpair failed and we were unable to recover it. 
00:36:08.549 [2024-07-14 10:44:53.439067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.549 [2024-07-14 10:44:53.439098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.549 qpair failed and we were unable to recover it. 00:36:08.549 [2024-07-14 10:44:53.439299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.549 [2024-07-14 10:44:53.439330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.549 qpair failed and we were unable to recover it. 00:36:08.549 [2024-07-14 10:44:53.439577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.549 [2024-07-14 10:44:53.439607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.549 qpair failed and we were unable to recover it. 00:36:08.549 [2024-07-14 10:44:53.439785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.549 [2024-07-14 10:44:53.439815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.549 qpair failed and we were unable to recover it. 00:36:08.549 [2024-07-14 10:44:53.439989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.549 [2024-07-14 10:44:53.440018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.549 qpair failed and we were unable to recover it. 00:36:08.549 [2024-07-14 10:44:53.440222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.549 [2024-07-14 10:44:53.440261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.549 qpair failed and we were unable to recover it. 00:36:08.549 [2024-07-14 10:44:53.440535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.549 [2024-07-14 10:44:53.440565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.549 qpair failed and we were unable to recover it. 00:36:08.549 [2024-07-14 10:44:53.440754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.549 [2024-07-14 10:44:53.440784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.549 qpair failed and we were unable to recover it. 00:36:08.549 [2024-07-14 10:44:53.441051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.549 [2024-07-14 10:44:53.441080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.549 qpair failed and we were unable to recover it. 00:36:08.549 [2024-07-14 10:44:53.441204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.549 [2024-07-14 10:44:53.441253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.549 qpair failed and we were unable to recover it. 
00:36:08.549 [2024-07-14 10:44:53.441457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.549 [2024-07-14 10:44:53.441486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.549 qpair failed and we were unable to recover it. 00:36:08.549 [2024-07-14 10:44:53.441742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.549 [2024-07-14 10:44:53.441772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.549 qpair failed and we were unable to recover it. 00:36:08.549 [2024-07-14 10:44:53.442032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.549 [2024-07-14 10:44:53.442061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.549 qpair failed and we were unable to recover it. 00:36:08.549 [2024-07-14 10:44:53.442273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.549 [2024-07-14 10:44:53.442303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.549 qpair failed and we were unable to recover it. 00:36:08.549 [2024-07-14 10:44:53.442502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.549 [2024-07-14 10:44:53.442532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.549 qpair failed and we were unable to recover it. 00:36:08.549 [2024-07-14 10:44:53.442734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.549 [2024-07-14 10:44:53.442764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.549 qpair failed and we were unable to recover it. 00:36:08.549 [2024-07-14 10:44:53.442884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.549 [2024-07-14 10:44:53.442913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.549 qpair failed and we were unable to recover it. 00:36:08.549 [2024-07-14 10:44:53.443171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.549 [2024-07-14 10:44:53.443202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.549 qpair failed and we were unable to recover it. 00:36:08.549 [2024-07-14 10:44:53.443400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.549 [2024-07-14 10:44:53.443435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.549 qpair failed and we were unable to recover it. 00:36:08.549 [2024-07-14 10:44:53.443634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.549 [2024-07-14 10:44:53.443664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.549 qpair failed and we were unable to recover it. 
00:36:08.549 [2024-07-14 10:44:53.443858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.549 [2024-07-14 10:44:53.443888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.549 qpair failed and we were unable to recover it. 00:36:08.549 [2024-07-14 10:44:53.444080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.549 [2024-07-14 10:44:53.444109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.550 qpair failed and we were unable to recover it. 00:36:08.550 [2024-07-14 10:44:53.444315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.550 [2024-07-14 10:44:53.444347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.550 qpair failed and we were unable to recover it. 00:36:08.550 [2024-07-14 10:44:53.444528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.550 [2024-07-14 10:44:53.444557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.550 qpair failed and we were unable to recover it. 00:36:08.550 [2024-07-14 10:44:53.444688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.550 [2024-07-14 10:44:53.444718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.550 qpair failed and we were unable to recover it. 00:36:08.550 [2024-07-14 10:44:53.444843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.550 [2024-07-14 10:44:53.444872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.550 qpair failed and we were unable to recover it. 00:36:08.550 [2024-07-14 10:44:53.444990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.550 [2024-07-14 10:44:53.445020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.550 qpair failed and we were unable to recover it. 00:36:08.550 [2024-07-14 10:44:53.445134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.550 [2024-07-14 10:44:53.445163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.550 qpair failed and we were unable to recover it. 00:36:08.550 [2024-07-14 10:44:53.445269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.550 [2024-07-14 10:44:53.445299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.550 qpair failed and we were unable to recover it. 00:36:08.550 [2024-07-14 10:44:53.445438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.550 [2024-07-14 10:44:53.445468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.550 qpair failed and we were unable to recover it. 
00:36:08.550 [2024-07-14 10:44:53.445660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.550 [2024-07-14 10:44:53.445689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.550 qpair failed and we were unable to recover it. 00:36:08.550 [2024-07-14 10:44:53.445805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.550 [2024-07-14 10:44:53.445835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.550 qpair failed and we were unable to recover it. 00:36:08.550 [2024-07-14 10:44:53.445968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.550 [2024-07-14 10:44:53.445998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.550 qpair failed and we were unable to recover it. 00:36:08.550 [2024-07-14 10:44:53.446260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.550 [2024-07-14 10:44:53.446290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.550 qpair failed and we were unable to recover it. 00:36:08.550 [2024-07-14 10:44:53.446424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.550 [2024-07-14 10:44:53.446455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.550 qpair failed and we were unable to recover it. 00:36:08.550 [2024-07-14 10:44:53.446584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.550 [2024-07-14 10:44:53.446613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.550 qpair failed and we were unable to recover it. 00:36:08.550 [2024-07-14 10:44:53.446744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.550 [2024-07-14 10:44:53.446774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.550 qpair failed and we were unable to recover it. 00:36:08.550 [2024-07-14 10:44:53.447014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.550 [2024-07-14 10:44:53.447044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.551 qpair failed and we were unable to recover it. 00:36:08.551 [2024-07-14 10:44:53.447246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.551 [2024-07-14 10:44:53.447276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.551 qpair failed and we were unable to recover it. 00:36:08.551 [2024-07-14 10:44:53.447517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.551 [2024-07-14 10:44:53.447547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.551 qpair failed and we were unable to recover it. 
00:36:08.551 [2024-07-14 10:44:53.447727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.551 [2024-07-14 10:44:53.447756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.551 qpair failed and we were unable to recover it. 00:36:08.551 [2024-07-14 10:44:53.447945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.551 [2024-07-14 10:44:53.447976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.551 qpair failed and we were unable to recover it. 00:36:08.551 [2024-07-14 10:44:53.448246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.551 [2024-07-14 10:44:53.448277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.551 qpair failed and we were unable to recover it. 00:36:08.551 [2024-07-14 10:44:53.448465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.551 [2024-07-14 10:44:53.448495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.551 qpair failed and we were unable to recover it. 00:36:08.551 [2024-07-14 10:44:53.448716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.551 [2024-07-14 10:44:53.448746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.551 qpair failed and we were unable to recover it. 00:36:08.551 [2024-07-14 10:44:53.448990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.551 [2024-07-14 10:44:53.449024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.551 qpair failed and we were unable to recover it. 00:36:08.551 [2024-07-14 10:44:53.449140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.551 [2024-07-14 10:44:53.449170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.551 qpair failed and we were unable to recover it. 00:36:08.551 [2024-07-14 10:44:53.449298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.551 [2024-07-14 10:44:53.449329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.551 qpair failed and we were unable to recover it. 00:36:08.551 [2024-07-14 10:44:53.449570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.551 [2024-07-14 10:44:53.449600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.551 qpair failed and we were unable to recover it. 00:36:08.551 [2024-07-14 10:44:53.449774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.551 [2024-07-14 10:44:53.449803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.551 qpair failed and we were unable to recover it. 
00:36:08.551 [2024-07-14 10:44:53.449978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.551 [2024-07-14 10:44:53.450008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.551 qpair failed and we were unable to recover it. 00:36:08.551 [2024-07-14 10:44:53.450317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.551 [2024-07-14 10:44:53.450349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.551 qpair failed and we were unable to recover it. 00:36:08.551 [2024-07-14 10:44:53.450633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.551 [2024-07-14 10:44:53.450662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.551 qpair failed and we were unable to recover it. 00:36:08.551 [2024-07-14 10:44:53.450866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.551 [2024-07-14 10:44:53.450896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.551 qpair failed and we were unable to recover it. 00:36:08.551 [2024-07-14 10:44:53.451156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.551 [2024-07-14 10:44:53.451185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.551 qpair failed and we were unable to recover it. 00:36:08.551 [2024-07-14 10:44:53.451370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.551 [2024-07-14 10:44:53.451400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.551 qpair failed and we were unable to recover it. 00:36:08.551 [2024-07-14 10:44:53.451597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.551 [2024-07-14 10:44:53.451627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.551 qpair failed and we were unable to recover it. 00:36:08.551 [2024-07-14 10:44:53.451859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.551 [2024-07-14 10:44:53.451889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.551 qpair failed and we were unable to recover it. 00:36:08.551 [2024-07-14 10:44:53.452084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.551 [2024-07-14 10:44:53.452119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.551 qpair failed and we were unable to recover it. 00:36:08.551 [2024-07-14 10:44:53.452373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.551 [2024-07-14 10:44:53.452403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.551 qpair failed and we were unable to recover it. 
00:36:08.551 [2024-07-14 10:44:53.452578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.551 [2024-07-14 10:44:53.452607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.551 qpair failed and we were unable to recover it. 00:36:08.551 [2024-07-14 10:44:53.452796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.551 [2024-07-14 10:44:53.452826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.551 qpair failed and we were unable to recover it. 00:36:08.551 [2024-07-14 10:44:53.453016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.551 [2024-07-14 10:44:53.453046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.551 qpair failed and we were unable to recover it. 00:36:08.551 [2024-07-14 10:44:53.453247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.551 [2024-07-14 10:44:53.453278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.551 qpair failed and we were unable to recover it. 00:36:08.551 [2024-07-14 10:44:53.453402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.551 [2024-07-14 10:44:53.453431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.551 qpair failed and we were unable to recover it. 00:36:08.551 [2024-07-14 10:44:53.453663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.551 [2024-07-14 10:44:53.453693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.551 qpair failed and we were unable to recover it. 00:36:08.551 [2024-07-14 10:44:53.453853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.551 [2024-07-14 10:44:53.453882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.551 qpair failed and we were unable to recover it. 00:36:08.551 [2024-07-14 10:44:53.454016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.551 [2024-07-14 10:44:53.454045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.551 qpair failed and we were unable to recover it. 00:36:08.551 [2024-07-14 10:44:53.454335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.551 [2024-07-14 10:44:53.454366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.551 qpair failed and we were unable to recover it. 00:36:08.551 [2024-07-14 10:44:53.454551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.551 [2024-07-14 10:44:53.454580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.551 qpair failed and we were unable to recover it. 
00:36:08.551 [2024-07-14 10:44:53.454723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.551 [2024-07-14 10:44:53.454752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.551 qpair failed and we were unable to recover it. 00:36:08.551 [2024-07-14 10:44:53.454881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.551 [2024-07-14 10:44:53.454911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.551 qpair failed and we were unable to recover it. 00:36:08.551 [2024-07-14 10:44:53.455102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.551 [2024-07-14 10:44:53.455132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.551 qpair failed and we were unable to recover it. 00:36:08.551 [2024-07-14 10:44:53.455278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.551 [2024-07-14 10:44:53.455308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.551 qpair failed and we were unable to recover it. 00:36:08.551 [2024-07-14 10:44:53.455580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.551 [2024-07-14 10:44:53.455609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.551 qpair failed and we were unable to recover it. 00:36:08.551 [2024-07-14 10:44:53.455732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.551 [2024-07-14 10:44:53.455761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.551 qpair failed and we were unable to recover it. 00:36:08.551 [2024-07-14 10:44:53.455965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.551 [2024-07-14 10:44:53.455995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.551 qpair failed and we were unable to recover it. 00:36:08.551 [2024-07-14 10:44:53.456176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.551 [2024-07-14 10:44:53.456207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.551 qpair failed and we were unable to recover it. 00:36:08.551 [2024-07-14 10:44:53.456457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.552 [2024-07-14 10:44:53.456487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.552 qpair failed and we were unable to recover it. 00:36:08.552 [2024-07-14 10:44:53.456676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.552 [2024-07-14 10:44:53.456706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.552 qpair failed and we were unable to recover it. 
00:36:08.552 [2024-07-14 10:44:53.456831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.552 [2024-07-14 10:44:53.456860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.552 qpair failed and we were unable to recover it. 00:36:08.552 [2024-07-14 10:44:53.456991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.552 [2024-07-14 10:44:53.457020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.552 qpair failed and we were unable to recover it. 00:36:08.552 [2024-07-14 10:44:53.457137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.552 [2024-07-14 10:44:53.457166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.552 qpair failed and we were unable to recover it. 00:36:08.552 [2024-07-14 10:44:53.457375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.552 [2024-07-14 10:44:53.457406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.552 qpair failed and we were unable to recover it. 00:36:08.552 [2024-07-14 10:44:53.457697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.552 [2024-07-14 10:44:53.457727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.552 qpair failed and we were unable to recover it. 00:36:08.552 [2024-07-14 10:44:53.457961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.552 [2024-07-14 10:44:53.458004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.552 qpair failed and we were unable to recover it. 00:36:08.552 [2024-07-14 10:44:53.458241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.552 [2024-07-14 10:44:53.458273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.552 qpair failed and we were unable to recover it. 00:36:08.552 [2024-07-14 10:44:53.458391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.552 [2024-07-14 10:44:53.458421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.552 qpair failed and we were unable to recover it. 00:36:08.552 [2024-07-14 10:44:53.458648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.552 [2024-07-14 10:44:53.458677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.552 qpair failed and we were unable to recover it. 00:36:08.552 [2024-07-14 10:44:53.458896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.552 [2024-07-14 10:44:53.458925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.552 qpair failed and we were unable to recover it. 
00:36:08.552 [2024-07-14 10:44:53.459171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.552 [2024-07-14 10:44:53.459201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.552 qpair failed and we were unable to recover it. 00:36:08.552 [2024-07-14 10:44:53.459322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.552 [2024-07-14 10:44:53.459352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.552 qpair failed and we were unable to recover it. 00:36:08.552 [2024-07-14 10:44:53.459531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.552 [2024-07-14 10:44:53.459561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.552 qpair failed and we were unable to recover it. 00:36:08.552 [2024-07-14 10:44:53.459755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.552 [2024-07-14 10:44:53.459785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.552 qpair failed and we were unable to recover it. 00:36:08.552 [2024-07-14 10:44:53.459976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.552 [2024-07-14 10:44:53.460005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.552 qpair failed and we were unable to recover it. 00:36:08.552 [2024-07-14 10:44:53.460186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.552 [2024-07-14 10:44:53.460216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.552 qpair failed and we were unable to recover it. 00:36:08.552 [2024-07-14 10:44:53.460424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.552 [2024-07-14 10:44:53.460454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.552 qpair failed and we were unable to recover it. 00:36:08.552 [2024-07-14 10:44:53.460631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.552 [2024-07-14 10:44:53.460660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.552 qpair failed and we were unable to recover it. 00:36:08.552 [2024-07-14 10:44:53.460850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.552 [2024-07-14 10:44:53.460880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.552 qpair failed and we were unable to recover it. 00:36:08.552 [2024-07-14 10:44:53.461066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.552 [2024-07-14 10:44:53.461096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.552 qpair failed and we were unable to recover it. 
00:36:08.552 [2024-07-14 10:44:53.461337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.552 [2024-07-14 10:44:53.461368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.552 qpair failed and we were unable to recover it. 00:36:08.552 [2024-07-14 10:44:53.461554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.552 [2024-07-14 10:44:53.461584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.552 qpair failed and we were unable to recover it. 00:36:08.552 [2024-07-14 10:44:53.461846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.552 [2024-07-14 10:44:53.461875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.552 qpair failed and we were unable to recover it. 00:36:08.552 [2024-07-14 10:44:53.462117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.552 [2024-07-14 10:44:53.462147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.552 qpair failed and we were unable to recover it. 00:36:08.552 [2024-07-14 10:44:53.462279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.552 [2024-07-14 10:44:53.462310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.552 qpair failed and we were unable to recover it. 00:36:08.552 [2024-07-14 10:44:53.462582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.552 [2024-07-14 10:44:53.462612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.552 qpair failed and we were unable to recover it. 00:36:08.552 [2024-07-14 10:44:53.462884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.552 [2024-07-14 10:44:53.462913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.552 qpair failed and we were unable to recover it. 00:36:08.552 [2024-07-14 10:44:53.463108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.552 [2024-07-14 10:44:53.463137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.552 qpair failed and we were unable to recover it. 00:36:08.552 [2024-07-14 10:44:53.463330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.552 [2024-07-14 10:44:53.463360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.552 qpair failed and we were unable to recover it. 00:36:08.552 [2024-07-14 10:44:53.463555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.552 [2024-07-14 10:44:53.463584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.552 qpair failed and we were unable to recover it. 
00:36:08.552 [2024-07-14 10:44:53.463837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.552 [2024-07-14 10:44:53.463867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.552 qpair failed and we were unable to recover it. 00:36:08.552 [2024-07-14 10:44:53.464135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.552 [2024-07-14 10:44:53.464167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.552 qpair failed and we were unable to recover it. 00:36:08.552 [2024-07-14 10:44:53.464299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.552 [2024-07-14 10:44:53.464336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.552 qpair failed and we were unable to recover it. 00:36:08.552 [2024-07-14 10:44:53.464532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.552 [2024-07-14 10:44:53.464561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.552 qpair failed and we were unable to recover it. 00:36:08.552 [2024-07-14 10:44:53.464688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.552 [2024-07-14 10:44:53.464718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.552 qpair failed and we were unable to recover it. 00:36:08.552 [2024-07-14 10:44:53.464923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.552 [2024-07-14 10:44:53.464954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.552 qpair failed and we were unable to recover it. 00:36:08.552 [2024-07-14 10:44:53.465214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.552 [2024-07-14 10:44:53.465255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.552 qpair failed and we were unable to recover it. 00:36:08.552 [2024-07-14 10:44:53.465475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.552 [2024-07-14 10:44:53.465505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.552 qpair failed and we were unable to recover it. 00:36:08.552 [2024-07-14 10:44:53.465683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.552 [2024-07-14 10:44:53.465712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.552 qpair failed and we were unable to recover it. 00:36:08.552 [2024-07-14 10:44:53.465925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.553 [2024-07-14 10:44:53.465955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.553 qpair failed and we were unable to recover it. 
00:36:08.553 [2024-07-14 10:44:53.466076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.553 [2024-07-14 10:44:53.466107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.553 qpair failed and we were unable to recover it. 00:36:08.553 [2024-07-14 10:44:53.466316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.553 [2024-07-14 10:44:53.466347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.553 qpair failed and we were unable to recover it. 00:36:08.553 [2024-07-14 10:44:53.466526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.553 [2024-07-14 10:44:53.466557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.553 qpair failed and we were unable to recover it. 00:36:08.553 [2024-07-14 10:44:53.466747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.553 [2024-07-14 10:44:53.466776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.553 qpair failed and we were unable to recover it. 00:36:08.553 [2024-07-14 10:44:53.466979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.553 [2024-07-14 10:44:53.467009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.553 qpair failed and we were unable to recover it. 00:36:08.553 [2024-07-14 10:44:53.467145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.553 [2024-07-14 10:44:53.467175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.553 qpair failed and we were unable to recover it. 00:36:08.553 [2024-07-14 10:44:53.467372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.553 [2024-07-14 10:44:53.467404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.553 qpair failed and we were unable to recover it. 00:36:08.553 [2024-07-14 10:44:53.467597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.553 [2024-07-14 10:44:53.467627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.553 qpair failed and we were unable to recover it. 00:36:08.553 [2024-07-14 10:44:53.467907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.553 [2024-07-14 10:44:53.467936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.553 qpair failed and we were unable to recover it. 00:36:08.553 [2024-07-14 10:44:53.468057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.553 [2024-07-14 10:44:53.468087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.553 qpair failed and we were unable to recover it. 
00:36:08.553 [2024-07-14 10:44:53.468291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.553 [2024-07-14 10:44:53.468322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.553 qpair failed and we were unable to recover it. 00:36:08.553 [2024-07-14 10:44:53.468516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.553 [2024-07-14 10:44:53.468547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.553 qpair failed and we were unable to recover it. 00:36:08.553 [2024-07-14 10:44:53.468755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.553 [2024-07-14 10:44:53.468784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.553 qpair failed and we were unable to recover it. 00:36:08.553 [2024-07-14 10:44:53.468921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.553 [2024-07-14 10:44:53.468951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.553 qpair failed and we were unable to recover it. 00:36:08.553 [2024-07-14 10:44:53.469066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.553 [2024-07-14 10:44:53.469096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.553 qpair failed and we were unable to recover it. 00:36:08.553 [2024-07-14 10:44:53.469263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.553 [2024-07-14 10:44:53.469294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.553 qpair failed and we were unable to recover it. 00:36:08.553 [2024-07-14 10:44:53.469561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.553 [2024-07-14 10:44:53.469591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.553 qpair failed and we were unable to recover it. 00:36:08.553 [2024-07-14 10:44:53.469714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.553 [2024-07-14 10:44:53.469744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.553 qpair failed and we were unable to recover it. 00:36:08.553 [2024-07-14 10:44:53.470000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.553 [2024-07-14 10:44:53.470029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.553 qpair failed and we were unable to recover it. 00:36:08.553 [2024-07-14 10:44:53.470221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.553 [2024-07-14 10:44:53.470269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.553 qpair failed and we were unable to recover it. 
00:36:08.553 [2024-07-14 10:44:53.470402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.553 [2024-07-14 10:44:53.470432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.553 qpair failed and we were unable to recover it. 00:36:08.553 [2024-07-14 10:44:53.470615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.553 [2024-07-14 10:44:53.470645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.553 qpair failed and we were unable to recover it. 00:36:08.553 [2024-07-14 10:44:53.470921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.553 [2024-07-14 10:44:53.470951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.553 qpair failed and we were unable to recover it. 00:36:08.553 [2024-07-14 10:44:53.471199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.553 [2024-07-14 10:44:53.471238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.553 qpair failed and we were unable to recover it. 00:36:08.553 [2024-07-14 10:44:53.471417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.553 [2024-07-14 10:44:53.471446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.553 qpair failed and we were unable to recover it. 00:36:08.553 [2024-07-14 10:44:53.471662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.553 [2024-07-14 10:44:53.471691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.553 qpair failed and we were unable to recover it. 00:36:08.553 [2024-07-14 10:44:53.471962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.553 [2024-07-14 10:44:53.471992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.553 qpair failed and we were unable to recover it. 00:36:08.553 [2024-07-14 10:44:53.472203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.553 [2024-07-14 10:44:53.472242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.553 qpair failed and we were unable to recover it. 00:36:08.553 [2024-07-14 10:44:53.472486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.553 [2024-07-14 10:44:53.472516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.553 qpair failed and we were unable to recover it. 00:36:08.553 [2024-07-14 10:44:53.472760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.553 [2024-07-14 10:44:53.472790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.553 qpair failed and we were unable to recover it. 
00:36:08.553 [2024-07-14 10:44:53.472930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.553 [2024-07-14 10:44:53.472960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.553 qpair failed and we were unable to recover it. 00:36:08.553 [2024-07-14 10:44:53.473151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.553 [2024-07-14 10:44:53.473181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.553 qpair failed and we were unable to recover it. 00:36:08.553 [2024-07-14 10:44:53.473399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.553 [2024-07-14 10:44:53.473431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.553 qpair failed and we were unable to recover it. 00:36:08.553 [2024-07-14 10:44:53.473679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.553 [2024-07-14 10:44:53.473709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.553 qpair failed and we were unable to recover it. 00:36:08.553 [2024-07-14 10:44:53.473951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.553 [2024-07-14 10:44:53.473981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.553 qpair failed and we were unable to recover it. 00:36:08.553 [2024-07-14 10:44:53.474256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.553 [2024-07-14 10:44:53.474288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.553 qpair failed and we were unable to recover it. 00:36:08.553 [2024-07-14 10:44:53.474483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.553 [2024-07-14 10:44:53.474513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.553 qpair failed and we were unable to recover it. 00:36:08.553 [2024-07-14 10:44:53.474667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.553 [2024-07-14 10:44:53.474697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.553 qpair failed and we were unable to recover it. 00:36:08.553 [2024-07-14 10:44:53.474967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.553 [2024-07-14 10:44:53.474997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.553 qpair failed and we were unable to recover it. 00:36:08.553 [2024-07-14 10:44:53.475128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.553 [2024-07-14 10:44:53.475158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.553 qpair failed and we were unable to recover it. 
00:36:08.553 [2024-07-14 10:44:53.475399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.554 [2024-07-14 10:44:53.475430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.554 qpair failed and we were unable to recover it. 00:36:08.554 [2024-07-14 10:44:53.475631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.554 [2024-07-14 10:44:53.475662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.554 qpair failed and we were unable to recover it. 00:36:08.554 [2024-07-14 10:44:53.475909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.554 [2024-07-14 10:44:53.475948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.554 qpair failed and we were unable to recover it. 00:36:08.554 [2024-07-14 10:44:53.476161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.554 [2024-07-14 10:44:53.476191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.554 qpair failed and we were unable to recover it. 00:36:08.554 [2024-07-14 10:44:53.476392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.554 [2024-07-14 10:44:53.476423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.554 qpair failed and we were unable to recover it. 00:36:08.554 [2024-07-14 10:44:53.476644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.554 [2024-07-14 10:44:53.476675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.554 qpair failed and we were unable to recover it. 00:36:08.554 [2024-07-14 10:44:53.476811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.554 [2024-07-14 10:44:53.476841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.554 qpair failed and we were unable to recover it. 00:36:08.554 [2024-07-14 10:44:53.477030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.554 [2024-07-14 10:44:53.477060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.554 qpair failed and we were unable to recover it. 00:36:08.554 [2024-07-14 10:44:53.477356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.554 [2024-07-14 10:44:53.477387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.554 qpair failed and we were unable to recover it. 00:36:08.554 [2024-07-14 10:44:53.477567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.554 [2024-07-14 10:44:53.477597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.554 qpair failed and we were unable to recover it. 
00:36:08.554 [2024-07-14 10:44:53.477744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.554 [2024-07-14 10:44:53.477775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.554 qpair failed and we were unable to recover it. 00:36:08.554 [2024-07-14 10:44:53.478016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.554 [2024-07-14 10:44:53.478045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.554 qpair failed and we were unable to recover it. 00:36:08.554 [2024-07-14 10:44:53.478241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.554 [2024-07-14 10:44:53.478272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.554 qpair failed and we were unable to recover it. 00:36:08.554 [2024-07-14 10:44:53.478451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.554 [2024-07-14 10:44:53.478482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.554 qpair failed and we were unable to recover it. 00:36:08.554 [2024-07-14 10:44:53.478603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.554 [2024-07-14 10:44:53.478633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.554 qpair failed and we were unable to recover it. 00:36:08.554 [2024-07-14 10:44:53.478853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.554 [2024-07-14 10:44:53.478883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.554 qpair failed and we were unable to recover it. 00:36:08.554 [2024-07-14 10:44:53.479155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.554 [2024-07-14 10:44:53.479185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.554 qpair failed and we were unable to recover it. 00:36:08.554 [2024-07-14 10:44:53.479308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.554 [2024-07-14 10:44:53.479339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.554 qpair failed and we were unable to recover it. 00:36:08.554 [2024-07-14 10:44:53.479466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.554 [2024-07-14 10:44:53.479496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.554 qpair failed and we were unable to recover it. 00:36:08.554 [2024-07-14 10:44:53.479707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.554 [2024-07-14 10:44:53.479737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.554 qpair failed and we were unable to recover it. 
00:36:08.554 [2024-07-14 10:44:53.479880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.554 [2024-07-14 10:44:53.479922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.554 qpair failed and we were unable to recover it. 00:36:08.554 [2024-07-14 10:44:53.480136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.554 [2024-07-14 10:44:53.480169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.554 qpair failed and we were unable to recover it. 00:36:08.554 [2024-07-14 10:44:53.480352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.554 [2024-07-14 10:44:53.480383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.554 qpair failed and we were unable to recover it. 00:36:08.554 [2024-07-14 10:44:53.480514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.554 [2024-07-14 10:44:53.480543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.554 qpair failed and we were unable to recover it. 00:36:08.554 [2024-07-14 10:44:53.480721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.554 [2024-07-14 10:44:53.480750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.554 qpair failed and we were unable to recover it. 00:36:08.554 [2024-07-14 10:44:53.481022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.554 [2024-07-14 10:44:53.481052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.554 qpair failed and we were unable to recover it. 00:36:08.554 [2024-07-14 10:44:53.481175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.554 [2024-07-14 10:44:53.481203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.554 qpair failed and we were unable to recover it. 00:36:08.554 [2024-07-14 10:44:53.481332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.554 [2024-07-14 10:44:53.481362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.554 qpair failed and we were unable to recover it. 00:36:08.554 [2024-07-14 10:44:53.481630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.554 [2024-07-14 10:44:53.481659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.554 qpair failed and we were unable to recover it. 00:36:08.554 [2024-07-14 10:44:53.481927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.554 [2024-07-14 10:44:53.481956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.554 qpair failed and we were unable to recover it. 
00:36:08.554 [2024-07-14 10:44:53.482133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.554 [2024-07-14 10:44:53.482162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.554 qpair failed and we were unable to recover it. 00:36:08.554 [2024-07-14 10:44:53.482422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.554 [2024-07-14 10:44:53.482452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.554 qpair failed and we were unable to recover it. 00:36:08.554 [2024-07-14 10:44:53.482646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.554 [2024-07-14 10:44:53.482675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.554 qpair failed and we were unable to recover it. 00:36:08.554 [2024-07-14 10:44:53.482862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.554 [2024-07-14 10:44:53.482897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.554 qpair failed and we were unable to recover it. 00:36:08.554 [2024-07-14 10:44:53.483041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.554 [2024-07-14 10:44:53.483071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.554 qpair failed and we were unable to recover it. 00:36:08.554 [2024-07-14 10:44:53.483244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.554 [2024-07-14 10:44:53.483275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.554 qpair failed and we were unable to recover it. 00:36:08.554 [2024-07-14 10:44:53.483373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.554 [2024-07-14 10:44:53.483402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.554 qpair failed and we were unable to recover it. 00:36:08.554 [2024-07-14 10:44:53.483672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.554 [2024-07-14 10:44:53.483702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.554 qpair failed and we were unable to recover it. 00:36:08.554 [2024-07-14 10:44:53.483890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.554 [2024-07-14 10:44:53.483919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.554 qpair failed and we were unable to recover it. 00:36:08.554 [2024-07-14 10:44:53.484045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.554 [2024-07-14 10:44:53.484075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.554 qpair failed and we were unable to recover it. 
00:36:08.554 [2024-07-14 10:44:53.484271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.555 [2024-07-14 10:44:53.484302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.555 qpair failed and we were unable to recover it. 00:36:08.555 [2024-07-14 10:44:53.484572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.555 [2024-07-14 10:44:53.484602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.555 qpair failed and we were unable to recover it. 00:36:08.555 [2024-07-14 10:44:53.484822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.555 [2024-07-14 10:44:53.484853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.555 qpair failed and we were unable to recover it. 00:36:08.555 [2024-07-14 10:44:53.485026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.555 [2024-07-14 10:44:53.485055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.555 qpair failed and we were unable to recover it. 00:36:08.555 [2024-07-14 10:44:53.485233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.555 [2024-07-14 10:44:53.485264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.555 qpair failed and we were unable to recover it. 00:36:08.555 [2024-07-14 10:44:53.485395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.555 [2024-07-14 10:44:53.485425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.555 qpair failed and we were unable to recover it. 00:36:08.555 [2024-07-14 10:44:53.485595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.555 [2024-07-14 10:44:53.485624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.555 qpair failed and we were unable to recover it. 00:36:08.555 [2024-07-14 10:44:53.485728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.555 [2024-07-14 10:44:53.485758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.555 qpair failed and we were unable to recover it. 00:36:08.555 [2024-07-14 10:44:53.485942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.555 [2024-07-14 10:44:53.485972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.555 qpair failed and we were unable to recover it. 00:36:08.555 [2024-07-14 10:44:53.486162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.555 [2024-07-14 10:44:53.486192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.555 qpair failed and we were unable to recover it. 
00:36:08.555 [2024-07-14 10:44:53.486400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.555 [2024-07-14 10:44:53.486431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.555 qpair failed and we were unable to recover it. 00:36:08.555 [2024-07-14 10:44:53.486560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.555 [2024-07-14 10:44:53.486589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.555 qpair failed and we were unable to recover it. 00:36:08.555 [2024-07-14 10:44:53.486828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.555 [2024-07-14 10:44:53.486857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.555 qpair failed and we were unable to recover it. 00:36:08.555 [2024-07-14 10:44:53.487118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.555 [2024-07-14 10:44:53.487148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.555 qpair failed and we were unable to recover it. 00:36:08.555 [2024-07-14 10:44:53.487409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.555 [2024-07-14 10:44:53.487440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.555 qpair failed and we were unable to recover it. 00:36:08.555 [2024-07-14 10:44:53.487720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.555 [2024-07-14 10:44:53.487749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.555 qpair failed and we were unable to recover it. 00:36:08.555 [2024-07-14 10:44:53.487887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.555 [2024-07-14 10:44:53.487916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.555 qpair failed and we were unable to recover it. 00:36:08.555 [2024-07-14 10:44:53.488047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.555 [2024-07-14 10:44:53.488076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.555 qpair failed and we were unable to recover it. 00:36:08.555 [2024-07-14 10:44:53.488258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.555 [2024-07-14 10:44:53.488288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.555 qpair failed and we were unable to recover it. 00:36:08.555 [2024-07-14 10:44:53.488432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.555 [2024-07-14 10:44:53.488463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.555 qpair failed and we were unable to recover it. 
00:36:08.555 [2024-07-14 10:44:53.488659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.555 [2024-07-14 10:44:53.488696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.555 qpair failed and we were unable to recover it. 00:36:08.555 [2024-07-14 10:44:53.488901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.555 [2024-07-14 10:44:53.488931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.555 qpair failed and we were unable to recover it. 00:36:08.555 [2024-07-14 10:44:53.489075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.555 [2024-07-14 10:44:53.489105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.555 qpair failed and we were unable to recover it. 00:36:08.555 [2024-07-14 10:44:53.489240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.555 [2024-07-14 10:44:53.489271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.555 qpair failed and we were unable to recover it. 00:36:08.555 [2024-07-14 10:44:53.489411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.555 [2024-07-14 10:44:53.489441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.555 qpair failed and we were unable to recover it. 00:36:08.555 [2024-07-14 10:44:53.489576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.555 [2024-07-14 10:44:53.489606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.555 qpair failed and we were unable to recover it. 00:36:08.555 [2024-07-14 10:44:53.489874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.555 [2024-07-14 10:44:53.489904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.555 qpair failed and we were unable to recover it. 00:36:08.555 [2024-07-14 10:44:53.490129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.555 [2024-07-14 10:44:53.490159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.555 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.490299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.490330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.490518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.490549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 
00:36:08.835 [2024-07-14 10:44:53.490687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.490717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.491007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.491037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.491166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.491196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.491393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.491431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.491674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.491704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.491810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.491840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.492034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.492064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.492249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.492280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.492523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.492553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.492685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.492715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 
00:36:08.835 [2024-07-14 10:44:53.492910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.492941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.493050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.493080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.493326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.493357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.493496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.493526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.493636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.493666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.493795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.493825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.494067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.494097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.494324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.494355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.494565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.494595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.494784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.494814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 
00:36:08.835 [2024-07-14 10:44:53.495005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.495035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.495176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.495207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.495339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.495369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.495629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.495659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.495854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.495884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.496069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.496100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.496309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.496342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.496595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.496625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.496757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.496787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.496964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.496993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 
00:36:08.835 [2024-07-14 10:44:53.497112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.497142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.497282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.497313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.497553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.497583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.497765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.497794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.497988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.498018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.498142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.498172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.498438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.498469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.498566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.498596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.498842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.498871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.499079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.499108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 
00:36:08.835 [2024-07-14 10:44:53.499322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.499354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.499595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.499625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.499806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.499836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.500040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.500075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.500263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.500295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.500432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.500462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.500704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.500734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.500926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.500956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.501221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.501260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.501497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.501527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 
00:36:08.835 [2024-07-14 10:44:53.501703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.501732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.502001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.502031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.502249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.502279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.502420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.502450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.502572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.502602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.502772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.502802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.502941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.502971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.503259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.503291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.503534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.503564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.503697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.503727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 
00:36:08.835 [2024-07-14 10:44:53.503906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.503936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.504178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.504208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.504415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.504445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.504627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.504656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.504790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.504820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.504945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.504975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.505099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.505130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.505270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.505301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.505478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.505508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.505780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.505811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 
00:36:08.835 [2024-07-14 10:44:53.505939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.505980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.506170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.506200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.506460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.506519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.506834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.506870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.507119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.507149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.507279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.507311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.507554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.507584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.507789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.507818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.508032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.508061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.508242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.508273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 
00:36:08.835 [2024-07-14 10:44:53.508521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.508551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.508736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.508766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.508906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.508936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.509116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.509146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.509346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.509377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.509562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.509592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.509705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.509735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.510001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.510031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.510321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.510352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 00:36:08.835 [2024-07-14 10:44:53.510473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.510502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.835 qpair failed and we were unable to recover it. 
00:36:08.835 [2024-07-14 10:44:53.510696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.835 [2024-07-14 10:44:53.510725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.836 qpair failed and we were unable to recover it. 00:36:08.836 [2024-07-14 10:44:53.510965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.836 [2024-07-14 10:44:53.510995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.836 qpair failed and we were unable to recover it. 00:36:08.836 [2024-07-14 10:44:53.511118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.836 [2024-07-14 10:44:53.511148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.836 qpair failed and we were unable to recover it. 00:36:08.836 [2024-07-14 10:44:53.511332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.836 [2024-07-14 10:44:53.511363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.836 qpair failed and we were unable to recover it. 00:36:08.836 [2024-07-14 10:44:53.511607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.836 [2024-07-14 10:44:53.511637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.836 qpair failed and we were unable to recover it. 00:36:08.836 [2024-07-14 10:44:53.511853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.836 [2024-07-14 10:44:53.511883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.836 qpair failed and we were unable to recover it. 00:36:08.836 [2024-07-14 10:44:53.512146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.836 [2024-07-14 10:44:53.512176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.836 qpair failed and we were unable to recover it. 00:36:08.836 [2024-07-14 10:44:53.512452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.836 [2024-07-14 10:44:53.512488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.836 qpair failed and we were unable to recover it. 00:36:08.836 [2024-07-14 10:44:53.512680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.836 [2024-07-14 10:44:53.512711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.836 qpair failed and we were unable to recover it. 00:36:08.836 [2024-07-14 10:44:53.512839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.836 [2024-07-14 10:44:53.512869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.836 qpair failed and we were unable to recover it. 
00:36:08.836 [2024-07-14 10:44:53.513057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.836 [2024-07-14 10:44:53.513087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.836 qpair failed and we were unable to recover it. 00:36:08.836 [2024-07-14 10:44:53.513234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.836 [2024-07-14 10:44:53.513265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.836 qpair failed and we were unable to recover it. 00:36:08.836 [2024-07-14 10:44:53.513373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.836 [2024-07-14 10:44:53.513402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.836 qpair failed and we were unable to recover it. 00:36:08.836 [2024-07-14 10:44:53.513692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.836 [2024-07-14 10:44:53.513723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.836 qpair failed and we were unable to recover it. 00:36:08.836 [2024-07-14 10:44:53.513919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.836 [2024-07-14 10:44:53.513949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.836 qpair failed and we were unable to recover it. 00:36:08.836 [2024-07-14 10:44:53.514143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.836 [2024-07-14 10:44:53.514173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.836 qpair failed and we were unable to recover it. 00:36:08.836 [2024-07-14 10:44:53.514387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.836 [2024-07-14 10:44:53.514418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.836 qpair failed and we were unable to recover it. 00:36:08.836 [2024-07-14 10:44:53.514635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.836 [2024-07-14 10:44:53.514665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.836 qpair failed and we were unable to recover it. 00:36:08.836 [2024-07-14 10:44:53.514778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.836 [2024-07-14 10:44:53.514808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.836 qpair failed and we were unable to recover it. 00:36:08.836 [2024-07-14 10:44:53.514981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.836 [2024-07-14 10:44:53.515011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.836 qpair failed and we were unable to recover it. 
00:36:08.836 [2024-07-14 10:44:53.515184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.836 [2024-07-14 10:44:53.515214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.836 qpair failed and we were unable to recover it. 00:36:08.836 [2024-07-14 10:44:53.515416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.836 [2024-07-14 10:44:53.515447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.836 qpair failed and we were unable to recover it. 00:36:08.836 [2024-07-14 10:44:53.515651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.836 [2024-07-14 10:44:53.515681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.836 qpair failed and we were unable to recover it. 00:36:08.836 [2024-07-14 10:44:53.515853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.836 [2024-07-14 10:44:53.515883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.836 qpair failed and we were unable to recover it. 00:36:08.836 [2024-07-14 10:44:53.516076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.836 [2024-07-14 10:44:53.516106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.836 qpair failed and we were unable to recover it. 00:36:08.836 [2024-07-14 10:44:53.516283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.836 [2024-07-14 10:44:53.516315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.836 qpair failed and we were unable to recover it. 00:36:08.836 [2024-07-14 10:44:53.516506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.836 [2024-07-14 10:44:53.516537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.836 qpair failed and we were unable to recover it. 00:36:08.836 [2024-07-14 10:44:53.516805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.836 [2024-07-14 10:44:53.516835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.836 qpair failed and we were unable to recover it. 00:36:08.836 [2024-07-14 10:44:53.516980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.836 [2024-07-14 10:44:53.517010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.836 qpair failed and we were unable to recover it. 00:36:08.836 [2024-07-14 10:44:53.517223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.836 [2024-07-14 10:44:53.517262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.836 qpair failed and we were unable to recover it. 
00:36:08.836 [2024-07-14 10:44:53.517522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.836 [2024-07-14 10:44:53.517552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.836 qpair failed and we were unable to recover it. 00:36:08.836 [2024-07-14 10:44:53.517686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.836 [2024-07-14 10:44:53.517717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.836 qpair failed and we were unable to recover it. 00:36:08.836 [2024-07-14 10:44:53.517909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.836 [2024-07-14 10:44:53.517939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.836 qpair failed and we were unable to recover it. 00:36:08.836 [2024-07-14 10:44:53.518115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.836 [2024-07-14 10:44:53.518145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.836 qpair failed and we were unable to recover it. 00:36:08.836 [2024-07-14 10:44:53.518354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.836 [2024-07-14 10:44:53.518390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.836 qpair failed and we were unable to recover it. 00:36:08.836 [2024-07-14 10:44:53.518582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.836 [2024-07-14 10:44:53.518612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.836 qpair failed and we were unable to recover it. 00:36:08.836 [2024-07-14 10:44:53.518729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.836 [2024-07-14 10:44:53.518759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.836 qpair failed and we were unable to recover it. 00:36:08.836 [2024-07-14 10:44:53.518873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.836 [2024-07-14 10:44:53.518904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.836 qpair failed and we were unable to recover it. 00:36:08.836 [2024-07-14 10:44:53.519087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.836 [2024-07-14 10:44:53.519117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.836 qpair failed and we were unable to recover it. 00:36:08.836 [2024-07-14 10:44:53.519361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.836 [2024-07-14 10:44:53.519392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.836 qpair failed and we were unable to recover it. 
00:36:08.836 [2024-07-14 10:44:53.519657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.836 [2024-07-14 10:44:53.519687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.836 qpair failed and we were unable to recover it. 00:36:08.836 [2024-07-14 10:44:53.519935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.836 [2024-07-14 10:44:53.519966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.836 qpair failed and we were unable to recover it. 00:36:08.836 [2024-07-14 10:44:53.520143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.836 [2024-07-14 10:44:53.520173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.836 qpair failed and we were unable to recover it. 00:36:08.836 [2024-07-14 10:44:53.520311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.836 [2024-07-14 10:44:53.520342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.836 qpair failed and we were unable to recover it. 00:36:08.836 [2024-07-14 10:44:53.520479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.836 [2024-07-14 10:44:53.520509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.836 qpair failed and we were unable to recover it. 00:36:08.836 [2024-07-14 10:44:53.520750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.836 [2024-07-14 10:44:53.520780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.836 qpair failed and we were unable to recover it. 00:36:08.836 [2024-07-14 10:44:53.520992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.836 [2024-07-14 10:44:53.521022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.836 qpair failed and we were unable to recover it. 00:36:08.836 [2024-07-14 10:44:53.521245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.836 [2024-07-14 10:44:53.521276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.836 qpair failed and we were unable to recover it. 00:36:08.836 [2024-07-14 10:44:53.521407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.836 [2024-07-14 10:44:53.521438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.836 qpair failed and we were unable to recover it. 00:36:08.836 [2024-07-14 10:44:53.521688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.836 [2024-07-14 10:44:53.521718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.836 qpair failed and we were unable to recover it. 
00:36:08.836 [2024-07-14 10:44:53.521831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.836 [2024-07-14 10:44:53.521861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.836 qpair failed and we were unable to recover it. 00:36:08.836 [2024-07-14 10:44:53.522003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.836 [2024-07-14 10:44:53.522033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.836 qpair failed and we were unable to recover it. 00:36:08.836 [2024-07-14 10:44:53.522274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.836 [2024-07-14 10:44:53.522306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.836 qpair failed and we were unable to recover it. 00:36:08.836 [2024-07-14 10:44:53.522437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.836 [2024-07-14 10:44:53.522468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.836 qpair failed and we were unable to recover it. 00:36:08.836 [2024-07-14 10:44:53.522665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.836 [2024-07-14 10:44:53.522694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.836 qpair failed and we were unable to recover it. 00:36:08.836 [2024-07-14 10:44:53.522883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.836 [2024-07-14 10:44:53.522913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.836 qpair failed and we were unable to recover it. 00:36:08.836 [2024-07-14 10:44:53.523094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.836 [2024-07-14 10:44:53.523123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.836 qpair failed and we were unable to recover it. 00:36:08.836 [2024-07-14 10:44:53.523320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.836 [2024-07-14 10:44:53.523350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.836 qpair failed and we were unable to recover it. 00:36:08.837 [2024-07-14 10:44:53.523603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.837 [2024-07-14 10:44:53.523632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.837 qpair failed and we were unable to recover it. 00:36:08.837 [2024-07-14 10:44:53.523892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.837 [2024-07-14 10:44:53.523922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.837 qpair failed and we were unable to recover it. 
00:36:08.837 [2024-07-14 10:44:53.524128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.837 [2024-07-14 10:44:53.524158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.837 qpair failed and we were unable to recover it. 00:36:08.837 [2024-07-14 10:44:53.524351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.837 [2024-07-14 10:44:53.524382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.837 qpair failed and we were unable to recover it. 00:36:08.837 [2024-07-14 10:44:53.524628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.837 [2024-07-14 10:44:53.524658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.837 qpair failed and we were unable to recover it. 00:36:08.837 [2024-07-14 10:44:53.524833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.837 [2024-07-14 10:44:53.524863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.837 qpair failed and we were unable to recover it. 00:36:08.837 [2024-07-14 10:44:53.525133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.837 [2024-07-14 10:44:53.525162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.837 qpair failed and we were unable to recover it. 00:36:08.837 [2024-07-14 10:44:53.525274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.837 [2024-07-14 10:44:53.525305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.837 qpair failed and we were unable to recover it. 00:36:08.837 [2024-07-14 10:44:53.525499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.837 [2024-07-14 10:44:53.525529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.837 qpair failed and we were unable to recover it. 00:36:08.837 [2024-07-14 10:44:53.525767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.837 [2024-07-14 10:44:53.525797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.837 qpair failed and we were unable to recover it. 00:36:08.837 [2024-07-14 10:44:53.526056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.837 [2024-07-14 10:44:53.526087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.837 qpair failed and we were unable to recover it. 00:36:08.837 [2024-07-14 10:44:53.526332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.837 [2024-07-14 10:44:53.526363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.837 qpair failed and we were unable to recover it. 
00:36:08.837 [2024-07-14 10:44:53.526550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.837 [2024-07-14 10:44:53.526581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.837 qpair failed and we were unable to recover it. 00:36:08.837 [2024-07-14 10:44:53.526770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.837 [2024-07-14 10:44:53.526800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.837 qpair failed and we were unable to recover it. 00:36:08.837 [2024-07-14 10:44:53.527068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.837 [2024-07-14 10:44:53.527097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.837 qpair failed and we were unable to recover it. 00:36:08.837 [2024-07-14 10:44:53.527211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.837 [2024-07-14 10:44:53.527249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.837 qpair failed and we were unable to recover it. 00:36:08.837 [2024-07-14 10:44:53.527425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.837 [2024-07-14 10:44:53.527455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.837 qpair failed and we were unable to recover it. 00:36:08.837 [2024-07-14 10:44:53.527605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.837 [2024-07-14 10:44:53.527641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.837 qpair failed and we were unable to recover it. 00:36:08.837 [2024-07-14 10:44:53.527908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.837 [2024-07-14 10:44:53.527938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.837 qpair failed and we were unable to recover it. 00:36:08.837 [2024-07-14 10:44:53.528151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.837 [2024-07-14 10:44:53.528181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.837 qpair failed and we were unable to recover it. 00:36:08.837 [2024-07-14 10:44:53.528317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.837 [2024-07-14 10:44:53.528348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.837 qpair failed and we were unable to recover it. 00:36:08.837 [2024-07-14 10:44:53.528540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.837 [2024-07-14 10:44:53.528571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.837 qpair failed and we were unable to recover it. 
00:36:08.837 [2024-07-14 10:44:53.528716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.837 [2024-07-14 10:44:53.528746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.837 qpair failed and we were unable to recover it. 00:36:08.837 [2024-07-14 10:44:53.528944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.837 [2024-07-14 10:44:53.528974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.837 qpair failed and we were unable to recover it. 00:36:08.837 [2024-07-14 10:44:53.529158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.837 [2024-07-14 10:44:53.529188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.837 qpair failed and we were unable to recover it. 00:36:08.837 [2024-07-14 10:44:53.529378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.837 [2024-07-14 10:44:53.529408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.837 qpair failed and we were unable to recover it. 00:36:08.837 [2024-07-14 10:44:53.529547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.837 [2024-07-14 10:44:53.529576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.837 qpair failed and we were unable to recover it. 00:36:08.837 [2024-07-14 10:44:53.529762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.837 [2024-07-14 10:44:53.529792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.837 qpair failed and we were unable to recover it. 00:36:08.837 [2024-07-14 10:44:53.530071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.837 [2024-07-14 10:44:53.530100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.837 qpair failed and we were unable to recover it. 00:36:08.837 [2024-07-14 10:44:53.530214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.837 [2024-07-14 10:44:53.530254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.837 qpair failed and we were unable to recover it. 00:36:08.837 [2024-07-14 10:44:53.530506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.837 [2024-07-14 10:44:53.530542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.837 qpair failed and we were unable to recover it. 00:36:08.837 [2024-07-14 10:44:53.530788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.837 [2024-07-14 10:44:53.530817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.837 qpair failed and we were unable to recover it. 
00:36:08.837 [2024-07-14 10:44:53.530919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.837 [2024-07-14 10:44:53.530949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.837 qpair failed and we were unable to recover it. 00:36:08.837 [2024-07-14 10:44:53.531092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.837 [2024-07-14 10:44:53.531122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.837 qpair failed and we were unable to recover it. 00:36:08.838 [2024-07-14 10:44:53.531287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.838 [2024-07-14 10:44:53.531319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.838 qpair failed and we were unable to recover it. 00:36:08.838 [2024-07-14 10:44:53.531495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.838 [2024-07-14 10:44:53.531525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.838 qpair failed and we were unable to recover it. 00:36:08.838 [2024-07-14 10:44:53.531745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.838 [2024-07-14 10:44:53.531775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.838 qpair failed and we were unable to recover it. 00:36:08.838 [2024-07-14 10:44:53.531962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.838 [2024-07-14 10:44:53.531992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.838 qpair failed and we were unable to recover it. 00:36:08.838 [2024-07-14 10:44:53.532178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.838 [2024-07-14 10:44:53.532208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.838 qpair failed and we were unable to recover it. 00:36:08.838 [2024-07-14 10:44:53.532399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.838 [2024-07-14 10:44:53.532430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.838 qpair failed and we were unable to recover it. 00:36:08.838 [2024-07-14 10:44:53.532556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.838 [2024-07-14 10:44:53.532586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.838 qpair failed and we were unable to recover it. 00:36:08.838 [2024-07-14 10:44:53.532771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.838 [2024-07-14 10:44:53.532801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.838 qpair failed and we were unable to recover it. 
00:36:08.838 [2024-07-14 10:44:53.533054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.838 [2024-07-14 10:44:53.533084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.838 qpair failed and we were unable to recover it. 00:36:08.838 [2024-07-14 10:44:53.533261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.838 [2024-07-14 10:44:53.533291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.838 qpair failed and we were unable to recover it. 00:36:08.838 [2024-07-14 10:44:53.533484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.838 [2024-07-14 10:44:53.533513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.838 qpair failed and we were unable to recover it. 00:36:08.838 [2024-07-14 10:44:53.533729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.838 [2024-07-14 10:44:53.533759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.838 qpair failed and we were unable to recover it. 00:36:08.838 [2024-07-14 10:44:53.533882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.838 [2024-07-14 10:44:53.533912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.838 qpair failed and we were unable to recover it. 00:36:08.838 [2024-07-14 10:44:53.534096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.838 [2024-07-14 10:44:53.534126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.838 qpair failed and we were unable to recover it. 00:36:08.838 [2024-07-14 10:44:53.534315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.838 [2024-07-14 10:44:53.534346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.838 qpair failed and we were unable to recover it. 00:36:08.838 [2024-07-14 10:44:53.534520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.838 [2024-07-14 10:44:53.534550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.838 qpair failed and we were unable to recover it. 00:36:08.838 [2024-07-14 10:44:53.534732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.838 [2024-07-14 10:44:53.534762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.838 qpair failed and we were unable to recover it. 00:36:08.838 [2024-07-14 10:44:53.534941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.838 [2024-07-14 10:44:53.534971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.838 qpair failed and we were unable to recover it. 
00:36:08.838 [2024-07-14 10:44:53.535212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.838 [2024-07-14 10:44:53.535251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.838 qpair failed and we were unable to recover it. 00:36:08.838 [2024-07-14 10:44:53.535491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.838 [2024-07-14 10:44:53.535521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.838 qpair failed and we were unable to recover it. 00:36:08.838 [2024-07-14 10:44:53.535767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.838 [2024-07-14 10:44:53.535796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.838 qpair failed and we were unable to recover it. 00:36:08.838 [2024-07-14 10:44:53.535977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.838 [2024-07-14 10:44:53.536007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.838 qpair failed and we were unable to recover it. 00:36:08.838 [2024-07-14 10:44:53.536272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.838 [2024-07-14 10:44:53.536303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.838 qpair failed and we were unable to recover it. 00:36:08.838 [2024-07-14 10:44:53.536556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.838 [2024-07-14 10:44:53.536587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.838 qpair failed and we were unable to recover it. 00:36:08.838 [2024-07-14 10:44:53.536776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.838 [2024-07-14 10:44:53.536806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.838 qpair failed and we were unable to recover it. 00:36:08.838 [2024-07-14 10:44:53.536982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.838 [2024-07-14 10:44:53.537012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.838 qpair failed and we were unable to recover it. 00:36:08.838 [2024-07-14 10:44:53.537205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.838 [2024-07-14 10:44:53.537241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.838 qpair failed and we were unable to recover it. 00:36:08.838 [2024-07-14 10:44:53.537434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.838 [2024-07-14 10:44:53.537463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.838 qpair failed and we were unable to recover it. 
00:36:08.838 [2024-07-14 10:44:53.537648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.838 [2024-07-14 10:44:53.537677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.838 qpair failed and we were unable to recover it. 00:36:08.838 [2024-07-14 10:44:53.537869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.838 [2024-07-14 10:44:53.537899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.838 qpair failed and we were unable to recover it. 00:36:08.838 [2024-07-14 10:44:53.538025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.838 [2024-07-14 10:44:53.538054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.838 qpair failed and we were unable to recover it. 00:36:08.838 [2024-07-14 10:44:53.538191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.838 [2024-07-14 10:44:53.538221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.838 qpair failed and we were unable to recover it. 00:36:08.838 [2024-07-14 10:44:53.538362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.838 [2024-07-14 10:44:53.538393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.838 qpair failed and we were unable to recover it. 00:36:08.838 [2024-07-14 10:44:53.538586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.838 [2024-07-14 10:44:53.538615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.838 qpair failed and we were unable to recover it. 00:36:08.838 [2024-07-14 10:44:53.538746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.838 [2024-07-14 10:44:53.538775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.838 qpair failed and we were unable to recover it. 00:36:08.838 [2024-07-14 10:44:53.538909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.838 [2024-07-14 10:44:53.538938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.838 qpair failed and we were unable to recover it. 00:36:08.838 [2024-07-14 10:44:53.539129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.838 [2024-07-14 10:44:53.539163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.838 qpair failed and we were unable to recover it. 00:36:08.838 [2024-07-14 10:44:53.539357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.838 [2024-07-14 10:44:53.539388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.838 qpair failed and we were unable to recover it. 
00:36:08.838 [2024-07-14 10:44:53.539593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.838 [2024-07-14 10:44:53.539623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.838 qpair failed and we were unable to recover it. 00:36:08.838 [2024-07-14 10:44:53.539764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.838 [2024-07-14 10:44:53.539794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.838 qpair failed and we were unable to recover it. 00:36:08.838 [2024-07-14 10:44:53.539985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.838 [2024-07-14 10:44:53.540014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.838 qpair failed and we were unable to recover it. 00:36:08.839 [2024-07-14 10:44:53.540152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.839 [2024-07-14 10:44:53.540182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.839 qpair failed and we were unable to recover it. 00:36:08.839 [2024-07-14 10:44:53.540437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.839 [2024-07-14 10:44:53.540468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.839 qpair failed and we were unable to recover it. 00:36:08.839 [2024-07-14 10:44:53.540659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.839 [2024-07-14 10:44:53.540689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.839 qpair failed and we were unable to recover it. 00:36:08.839 [2024-07-14 10:44:53.540995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.839 [2024-07-14 10:44:53.541025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.839 qpair failed and we were unable to recover it. 00:36:08.839 [2024-07-14 10:44:53.541213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.839 [2024-07-14 10:44:53.541252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.839 qpair failed and we were unable to recover it. 00:36:08.839 [2024-07-14 10:44:53.541528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.839 [2024-07-14 10:44:53.541557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.839 qpair failed and we were unable to recover it. 00:36:08.839 [2024-07-14 10:44:53.541757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.839 [2024-07-14 10:44:53.541787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.839 qpair failed and we were unable to recover it. 
00:36:08.839 [2024-07-14 10:44:53.541892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.839 [2024-07-14 10:44:53.541922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.839 qpair failed and we were unable to recover it. 00:36:08.839 [2024-07-14 10:44:53.542109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.839 [2024-07-14 10:44:53.542138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.839 qpair failed and we were unable to recover it. 00:36:08.839 [2024-07-14 10:44:53.542340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.839 [2024-07-14 10:44:53.542371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.839 qpair failed and we were unable to recover it. 00:36:08.839 [2024-07-14 10:44:53.542560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.839 [2024-07-14 10:44:53.542591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.839 qpair failed and we were unable to recover it. 00:36:08.839 [2024-07-14 10:44:53.542854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.839 [2024-07-14 10:44:53.542883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.839 qpair failed and we were unable to recover it. 00:36:08.839 [2024-07-14 10:44:53.543014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.839 [2024-07-14 10:44:53.543043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.839 qpair failed and we were unable to recover it. 00:36:08.839 [2024-07-14 10:44:53.543237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.839 [2024-07-14 10:44:53.543268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.839 qpair failed and we were unable to recover it. 00:36:08.839 [2024-07-14 10:44:53.543464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.839 [2024-07-14 10:44:53.543493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.839 qpair failed and we were unable to recover it. 00:36:08.839 [2024-07-14 10:44:53.543672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.839 [2024-07-14 10:44:53.543702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.839 qpair failed and we were unable to recover it. 00:36:08.839 [2024-07-14 10:44:53.543890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.839 [2024-07-14 10:44:53.543920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.839 qpair failed and we were unable to recover it. 
00:36:08.839 [2024-07-14 10:44:53.544211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.839 [2024-07-14 10:44:53.544249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.839 qpair failed and we were unable to recover it. 00:36:08.839 [2024-07-14 10:44:53.544379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.839 [2024-07-14 10:44:53.544410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.839 qpair failed and we were unable to recover it. 00:36:08.839 [2024-07-14 10:44:53.544588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.839 [2024-07-14 10:44:53.544617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.839 qpair failed and we were unable to recover it. 00:36:08.839 [2024-07-14 10:44:53.544861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.839 [2024-07-14 10:44:53.544891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.839 qpair failed and we were unable to recover it. 00:36:08.839 [2024-07-14 10:44:53.545173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.839 [2024-07-14 10:44:53.545203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.839 qpair failed and we were unable to recover it. 00:36:08.839 [2024-07-14 10:44:53.545530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.839 [2024-07-14 10:44:53.545576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.839 qpair failed and we were unable to recover it. 00:36:08.839 [2024-07-14 10:44:53.545779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.839 [2024-07-14 10:44:53.545814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.839 qpair failed and we were unable to recover it. 00:36:08.839 [2024-07-14 10:44:53.546067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.839 [2024-07-14 10:44:53.546096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.839 qpair failed and we were unable to recover it. 00:36:08.839 [2024-07-14 10:44:53.546235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.839 [2024-07-14 10:44:53.546265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.839 qpair failed and we were unable to recover it. 00:36:08.839 [2024-07-14 10:44:53.546507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.839 [2024-07-14 10:44:53.546536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.839 qpair failed and we were unable to recover it. 
00:36:08.839 [2024-07-14 10:44:53.546758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.839 [2024-07-14 10:44:53.546788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.839 qpair failed and we were unable to recover it. 00:36:08.839 [2024-07-14 10:44:53.546916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.839 [2024-07-14 10:44:53.546945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.839 qpair failed and we were unable to recover it. 00:36:08.839 [2024-07-14 10:44:53.547208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.839 [2024-07-14 10:44:53.547253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.839 qpair failed and we were unable to recover it. 00:36:08.839 [2024-07-14 10:44:53.547433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.839 [2024-07-14 10:44:53.547463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.839 qpair failed and we were unable to recover it. 00:36:08.839 [2024-07-14 10:44:53.547662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.839 [2024-07-14 10:44:53.547692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.839 qpair failed and we were unable to recover it. 00:36:08.839 [2024-07-14 10:44:53.547891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.839 [2024-07-14 10:44:53.547921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.839 qpair failed and we were unable to recover it. 00:36:08.839 [2024-07-14 10:44:53.548115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.839 [2024-07-14 10:44:53.548145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.839 qpair failed and we were unable to recover it. 00:36:08.839 [2024-07-14 10:44:53.548283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.839 [2024-07-14 10:44:53.548313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.839 qpair failed and we were unable to recover it. 00:36:08.839 [2024-07-14 10:44:53.548502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.839 [2024-07-14 10:44:53.548532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.839 qpair failed and we were unable to recover it. 00:36:08.839 [2024-07-14 10:44:53.548832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.839 [2024-07-14 10:44:53.548863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.839 qpair failed and we were unable to recover it. 
00:36:08.839 [2024-07-14 10:44:53.549051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.839 [2024-07-14 10:44:53.549080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.839 qpair failed and we were unable to recover it. 00:36:08.839 [2024-07-14 10:44:53.549268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.839 [2024-07-14 10:44:53.549299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.839 qpair failed and we were unable to recover it. 00:36:08.839 [2024-07-14 10:44:53.549447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.839 [2024-07-14 10:44:53.549476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.839 qpair failed and we were unable to recover it. 00:36:08.839 [2024-07-14 10:44:53.549671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.839 [2024-07-14 10:44:53.549701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.839 qpair failed and we were unable to recover it. 00:36:08.840 [2024-07-14 10:44:53.549893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.840 [2024-07-14 10:44:53.549923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.840 qpair failed and we were unable to recover it. 00:36:08.840 [2024-07-14 10:44:53.550117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.840 [2024-07-14 10:44:53.550147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.840 qpair failed and we were unable to recover it. 00:36:08.840 [2024-07-14 10:44:53.550342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.840 [2024-07-14 10:44:53.550372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.840 qpair failed and we were unable to recover it. 00:36:08.840 [2024-07-14 10:44:53.550613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.840 [2024-07-14 10:44:53.550642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.840 qpair failed and we were unable to recover it. 00:36:08.840 [2024-07-14 10:44:53.550833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.840 [2024-07-14 10:44:53.550863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.840 qpair failed and we were unable to recover it. 00:36:08.840 [2024-07-14 10:44:53.551107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.840 [2024-07-14 10:44:53.551137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.840 qpair failed and we were unable to recover it. 
00:36:08.840 [2024-07-14 10:44:53.551271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.840 [2024-07-14 10:44:53.551301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.840 qpair failed and we were unable to recover it. 00:36:08.840 [2024-07-14 10:44:53.551441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.840 [2024-07-14 10:44:53.551471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.840 qpair failed and we were unable to recover it. 00:36:08.840 [2024-07-14 10:44:53.551596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.840 [2024-07-14 10:44:53.551631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.840 qpair failed and we were unable to recover it. 00:36:08.840 [2024-07-14 10:44:53.551756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.840 [2024-07-14 10:44:53.551785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.840 qpair failed and we were unable to recover it. 00:36:08.840 [2024-07-14 10:44:53.552000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.840 [2024-07-14 10:44:53.552030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.840 qpair failed and we were unable to recover it. 00:36:08.840 [2024-07-14 10:44:53.552259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.840 [2024-07-14 10:44:53.552289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.840 qpair failed and we were unable to recover it. 00:36:08.840 [2024-07-14 10:44:53.552530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.840 [2024-07-14 10:44:53.552559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.840 qpair failed and we were unable to recover it. 00:36:08.840 [2024-07-14 10:44:53.552737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.840 [2024-07-14 10:44:53.552767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.840 qpair failed and we were unable to recover it. 00:36:08.840 [2024-07-14 10:44:53.552893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.840 [2024-07-14 10:44:53.552923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.840 qpair failed and we were unable to recover it. 00:36:08.840 [2024-07-14 10:44:53.553142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.840 [2024-07-14 10:44:53.553172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.840 qpair failed and we were unable to recover it. 
00:36:08.840 [2024-07-14 10:44:53.553429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.840 [2024-07-14 10:44:53.553459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.840 qpair failed and we were unable to recover it. 00:36:08.840 [2024-07-14 10:44:53.553633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.840 [2024-07-14 10:44:53.553663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.840 qpair failed and we were unable to recover it. 00:36:08.840 [2024-07-14 10:44:53.553802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.840 [2024-07-14 10:44:53.553830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.840 qpair failed and we were unable to recover it. 00:36:08.840 [2024-07-14 10:44:53.553957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.840 [2024-07-14 10:44:53.553987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.840 qpair failed and we were unable to recover it. 00:36:08.840 [2024-07-14 10:44:53.554114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.840 [2024-07-14 10:44:53.554143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.840 qpair failed and we were unable to recover it. 00:36:08.840 [2024-07-14 10:44:53.554259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.840 [2024-07-14 10:44:53.554289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.840 qpair failed and we were unable to recover it. 00:36:08.840 [2024-07-14 10:44:53.554542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.840 [2024-07-14 10:44:53.554572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.840 qpair failed and we were unable to recover it. 00:36:08.840 [2024-07-14 10:44:53.554689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.840 [2024-07-14 10:44:53.554718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.840 qpair failed and we were unable to recover it. 00:36:08.840 [2024-07-14 10:44:53.554919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.840 [2024-07-14 10:44:53.554949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.840 qpair failed and we were unable to recover it. 00:36:08.840 [2024-07-14 10:44:53.555060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.840 [2024-07-14 10:44:53.555090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.840 qpair failed and we were unable to recover it. 
00:36:08.840 [2024-07-14 10:44:53.555216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.840 [2024-07-14 10:44:53.555255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.840 qpair failed and we were unable to recover it. 00:36:08.840 [2024-07-14 10:44:53.555430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.840 [2024-07-14 10:44:53.555459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.840 qpair failed and we were unable to recover it. 00:36:08.840 [2024-07-14 10:44:53.555601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.840 [2024-07-14 10:44:53.555631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.840 qpair failed and we were unable to recover it. 00:36:08.840 [2024-07-14 10:44:53.555826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.840 [2024-07-14 10:44:53.555855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.840 qpair failed and we were unable to recover it. 00:36:08.840 [2024-07-14 10:44:53.556048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.840 [2024-07-14 10:44:53.556078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.840 qpair failed and we were unable to recover it. 00:36:08.840 [2024-07-14 10:44:53.556193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.840 [2024-07-14 10:44:53.556222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.840 qpair failed and we were unable to recover it. 00:36:08.840 [2024-07-14 10:44:53.556448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.840 [2024-07-14 10:44:53.556477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.840 qpair failed and we were unable to recover it. 00:36:08.840 [2024-07-14 10:44:53.556691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.840 [2024-07-14 10:44:53.556720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.840 qpair failed and we were unable to recover it. 00:36:08.840 [2024-07-14 10:44:53.556967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.840 [2024-07-14 10:44:53.556997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.840 qpair failed and we were unable to recover it. 00:36:08.840 [2024-07-14 10:44:53.557288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.840 [2024-07-14 10:44:53.557325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.840 qpair failed and we were unable to recover it. 
00:36:08.840 [2024-07-14 10:44:53.557459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.840 [2024-07-14 10:44:53.557487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.840 qpair failed and we were unable to recover it. 00:36:08.840 [2024-07-14 10:44:53.557631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.840 [2024-07-14 10:44:53.557661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.840 qpair failed and we were unable to recover it. 00:36:08.840 [2024-07-14 10:44:53.557836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.840 [2024-07-14 10:44:53.557866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.840 qpair failed and we were unable to recover it. 00:36:08.840 [2024-07-14 10:44:53.558134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.840 [2024-07-14 10:44:53.558163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.840 qpair failed and we were unable to recover it. 00:36:08.840 [2024-07-14 10:44:53.558299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.840 [2024-07-14 10:44:53.558330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.840 qpair failed and we were unable to recover it. 00:36:08.840 [2024-07-14 10:44:53.558471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.841 [2024-07-14 10:44:53.558500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.841 qpair failed and we were unable to recover it. 00:36:08.841 [2024-07-14 10:44:53.558691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.841 [2024-07-14 10:44:53.558720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.841 qpair failed and we were unable to recover it. 00:36:08.841 [2024-07-14 10:44:53.558842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.841 [2024-07-14 10:44:53.558872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.841 qpair failed and we were unable to recover it. 00:36:08.841 [2024-07-14 10:44:53.559014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.841 [2024-07-14 10:44:53.559044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.841 qpair failed and we were unable to recover it. 00:36:08.841 [2024-07-14 10:44:53.559311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.841 [2024-07-14 10:44:53.559342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.841 qpair failed and we were unable to recover it. 
00:36:08.841 [2024-07-14 10:44:53.559550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.841 [2024-07-14 10:44:53.559580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.841 qpair failed and we were unable to recover it. 00:36:08.841 [2024-07-14 10:44:53.559717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.841 [2024-07-14 10:44:53.559747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.841 qpair failed and we were unable to recover it. 00:36:08.841 [2024-07-14 10:44:53.560014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.841 [2024-07-14 10:44:53.560044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.841 qpair failed and we were unable to recover it. 00:36:08.841 [2024-07-14 10:44:53.560247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.841 [2024-07-14 10:44:53.560278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.841 qpair failed and we were unable to recover it. 00:36:08.841 [2024-07-14 10:44:53.560467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.841 [2024-07-14 10:44:53.560497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.841 qpair failed and we were unable to recover it. 00:36:08.841 [2024-07-14 10:44:53.560690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.841 [2024-07-14 10:44:53.560719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.841 qpair failed and we were unable to recover it. 00:36:08.841 [2024-07-14 10:44:53.560848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.841 [2024-07-14 10:44:53.560878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.841 qpair failed and we were unable to recover it. 00:36:08.841 [2024-07-14 10:44:53.561061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.841 [2024-07-14 10:44:53.561091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.841 qpair failed and we were unable to recover it. 00:36:08.841 [2024-07-14 10:44:53.561343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.841 [2024-07-14 10:44:53.561373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.841 qpair failed and we were unable to recover it. 00:36:08.841 [2024-07-14 10:44:53.561558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.841 [2024-07-14 10:44:53.561588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.841 qpair failed and we were unable to recover it. 
00:36:08.841 [2024-07-14 10:44:53.561852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.841 [2024-07-14 10:44:53.561881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.841 qpair failed and we were unable to recover it. 00:36:08.841 [2024-07-14 10:44:53.562062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.841 [2024-07-14 10:44:53.562092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.841 qpair failed and we were unable to recover it. 00:36:08.841 [2024-07-14 10:44:53.562306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.841 [2024-07-14 10:44:53.562336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.841 qpair failed and we were unable to recover it. 00:36:08.841 [2024-07-14 10:44:53.562454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.841 [2024-07-14 10:44:53.562484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.841 qpair failed and we were unable to recover it. 00:36:08.841 [2024-07-14 10:44:53.562631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.841 [2024-07-14 10:44:53.562660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.841 qpair failed and we were unable to recover it. 00:36:08.841 [2024-07-14 10:44:53.562838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.841 [2024-07-14 10:44:53.562867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.841 qpair failed and we were unable to recover it. 00:36:08.841 [2024-07-14 10:44:53.563068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.841 [2024-07-14 10:44:53.563103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.841 qpair failed and we were unable to recover it. 00:36:08.841 [2024-07-14 10:44:53.563257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.841 [2024-07-14 10:44:53.563289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.841 qpair failed and we were unable to recover it. 00:36:08.841 [2024-07-14 10:44:53.563400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.841 [2024-07-14 10:44:53.563430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.841 qpair failed and we were unable to recover it. 00:36:08.841 [2024-07-14 10:44:53.563630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.841 [2024-07-14 10:44:53.563659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.841 qpair failed and we were unable to recover it. 
00:36:08.841 [2024-07-14 10:44:53.563847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.841 [2024-07-14 10:44:53.563876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.841 qpair failed and we were unable to recover it. 00:36:08.841 [2024-07-14 10:44:53.564002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.841 [2024-07-14 10:44:53.564032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.841 qpair failed and we were unable to recover it. 00:36:08.841 [2024-07-14 10:44:53.564222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.841 [2024-07-14 10:44:53.564265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.841 qpair failed and we were unable to recover it. 00:36:08.841 [2024-07-14 10:44:53.564513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.841 [2024-07-14 10:44:53.564542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.841 qpair failed and we were unable to recover it. 00:36:08.841 [2024-07-14 10:44:53.564727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.841 [2024-07-14 10:44:53.564757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.841 qpair failed and we were unable to recover it. 00:36:08.841 [2024-07-14 10:44:53.564942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.841 [2024-07-14 10:44:53.564972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.841 qpair failed and we were unable to recover it. 00:36:08.841 [2024-07-14 10:44:53.565159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.841 [2024-07-14 10:44:53.565188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.841 qpair failed and we were unable to recover it. 00:36:08.841 [2024-07-14 10:44:53.565448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.841 [2024-07-14 10:44:53.565478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.841 qpair failed and we were unable to recover it. 00:36:08.841 [2024-07-14 10:44:53.565617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.841 [2024-07-14 10:44:53.565647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.841 qpair failed and we were unable to recover it. 00:36:08.841 [2024-07-14 10:44:53.565769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.841 [2024-07-14 10:44:53.565799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.841 qpair failed and we were unable to recover it. 
00:36:08.841 [2024-07-14 10:44:53.565926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.841 [2024-07-14 10:44:53.565956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.841 qpair failed and we were unable to recover it. 00:36:08.841 [2024-07-14 10:44:53.566153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.842 [2024-07-14 10:44:53.566183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.842 qpair failed and we were unable to recover it. 00:36:08.842 [2024-07-14 10:44:53.566434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.842 [2024-07-14 10:44:53.566465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.842 qpair failed and we were unable to recover it. 00:36:08.842 [2024-07-14 10:44:53.566654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.842 [2024-07-14 10:44:53.566684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.842 qpair failed and we were unable to recover it. 00:36:08.842 [2024-07-14 10:44:53.566871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.842 [2024-07-14 10:44:53.566901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.842 qpair failed and we were unable to recover it. 00:36:08.842 [2024-07-14 10:44:53.567034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.842 [2024-07-14 10:44:53.567064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.842 qpair failed and we were unable to recover it. 00:36:08.842 [2024-07-14 10:44:53.567290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.842 [2024-07-14 10:44:53.567321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.842 qpair failed and we were unable to recover it. 00:36:08.842 [2024-07-14 10:44:53.567506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.842 [2024-07-14 10:44:53.567535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.842 qpair failed and we were unable to recover it. 00:36:08.842 [2024-07-14 10:44:53.567725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.842 [2024-07-14 10:44:53.567755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.842 qpair failed and we were unable to recover it. 00:36:08.842 [2024-07-14 10:44:53.567869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.842 [2024-07-14 10:44:53.567899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.842 qpair failed and we were unable to recover it. 
00:36:08.842 [2024-07-14 10:44:53.568035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.842 [2024-07-14 10:44:53.568064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.842 qpair failed and we were unable to recover it. 00:36:08.842 [2024-07-14 10:44:53.568312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.842 [2024-07-14 10:44:53.568343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.842 qpair failed and we were unable to recover it. 00:36:08.842 [2024-07-14 10:44:53.568481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.842 [2024-07-14 10:44:53.568511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.842 qpair failed and we were unable to recover it. 00:36:08.842 [2024-07-14 10:44:53.568698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.842 [2024-07-14 10:44:53.568728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.842 qpair failed and we were unable to recover it. 00:36:08.842 [2024-07-14 10:44:53.568932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.842 [2024-07-14 10:44:53.568962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.842 qpair failed and we were unable to recover it. 00:36:08.842 [2024-07-14 10:44:53.569240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.842 [2024-07-14 10:44:53.569271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.842 qpair failed and we were unable to recover it. 00:36:08.842 [2024-07-14 10:44:53.569450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.842 [2024-07-14 10:44:53.569480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.842 qpair failed and we were unable to recover it. 00:36:08.842 [2024-07-14 10:44:53.569707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.842 [2024-07-14 10:44:53.569737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.842 qpair failed and we were unable to recover it. 00:36:08.842 [2024-07-14 10:44:53.569999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.842 [2024-07-14 10:44:53.570029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.842 qpair failed and we were unable to recover it. 00:36:08.842 [2024-07-14 10:44:53.570164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.842 [2024-07-14 10:44:53.570194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.842 qpair failed and we were unable to recover it. 
00:36:08.842 [2024-07-14 10:44:53.570413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:08.842 [2024-07-14 10:44:53.570444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420
00:36:08.842 qpair failed and we were unable to recover it.
00:36:08.842 [the same three-line sequence (connect() failed, errno = 111; sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously from 10:44:53.570622 through 10:44:53.592010]
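errno 111 in the entries above is ECONNREFUSED on Linux: posix_sock_create() issues a plain TCP connect() to 10.0.0.2:4420 and nothing on the target side is accepting on that port yet, so every qpair connect attempt is refused. The minimal sketch below is not part of the test run; it only assumes the address and port taken from the log and shows how a bare socket reproduces the same errno.

/* Minimal sketch (not part of this test run): a bare TCP connect() to an
 * address/port with no listener fails with ECONNREFUSED, which is errno 111
 * on Linux. 10.0.0.2 and port 4420 are taken from the log entries above. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    struct sockaddr_in addr = {0};
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    if (fd < 0) {
        perror("socket");
        return 1;
    }

    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                 /* NVMe/TCP port used by the test */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With nothing listening this prints: connect() failed, errno = 111 */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}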
00:36:08.844 [2024-07-14 10:44:53.592280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:08.844 [2024-07-14 10:44:53.592310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420
00:36:08.844 qpair failed and we were unable to recover it.
00:36:08.844 [the same sequence repeats once more for tqpair=0x1b1fb60 at 10:44:53.592594]
00:36:08.844 [2024-07-14 10:44:53.592784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:08.844 [2024-07-14 10:44:53.592823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420
00:36:08.844 qpair failed and we were unable to recover it.
00:36:08.845 [the same three-line sequence repeats continuously for tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 from 10:44:53.592970 through 10:44:53.606967]
00:36:08.846 [2024-07-14 10:44:53.607165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:08.846 [2024-07-14 10:44:53.607194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420
00:36:08.846 qpair failed and we were unable to recover it.
00:36:08.846 [2024-07-14 10:44:53.607322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:08.846 [2024-07-14 10:44:53.607356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420
00:36:08.846 qpair failed and we were unable to recover it.
00:36:08.847 [the same three-line sequence repeats continuously for tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 from 10:44:53.607475 through 10:44:53.615489]
00:36:08.847 [2024-07-14 10:44:53.615687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.847 [2024-07-14 10:44:53.615717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.847 qpair failed and we were unable to recover it. 00:36:08.847 [2024-07-14 10:44:53.615909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.847 [2024-07-14 10:44:53.615939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.847 qpair failed and we were unable to recover it. 00:36:08.847 [2024-07-14 10:44:53.616123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.847 [2024-07-14 10:44:53.616152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.847 qpair failed and we were unable to recover it. 00:36:08.847 [2024-07-14 10:44:53.616343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.847 [2024-07-14 10:44:53.616374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.847 qpair failed and we were unable to recover it. 00:36:08.847 [2024-07-14 10:44:53.616556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.847 [2024-07-14 10:44:53.616587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.847 qpair failed and we were unable to recover it. 00:36:08.847 [2024-07-14 10:44:53.616709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.847 [2024-07-14 10:44:53.616738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.847 qpair failed and we were unable to recover it. 00:36:08.847 [2024-07-14 10:44:53.616863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.847 [2024-07-14 10:44:53.616892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.847 qpair failed and we were unable to recover it. 00:36:08.847 [2024-07-14 10:44:53.617072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.847 [2024-07-14 10:44:53.617111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.847 qpair failed and we were unable to recover it. 00:36:08.847 [2024-07-14 10:44:53.617242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.847 [2024-07-14 10:44:53.617273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.847 qpair failed and we were unable to recover it. 00:36:08.847 [2024-07-14 10:44:53.617414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.847 [2024-07-14 10:44:53.617444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.847 qpair failed and we were unable to recover it. 
00:36:08.847 [2024-07-14 10:44:53.617691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.847 [2024-07-14 10:44:53.617721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.847 qpair failed and we were unable to recover it. 00:36:08.847 [2024-07-14 10:44:53.617828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.847 [2024-07-14 10:44:53.617858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.847 qpair failed and we were unable to recover it. 00:36:08.847 [2024-07-14 10:44:53.618037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.847 [2024-07-14 10:44:53.618068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.847 qpair failed and we were unable to recover it. 00:36:08.847 [2024-07-14 10:44:53.618198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.848 [2024-07-14 10:44:53.618235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.848 qpair failed and we were unable to recover it. 00:36:08.848 [2024-07-14 10:44:53.618424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.848 [2024-07-14 10:44:53.618454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.848 qpair failed and we were unable to recover it. 00:36:08.848 [2024-07-14 10:44:53.618567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.848 [2024-07-14 10:44:53.618597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.848 qpair failed and we were unable to recover it. 00:36:08.848 [2024-07-14 10:44:53.618779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.848 [2024-07-14 10:44:53.618809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.848 qpair failed and we were unable to recover it. 00:36:08.848 [2024-07-14 10:44:53.618988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.848 [2024-07-14 10:44:53.619018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.848 qpair failed and we were unable to recover it. 00:36:08.848 [2024-07-14 10:44:53.619210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.848 [2024-07-14 10:44:53.619251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.848 qpair failed and we were unable to recover it. 00:36:08.848 [2024-07-14 10:44:53.619392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.848 [2024-07-14 10:44:53.619423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.848 qpair failed and we were unable to recover it. 
00:36:08.848 [2024-07-14 10:44:53.619613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.848 [2024-07-14 10:44:53.619643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.848 qpair failed and we were unable to recover it. 00:36:08.848 [2024-07-14 10:44:53.619855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.848 [2024-07-14 10:44:53.619884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.848 qpair failed and we were unable to recover it. 00:36:08.848 [2024-07-14 10:44:53.620026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.848 [2024-07-14 10:44:53.620056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.848 qpair failed and we were unable to recover it. 00:36:08.848 [2024-07-14 10:44:53.620258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.848 [2024-07-14 10:44:53.620290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.848 qpair failed and we were unable to recover it. 00:36:08.848 [2024-07-14 10:44:53.620560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.848 [2024-07-14 10:44:53.620590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.848 qpair failed and we were unable to recover it. 00:36:08.848 [2024-07-14 10:44:53.620771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.848 [2024-07-14 10:44:53.620801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.848 qpair failed and we were unable to recover it. 00:36:08.848 [2024-07-14 10:44:53.620922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.848 [2024-07-14 10:44:53.620952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.848 qpair failed and we were unable to recover it. 00:36:08.848 [2024-07-14 10:44:53.621088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.848 [2024-07-14 10:44:53.621118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.848 qpair failed and we were unable to recover it. 00:36:08.848 [2024-07-14 10:44:53.621304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.848 [2024-07-14 10:44:53.621335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.848 qpair failed and we were unable to recover it. 00:36:08.848 [2024-07-14 10:44:53.621470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.848 [2024-07-14 10:44:53.621500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.848 qpair failed and we were unable to recover it. 
00:36:08.848 [2024-07-14 10:44:53.621717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.848 [2024-07-14 10:44:53.621746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.848 qpair failed and we were unable to recover it. 00:36:08.848 [2024-07-14 10:44:53.621943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.848 [2024-07-14 10:44:53.621973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.848 qpair failed and we were unable to recover it. 00:36:08.848 [2024-07-14 10:44:53.622091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.848 [2024-07-14 10:44:53.622121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.848 qpair failed and we were unable to recover it. 00:36:08.848 [2024-07-14 10:44:53.622243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.848 [2024-07-14 10:44:53.622273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.848 qpair failed and we were unable to recover it. 00:36:08.848 [2024-07-14 10:44:53.622485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.848 [2024-07-14 10:44:53.622515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.848 qpair failed and we were unable to recover it. 00:36:08.848 [2024-07-14 10:44:53.622691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.848 [2024-07-14 10:44:53.622721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.848 qpair failed and we were unable to recover it. 00:36:08.848 [2024-07-14 10:44:53.622832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.848 [2024-07-14 10:44:53.622862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.848 qpair failed and we were unable to recover it. 00:36:08.848 [2024-07-14 10:44:53.623057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.848 [2024-07-14 10:44:53.623088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.848 qpair failed and we were unable to recover it. 00:36:08.848 [2024-07-14 10:44:53.623358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.848 [2024-07-14 10:44:53.623389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.848 qpair failed and we were unable to recover it. 00:36:08.848 [2024-07-14 10:44:53.623577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.848 [2024-07-14 10:44:53.623607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.848 qpair failed and we were unable to recover it. 
00:36:08.848 [2024-07-14 10:44:53.623867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.848 [2024-07-14 10:44:53.623898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.848 qpair failed and we were unable to recover it. 00:36:08.848 [2024-07-14 10:44:53.624030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.848 [2024-07-14 10:44:53.624061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.848 qpair failed and we were unable to recover it. 00:36:08.848 [2024-07-14 10:44:53.624261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.848 [2024-07-14 10:44:53.624292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.848 qpair failed and we were unable to recover it. 00:36:08.848 [2024-07-14 10:44:53.624552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.848 [2024-07-14 10:44:53.624582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.848 qpair failed and we were unable to recover it. 00:36:08.848 [2024-07-14 10:44:53.624839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.848 [2024-07-14 10:44:53.624869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.848 qpair failed and we were unable to recover it. 00:36:08.848 [2024-07-14 10:44:53.624979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.848 [2024-07-14 10:44:53.625008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.848 qpair failed and we were unable to recover it. 00:36:08.848 [2024-07-14 10:44:53.625142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.848 [2024-07-14 10:44:53.625173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.848 qpair failed and we were unable to recover it. 00:36:08.848 [2024-07-14 10:44:53.625358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.848 [2024-07-14 10:44:53.625389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.848 qpair failed and we were unable to recover it. 00:36:08.848 [2024-07-14 10:44:53.625571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.848 [2024-07-14 10:44:53.625601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.848 qpair failed and we were unable to recover it. 00:36:08.848 [2024-07-14 10:44:53.625723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.848 [2024-07-14 10:44:53.625754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.848 qpair failed and we were unable to recover it. 
00:36:08.848 [2024-07-14 10:44:53.625922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.848 [2024-07-14 10:44:53.625952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.848 qpair failed and we were unable to recover it. 00:36:08.848 [2024-07-14 10:44:53.626075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.848 [2024-07-14 10:44:53.626105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.848 qpair failed and we were unable to recover it. 00:36:08.848 [2024-07-14 10:44:53.626310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.848 [2024-07-14 10:44:53.626341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.848 qpair failed and we were unable to recover it. 00:36:08.848 [2024-07-14 10:44:53.626462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.848 [2024-07-14 10:44:53.626492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.848 qpair failed and we were unable to recover it. 00:36:08.848 [2024-07-14 10:44:53.626681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.848 [2024-07-14 10:44:53.626711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.849 qpair failed and we were unable to recover it. 00:36:08.849 [2024-07-14 10:44:53.626818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.849 [2024-07-14 10:44:53.626848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.849 qpair failed and we were unable to recover it. 00:36:08.849 [2024-07-14 10:44:53.626959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.849 [2024-07-14 10:44:53.626988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.849 qpair failed and we were unable to recover it. 00:36:08.849 [2024-07-14 10:44:53.627180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.849 [2024-07-14 10:44:53.627210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.849 qpair failed and we were unable to recover it. 00:36:08.849 [2024-07-14 10:44:53.627331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.849 [2024-07-14 10:44:53.627362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.849 qpair failed and we were unable to recover it. 00:36:08.849 [2024-07-14 10:44:53.627551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.849 [2024-07-14 10:44:53.627582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.849 qpair failed and we were unable to recover it. 
00:36:08.849 [2024-07-14 10:44:53.627767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.849 [2024-07-14 10:44:53.627796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.849 qpair failed and we were unable to recover it. 00:36:08.849 [2024-07-14 10:44:53.627997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.849 [2024-07-14 10:44:53.628026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.849 qpair failed and we were unable to recover it. 00:36:08.849 [2024-07-14 10:44:53.628203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.849 [2024-07-14 10:44:53.628252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.849 qpair failed and we were unable to recover it. 00:36:08.849 [2024-07-14 10:44:53.628429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.849 [2024-07-14 10:44:53.628459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.849 qpair failed and we were unable to recover it. 00:36:08.849 [2024-07-14 10:44:53.628585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.849 [2024-07-14 10:44:53.628615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.849 qpair failed and we were unable to recover it. 00:36:08.849 [2024-07-14 10:44:53.628800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.849 [2024-07-14 10:44:53.628830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.849 qpair failed and we were unable to recover it. 00:36:08.849 [2024-07-14 10:44:53.628937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.849 [2024-07-14 10:44:53.628967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.849 qpair failed and we were unable to recover it. 00:36:08.849 [2024-07-14 10:44:53.629130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.849 [2024-07-14 10:44:53.629165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.849 qpair failed and we were unable to recover it. 00:36:08.849 [2024-07-14 10:44:53.629372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.849 [2024-07-14 10:44:53.629403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.849 qpair failed and we were unable to recover it. 00:36:08.849 [2024-07-14 10:44:53.629594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.849 [2024-07-14 10:44:53.629624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.849 qpair failed and we were unable to recover it. 
00:36:08.849 [2024-07-14 10:44:53.629813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.849 [2024-07-14 10:44:53.629842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.849 qpair failed and we were unable to recover it. 00:36:08.849 [2024-07-14 10:44:53.629957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.849 [2024-07-14 10:44:53.629988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.849 qpair failed and we were unable to recover it. 00:36:08.849 [2024-07-14 10:44:53.630155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.849 [2024-07-14 10:44:53.630186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.849 qpair failed and we were unable to recover it. 00:36:08.849 [2024-07-14 10:44:53.630335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.849 [2024-07-14 10:44:53.630366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.849 qpair failed and we were unable to recover it. 00:36:08.849 [2024-07-14 10:44:53.630493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.849 [2024-07-14 10:44:53.630523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.849 qpair failed and we were unable to recover it. 00:36:08.849 [2024-07-14 10:44:53.630646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.849 [2024-07-14 10:44:53.630675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.849 qpair failed and we were unable to recover it. 00:36:08.849 [2024-07-14 10:44:53.630858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.849 [2024-07-14 10:44:53.630889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.849 qpair failed and we were unable to recover it. 00:36:08.849 [2024-07-14 10:44:53.631146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.849 [2024-07-14 10:44:53.631177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.849 qpair failed and we were unable to recover it. 00:36:08.849 [2024-07-14 10:44:53.631308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.849 [2024-07-14 10:44:53.631339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.849 qpair failed and we were unable to recover it. 00:36:08.849 [2024-07-14 10:44:53.631482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.849 [2024-07-14 10:44:53.631513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.849 qpair failed and we were unable to recover it. 
00:36:08.849 [2024-07-14 10:44:53.631648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.849 [2024-07-14 10:44:53.631678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.849 qpair failed and we were unable to recover it. 00:36:08.849 [2024-07-14 10:44:53.631791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.849 [2024-07-14 10:44:53.631821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.849 qpair failed and we were unable to recover it. 00:36:08.849 [2024-07-14 10:44:53.632116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.849 [2024-07-14 10:44:53.632147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.849 qpair failed and we were unable to recover it. 00:36:08.849 [2024-07-14 10:44:53.632330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.849 [2024-07-14 10:44:53.632361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.849 qpair failed and we were unable to recover it. 00:36:08.849 [2024-07-14 10:44:53.632563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.849 [2024-07-14 10:44:53.632594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.849 qpair failed and we were unable to recover it. 00:36:08.849 [2024-07-14 10:44:53.632707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.849 [2024-07-14 10:44:53.632737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.849 qpair failed and we were unable to recover it. 00:36:08.849 [2024-07-14 10:44:53.632893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.849 [2024-07-14 10:44:53.632923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.849 qpair failed and we were unable to recover it. 00:36:08.849 [2024-07-14 10:44:53.633054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.849 [2024-07-14 10:44:53.633084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.849 qpair failed and we were unable to recover it. 00:36:08.849 [2024-07-14 10:44:53.633272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.849 [2024-07-14 10:44:53.633303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.849 qpair failed and we were unable to recover it. 00:36:08.849 [2024-07-14 10:44:53.633433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.849 [2024-07-14 10:44:53.633465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.849 qpair failed and we were unable to recover it. 
00:36:08.849 [2024-07-14 10:44:53.633588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.849 [2024-07-14 10:44:53.633618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.849 qpair failed and we were unable to recover it. 00:36:08.849 [2024-07-14 10:44:53.633939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.849 [2024-07-14 10:44:53.633969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.849 qpair failed and we were unable to recover it. 00:36:08.849 [2024-07-14 10:44:53.634210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.849 [2024-07-14 10:44:53.634251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.849 qpair failed and we were unable to recover it. 00:36:08.849 [2024-07-14 10:44:53.634398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.849 [2024-07-14 10:44:53.634427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.849 qpair failed and we were unable to recover it. 00:36:08.849 [2024-07-14 10:44:53.634627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.849 [2024-07-14 10:44:53.634662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.849 qpair failed and we were unable to recover it. 00:36:08.849 [2024-07-14 10:44:53.634778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.849 [2024-07-14 10:44:53.634809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.849 qpair failed and we were unable to recover it. 00:36:08.850 [2024-07-14 10:44:53.634927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.850 [2024-07-14 10:44:53.634958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.850 qpair failed and we were unable to recover it. 00:36:08.850 [2024-07-14 10:44:53.635148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.850 [2024-07-14 10:44:53.635179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.850 qpair failed and we were unable to recover it. 00:36:08.850 [2024-07-14 10:44:53.635391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.850 [2024-07-14 10:44:53.635423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.850 qpair failed and we were unable to recover it. 00:36:08.850 [2024-07-14 10:44:53.635621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.850 [2024-07-14 10:44:53.635651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.850 qpair failed and we were unable to recover it. 
00:36:08.850 [2024-07-14 10:44:53.635763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.850 [2024-07-14 10:44:53.635793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.850 qpair failed and we were unable to recover it. 00:36:08.850 [2024-07-14 10:44:53.635933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.850 [2024-07-14 10:44:53.635963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.850 qpair failed and we were unable to recover it. 00:36:08.850 [2024-07-14 10:44:53.636149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.850 [2024-07-14 10:44:53.636179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.850 qpair failed and we were unable to recover it. 00:36:08.850 [2024-07-14 10:44:53.636327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.850 [2024-07-14 10:44:53.636358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.850 qpair failed and we were unable to recover it. 00:36:08.850 [2024-07-14 10:44:53.636547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.850 [2024-07-14 10:44:53.636577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.850 qpair failed and we were unable to recover it. 00:36:08.850 [2024-07-14 10:44:53.636749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.850 [2024-07-14 10:44:53.636778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.850 qpair failed and we were unable to recover it. 00:36:08.850 [2024-07-14 10:44:53.636961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.850 [2024-07-14 10:44:53.636991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.850 qpair failed and we were unable to recover it. 00:36:08.850 [2024-07-14 10:44:53.637194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.850 [2024-07-14 10:44:53.637223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.850 qpair failed and we were unable to recover it. 00:36:08.850 [2024-07-14 10:44:53.637379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.850 [2024-07-14 10:44:53.637409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.850 qpair failed and we were unable to recover it. 00:36:08.850 [2024-07-14 10:44:53.637616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.850 [2024-07-14 10:44:53.637646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.850 qpair failed and we were unable to recover it. 
00:36:08.850 [2024-07-14 10:44:53.637822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.850 [2024-07-14 10:44:53.637851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.850 qpair failed and we were unable to recover it. 00:36:08.850 [2024-07-14 10:44:53.638031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.850 [2024-07-14 10:44:53.638060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.850 qpair failed and we were unable to recover it. 00:36:08.850 [2024-07-14 10:44:53.638249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.850 [2024-07-14 10:44:53.638281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.850 qpair failed and we were unable to recover it. 00:36:08.850 [2024-07-14 10:44:53.638411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.850 [2024-07-14 10:44:53.638443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.850 qpair failed and we were unable to recover it. 00:36:08.850 [2024-07-14 10:44:53.638548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.850 [2024-07-14 10:44:53.638578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.850 qpair failed and we were unable to recover it. 00:36:08.850 [2024-07-14 10:44:53.638832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.850 [2024-07-14 10:44:53.638861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.850 qpair failed and we were unable to recover it. 00:36:08.850 [2024-07-14 10:44:53.639116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.850 [2024-07-14 10:44:53.639148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.850 qpair failed and we were unable to recover it. 00:36:08.850 [2024-07-14 10:44:53.639282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.850 [2024-07-14 10:44:53.639313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.850 qpair failed and we were unable to recover it. 00:36:08.850 [2024-07-14 10:44:53.639514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.850 [2024-07-14 10:44:53.639544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.850 qpair failed and we were unable to recover it. 00:36:08.850 [2024-07-14 10:44:53.639808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.850 [2024-07-14 10:44:53.639838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.850 qpair failed and we were unable to recover it. 
00:36:08.850 [2024-07-14 10:44:53.640032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.850 [2024-07-14 10:44:53.640061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.850 qpair failed and we were unable to recover it. 00:36:08.850 [2024-07-14 10:44:53.640283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.850 [2024-07-14 10:44:53.640315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.850 qpair failed and we were unable to recover it. 00:36:08.850 [2024-07-14 10:44:53.640454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.850 [2024-07-14 10:44:53.640484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.850 qpair failed and we were unable to recover it. 00:36:08.850 [2024-07-14 10:44:53.640662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.850 [2024-07-14 10:44:53.640691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.850 qpair failed and we were unable to recover it. 00:36:08.850 [2024-07-14 10:44:53.640792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.850 [2024-07-14 10:44:53.640822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.850 qpair failed and we were unable to recover it. 00:36:08.850 [2024-07-14 10:44:53.640945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.850 [2024-07-14 10:44:53.640974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.850 qpair failed and we were unable to recover it. 00:36:08.850 [2024-07-14 10:44:53.641167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.850 [2024-07-14 10:44:53.641197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.850 qpair failed and we were unable to recover it. 00:36:08.850 [2024-07-14 10:44:53.641383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.850 [2024-07-14 10:44:53.641414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.850 qpair failed and we were unable to recover it. 00:36:08.850 [2024-07-14 10:44:53.641651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.850 [2024-07-14 10:44:53.641681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.850 qpair failed and we were unable to recover it. 00:36:08.850 [2024-07-14 10:44:53.641942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.850 [2024-07-14 10:44:53.641973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.850 qpair failed and we were unable to recover it. 
00:36:08.850 [2024-07-14 10:44:53.642103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.850 [2024-07-14 10:44:53.642132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.850 qpair failed and we were unable to recover it. 00:36:08.850 [2024-07-14 10:44:53.642335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.850 [2024-07-14 10:44:53.642366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.851 qpair failed and we were unable to recover it. 00:36:08.851 [2024-07-14 10:44:53.642502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.851 [2024-07-14 10:44:53.642532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.851 qpair failed and we were unable to recover it. 00:36:08.851 [2024-07-14 10:44:53.642644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.851 [2024-07-14 10:44:53.642674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.851 qpair failed and we were unable to recover it. 00:36:08.851 [2024-07-14 10:44:53.642785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.851 [2024-07-14 10:44:53.642815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.851 qpair failed and we were unable to recover it. 00:36:08.851 [2024-07-14 10:44:53.642938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.851 [2024-07-14 10:44:53.642971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.851 qpair failed and we were unable to recover it. 00:36:08.851 [2024-07-14 10:44:53.643154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.851 [2024-07-14 10:44:53.643184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.851 qpair failed and we were unable to recover it. 00:36:08.851 [2024-07-14 10:44:53.643318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.851 [2024-07-14 10:44:53.643350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.851 qpair failed and we were unable to recover it. 00:36:08.851 [2024-07-14 10:44:53.643545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.851 [2024-07-14 10:44:53.643575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.851 qpair failed and we were unable to recover it. 00:36:08.851 [2024-07-14 10:44:53.643711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.851 [2024-07-14 10:44:53.643743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.851 qpair failed and we were unable to recover it. 
00:36:08.851 [2024-07-14 10:44:53.643871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.851 [2024-07-14 10:44:53.643900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.851 qpair failed and we were unable to recover it. 00:36:08.851 [2024-07-14 10:44:53.644006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.851 [2024-07-14 10:44:53.644035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.851 qpair failed and we were unable to recover it. 00:36:08.851 [2024-07-14 10:44:53.644279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.851 [2024-07-14 10:44:53.644311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.851 qpair failed and we were unable to recover it. 00:36:08.851 [2024-07-14 10:44:53.644507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.851 [2024-07-14 10:44:53.644537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.851 qpair failed and we were unable to recover it. 00:36:08.851 [2024-07-14 10:44:53.644728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.851 [2024-07-14 10:44:53.644758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.851 qpair failed and we were unable to recover it. 00:36:08.851 [2024-07-14 10:44:53.644871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.851 [2024-07-14 10:44:53.644900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.851 qpair failed and we were unable to recover it. 00:36:08.851 [2024-07-14 10:44:53.645090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.851 [2024-07-14 10:44:53.645120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.851 qpair failed and we were unable to recover it. 00:36:08.851 [2024-07-14 10:44:53.645313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.851 [2024-07-14 10:44:53.645345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.851 qpair failed and we were unable to recover it. 00:36:08.851 [2024-07-14 10:44:53.645493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.851 [2024-07-14 10:44:53.645530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.851 qpair failed and we were unable to recover it. 00:36:08.851 [2024-07-14 10:44:53.645645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.851 [2024-07-14 10:44:53.645677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.851 qpair failed and we were unable to recover it. 
00:36:08.851 [2024-07-14 10:44:53.645867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.851 [2024-07-14 10:44:53.645898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.851 qpair failed and we were unable to recover it. 00:36:08.851 [2024-07-14 10:44:53.646004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.851 [2024-07-14 10:44:53.646033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.851 qpair failed and we were unable to recover it. 00:36:08.851 [2024-07-14 10:44:53.646243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.851 [2024-07-14 10:44:53.646274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.851 qpair failed and we were unable to recover it. 00:36:08.851 [2024-07-14 10:44:53.646491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.851 [2024-07-14 10:44:53.646520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.851 qpair failed and we were unable to recover it. 00:36:08.851 [2024-07-14 10:44:53.646650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.851 [2024-07-14 10:44:53.646681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.851 qpair failed and we were unable to recover it. 00:36:08.851 [2024-07-14 10:44:53.646819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.851 [2024-07-14 10:44:53.646848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.851 qpair failed and we were unable to recover it. 00:36:08.851 [2024-07-14 10:44:53.647044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.851 [2024-07-14 10:44:53.647074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.851 qpair failed and we were unable to recover it. 00:36:08.851 [2024-07-14 10:44:53.647239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.851 [2024-07-14 10:44:53.647270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.851 qpair failed and we were unable to recover it. 00:36:08.851 [2024-07-14 10:44:53.647411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.851 [2024-07-14 10:44:53.647441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.851 qpair failed and we were unable to recover it. 00:36:08.851 [2024-07-14 10:44:53.647564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.851 [2024-07-14 10:44:53.647594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.851 qpair failed and we were unable to recover it. 
00:36:08.851 [2024-07-14 10:44:53.647701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.851 [2024-07-14 10:44:53.647730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.851 qpair failed and we were unable to recover it. 00:36:08.851 [2024-07-14 10:44:53.647843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.851 [2024-07-14 10:44:53.647873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.851 qpair failed and we were unable to recover it. 00:36:08.851 [2024-07-14 10:44:53.648064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.851 [2024-07-14 10:44:53.648095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.851 qpair failed and we were unable to recover it. 00:36:08.851 [2024-07-14 10:44:53.648285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.851 [2024-07-14 10:44:53.648316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.851 qpair failed and we were unable to recover it. 00:36:08.851 [2024-07-14 10:44:53.648442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.851 [2024-07-14 10:44:53.648472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.851 qpair failed and we were unable to recover it. 00:36:08.851 [2024-07-14 10:44:53.648677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.851 [2024-07-14 10:44:53.648708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.851 qpair failed and we were unable to recover it. 00:36:08.851 [2024-07-14 10:44:53.648841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.851 [2024-07-14 10:44:53.648872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.851 qpair failed and we were unable to recover it. 00:36:08.851 [2024-07-14 10:44:53.648991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.851 [2024-07-14 10:44:53.649022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.851 qpair failed and we were unable to recover it. 00:36:08.851 [2024-07-14 10:44:53.649149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.851 [2024-07-14 10:44:53.649179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.851 qpair failed and we were unable to recover it. 00:36:08.851 [2024-07-14 10:44:53.649322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.851 [2024-07-14 10:44:53.649352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.851 qpair failed and we were unable to recover it. 
00:36:08.851 [2024-07-14 10:44:53.649526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.851 [2024-07-14 10:44:53.649556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.851 qpair failed and we were unable to recover it. 00:36:08.851 [2024-07-14 10:44:53.649659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.851 [2024-07-14 10:44:53.649689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.851 qpair failed and we were unable to recover it. 00:36:08.851 [2024-07-14 10:44:53.649896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.852 [2024-07-14 10:44:53.649925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.852 qpair failed and we were unable to recover it. 00:36:08.852 [2024-07-14 10:44:53.650049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.852 [2024-07-14 10:44:53.650079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.852 qpair failed and we were unable to recover it. 00:36:08.852 [2024-07-14 10:44:53.650192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.852 [2024-07-14 10:44:53.650222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.852 qpair failed and we were unable to recover it. 00:36:08.852 [2024-07-14 10:44:53.650535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.852 [2024-07-14 10:44:53.650566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.852 qpair failed and we were unable to recover it. 00:36:08.852 [2024-07-14 10:44:53.650694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.852 [2024-07-14 10:44:53.650724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.852 qpair failed and we were unable to recover it. 00:36:08.852 [2024-07-14 10:44:53.650833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.852 [2024-07-14 10:44:53.650863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.852 qpair failed and we were unable to recover it. 00:36:08.852 [2024-07-14 10:44:53.650994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.852 [2024-07-14 10:44:53.651024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.852 qpair failed and we were unable to recover it. 00:36:08.852 [2024-07-14 10:44:53.651151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.852 [2024-07-14 10:44:53.651181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.852 qpair failed and we were unable to recover it. 
00:36:08.852 [2024-07-14 10:44:53.651300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.852 [2024-07-14 10:44:53.651329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.852 qpair failed and we were unable to recover it. 00:36:08.852 [2024-07-14 10:44:53.651444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.852 [2024-07-14 10:44:53.651474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.852 qpair failed and we were unable to recover it. 00:36:08.852 [2024-07-14 10:44:53.651586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.852 [2024-07-14 10:44:53.651615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.852 qpair failed and we were unable to recover it. 00:36:08.852 [2024-07-14 10:44:53.651733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.852 [2024-07-14 10:44:53.651762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.852 qpair failed and we were unable to recover it. 00:36:08.852 [2024-07-14 10:44:53.651890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.852 [2024-07-14 10:44:53.651920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.852 qpair failed and we were unable to recover it. 00:36:08.852 [2024-07-14 10:44:53.652054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.852 [2024-07-14 10:44:53.652083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.852 qpair failed and we were unable to recover it. 00:36:08.852 [2024-07-14 10:44:53.652200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.852 [2024-07-14 10:44:53.652242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.852 qpair failed and we were unable to recover it. 00:36:08.852 [2024-07-14 10:44:53.652371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.852 [2024-07-14 10:44:53.652401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.852 qpair failed and we were unable to recover it. 00:36:08.852 [2024-07-14 10:44:53.652507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.852 [2024-07-14 10:44:53.652542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.852 qpair failed and we were unable to recover it. 00:36:08.852 [2024-07-14 10:44:53.652647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.852 [2024-07-14 10:44:53.652677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.852 qpair failed and we were unable to recover it. 
00:36:08.852 [2024-07-14 10:44:53.652805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.852 [2024-07-14 10:44:53.652834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.852 qpair failed and we were unable to recover it. 00:36:08.852 [2024-07-14 10:44:53.653030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.852 [2024-07-14 10:44:53.653059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.852 qpair failed and we were unable to recover it. 00:36:08.852 [2024-07-14 10:44:53.653245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.852 [2024-07-14 10:44:53.653276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.852 qpair failed and we were unable to recover it. 00:36:08.852 [2024-07-14 10:44:53.653390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.852 [2024-07-14 10:44:53.653420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.852 qpair failed and we were unable to recover it. 00:36:08.852 [2024-07-14 10:44:53.653537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.852 [2024-07-14 10:44:53.653567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.852 qpair failed and we were unable to recover it. 00:36:08.852 [2024-07-14 10:44:53.653677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.852 [2024-07-14 10:44:53.653707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.852 qpair failed and we were unable to recover it. 00:36:08.852 [2024-07-14 10:44:53.653914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.852 [2024-07-14 10:44:53.653944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.852 qpair failed and we were unable to recover it. 00:36:08.852 [2024-07-14 10:44:53.654065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.852 [2024-07-14 10:44:53.654096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.852 qpair failed and we were unable to recover it. 00:36:08.852 [2024-07-14 10:44:53.654308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.852 [2024-07-14 10:44:53.654339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.852 qpair failed and we were unable to recover it. 00:36:08.852 [2024-07-14 10:44:53.654446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.852 [2024-07-14 10:44:53.654476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.852 qpair failed and we were unable to recover it. 
00:36:08.852 [2024-07-14 10:44:53.654648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.852 [2024-07-14 10:44:53.654677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.852 qpair failed and we were unable to recover it. 00:36:08.852 [2024-07-14 10:44:53.654902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.852 [2024-07-14 10:44:53.654931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.852 qpair failed and we were unable to recover it. 00:36:08.852 [2024-07-14 10:44:53.655065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.852 [2024-07-14 10:44:53.655098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.852 qpair failed and we were unable to recover it. 00:36:08.852 [2024-07-14 10:44:53.655213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.852 [2024-07-14 10:44:53.655252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.852 qpair failed and we were unable to recover it. 00:36:08.852 [2024-07-14 10:44:53.655370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.852 [2024-07-14 10:44:53.655400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.852 qpair failed and we were unable to recover it. 00:36:08.852 [2024-07-14 10:44:53.655516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.852 [2024-07-14 10:44:53.655545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.852 qpair failed and we were unable to recover it. 00:36:08.852 [2024-07-14 10:44:53.655678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.852 [2024-07-14 10:44:53.655708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.852 qpair failed and we were unable to recover it. 00:36:08.852 [2024-07-14 10:44:53.655906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.852 [2024-07-14 10:44:53.655936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.852 qpair failed and we were unable to recover it. 00:36:08.852 [2024-07-14 10:44:53.656051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.852 [2024-07-14 10:44:53.656081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.852 qpair failed and we were unable to recover it. 00:36:08.852 [2024-07-14 10:44:53.656188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.852 [2024-07-14 10:44:53.656218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.852 qpair failed and we were unable to recover it. 
00:36:08.852 [2024-07-14 10:44:53.656423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.852 [2024-07-14 10:44:53.656454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.852 qpair failed and we were unable to recover it. 00:36:08.852 [2024-07-14 10:44:53.656624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.852 [2024-07-14 10:44:53.656655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.852 qpair failed and we were unable to recover it. 00:36:08.852 [2024-07-14 10:44:53.656827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.852 [2024-07-14 10:44:53.656857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.852 qpair failed and we were unable to recover it. 00:36:08.852 [2024-07-14 10:44:53.656978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.853 [2024-07-14 10:44:53.657007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.853 qpair failed and we were unable to recover it. 00:36:08.853 [2024-07-14 10:44:53.657128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.853 [2024-07-14 10:44:53.657157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.853 qpair failed and we were unable to recover it. 00:36:08.853 [2024-07-14 10:44:53.657305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.853 [2024-07-14 10:44:53.657348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.853 qpair failed and we were unable to recover it. 00:36:08.853 [2024-07-14 10:44:53.657529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.853 [2024-07-14 10:44:53.657566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.853 qpair failed and we were unable to recover it. 00:36:08.853 [2024-07-14 10:44:53.657817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.853 [2024-07-14 10:44:53.657847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.853 qpair failed and we were unable to recover it. 00:36:08.853 [2024-07-14 10:44:53.657967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.853 [2024-07-14 10:44:53.657997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.853 qpair failed and we were unable to recover it. 00:36:08.853 [2024-07-14 10:44:53.658109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.853 [2024-07-14 10:44:53.658138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.853 qpair failed and we were unable to recover it. 
00:36:08.853 [2024-07-14 10:44:53.658255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.853 [2024-07-14 10:44:53.658287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.853 qpair failed and we were unable to recover it. 00:36:08.853 [2024-07-14 10:44:53.658482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.853 [2024-07-14 10:44:53.658512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.853 qpair failed and we were unable to recover it. 00:36:08.853 [2024-07-14 10:44:53.658641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.853 [2024-07-14 10:44:53.658671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.853 qpair failed and we were unable to recover it. 00:36:08.853 [2024-07-14 10:44:53.658793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.853 [2024-07-14 10:44:53.658823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.853 qpair failed and we were unable to recover it. 00:36:08.853 [2024-07-14 10:44:53.658948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.853 [2024-07-14 10:44:53.658978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.853 qpair failed and we were unable to recover it. 00:36:08.853 [2024-07-14 10:44:53.659081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.853 [2024-07-14 10:44:53.659111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.853 qpair failed and we were unable to recover it. 00:36:08.853 [2024-07-14 10:44:53.659310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.853 [2024-07-14 10:44:53.659341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.853 qpair failed and we were unable to recover it. 00:36:08.853 [2024-07-14 10:44:53.659526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.853 [2024-07-14 10:44:53.659556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.853 qpair failed and we were unable to recover it. 00:36:08.853 [2024-07-14 10:44:53.659666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.853 [2024-07-14 10:44:53.659696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.853 qpair failed and we were unable to recover it. 00:36:08.853 [2024-07-14 10:44:53.659890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.853 [2024-07-14 10:44:53.659922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.853 qpair failed and we were unable to recover it. 
00:36:08.853 [2024-07-14 10:44:53.660053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.853 [2024-07-14 10:44:53.660084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.853 qpair failed and we were unable to recover it. 00:36:08.853 [2024-07-14 10:44:53.660192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.853 [2024-07-14 10:44:53.660223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.853 qpair failed and we were unable to recover it. 00:36:08.853 [2024-07-14 10:44:53.660341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.853 [2024-07-14 10:44:53.660371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.853 qpair failed and we were unable to recover it. 00:36:08.853 [2024-07-14 10:44:53.660501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.853 [2024-07-14 10:44:53.660532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.853 qpair failed and we were unable to recover it. 00:36:08.853 [2024-07-14 10:44:53.660806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.853 [2024-07-14 10:44:53.660836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.853 qpair failed and we were unable to recover it. 00:36:08.853 [2024-07-14 10:44:53.660954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.853 [2024-07-14 10:44:53.660985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.853 qpair failed and we were unable to recover it. 00:36:08.853 [2024-07-14 10:44:53.661097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.853 [2024-07-14 10:44:53.661127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.853 qpair failed and we were unable to recover it. 00:36:08.853 [2024-07-14 10:44:53.661327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.853 [2024-07-14 10:44:53.661358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.853 qpair failed and we were unable to recover it. 00:36:08.853 [2024-07-14 10:44:53.661497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.853 [2024-07-14 10:44:53.661527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.853 qpair failed and we were unable to recover it. 00:36:08.853 [2024-07-14 10:44:53.661645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.853 [2024-07-14 10:44:53.661675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.853 qpair failed and we were unable to recover it. 
00:36:08.853 [2024-07-14 10:44:53.661873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.853 [2024-07-14 10:44:53.661904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.853 qpair failed and we were unable to recover it. 00:36:08.853 [2024-07-14 10:44:53.662086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.853 [2024-07-14 10:44:53.662117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.853 qpair failed and we were unable to recover it. 00:36:08.853 [2024-07-14 10:44:53.662329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.853 [2024-07-14 10:44:53.662366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.853 qpair failed and we were unable to recover it. 00:36:08.853 [2024-07-14 10:44:53.662533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.853 [2024-07-14 10:44:53.662563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.853 qpair failed and we were unable to recover it. 00:36:08.853 [2024-07-14 10:44:53.662681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.853 [2024-07-14 10:44:53.662711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.853 qpair failed and we were unable to recover it. 00:36:08.853 [2024-07-14 10:44:53.662916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.853 [2024-07-14 10:44:53.662947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.853 qpair failed and we were unable to recover it. 00:36:08.853 [2024-07-14 10:44:53.663078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.853 [2024-07-14 10:44:53.663108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.853 qpair failed and we were unable to recover it. 00:36:08.853 [2024-07-14 10:44:53.663219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.853 [2024-07-14 10:44:53.663258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.853 qpair failed and we were unable to recover it. 00:36:08.853 [2024-07-14 10:44:53.663380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.853 [2024-07-14 10:44:53.663410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.853 qpair failed and we were unable to recover it. 00:36:08.853 [2024-07-14 10:44:53.663681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.853 [2024-07-14 10:44:53.663714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.853 qpair failed and we were unable to recover it. 
00:36:08.853 [2024-07-14 10:44:53.663847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.853 [2024-07-14 10:44:53.663877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.853 qpair failed and we were unable to recover it. 00:36:08.853 [2024-07-14 10:44:53.663999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.853 [2024-07-14 10:44:53.664029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.853 qpair failed and we were unable to recover it. 00:36:08.853 [2024-07-14 10:44:53.664143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.853 [2024-07-14 10:44:53.664173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.853 qpair failed and we were unable to recover it. 00:36:08.853 [2024-07-14 10:44:53.664379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.853 [2024-07-14 10:44:53.664414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.853 qpair failed and we were unable to recover it. 00:36:08.853 [2024-07-14 10:44:53.664534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.853 [2024-07-14 10:44:53.664565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.853 qpair failed and we were unable to recover it. 00:36:08.854 [2024-07-14 10:44:53.664685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.854 [2024-07-14 10:44:53.664715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.854 qpair failed and we were unable to recover it. 00:36:08.854 [2024-07-14 10:44:53.664858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.854 [2024-07-14 10:44:53.664889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.854 qpair failed and we were unable to recover it. 00:36:08.854 [2024-07-14 10:44:53.665011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.854 [2024-07-14 10:44:53.665041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.854 qpair failed and we were unable to recover it. 00:36:08.854 [2024-07-14 10:44:53.665167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.854 [2024-07-14 10:44:53.665197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.854 qpair failed and we were unable to recover it. 00:36:08.854 [2024-07-14 10:44:53.665335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.854 [2024-07-14 10:44:53.665367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.854 qpair failed and we were unable to recover it. 
00:36:08.854 [2024-07-14 10:44:53.665569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.854 [2024-07-14 10:44:53.665600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.854 qpair failed and we were unable to recover it. 00:36:08.854 [2024-07-14 10:44:53.665718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.854 [2024-07-14 10:44:53.665748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.854 qpair failed and we were unable to recover it. 00:36:08.854 [2024-07-14 10:44:53.665870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.854 [2024-07-14 10:44:53.665900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.854 qpair failed and we were unable to recover it. 00:36:08.854 [2024-07-14 10:44:53.666039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.854 [2024-07-14 10:44:53.666068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.854 qpair failed and we were unable to recover it. 00:36:08.854 [2024-07-14 10:44:53.666174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.854 [2024-07-14 10:44:53.666204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.854 qpair failed and we were unable to recover it. 00:36:08.854 [2024-07-14 10:44:53.666334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.854 [2024-07-14 10:44:53.666366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.854 qpair failed and we were unable to recover it. 00:36:08.854 [2024-07-14 10:44:53.666490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.854 [2024-07-14 10:44:53.666519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.854 qpair failed and we were unable to recover it. 00:36:08.854 [2024-07-14 10:44:53.666636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.854 [2024-07-14 10:44:53.666666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.854 qpair failed and we were unable to recover it. 00:36:08.854 [2024-07-14 10:44:53.666862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.854 [2024-07-14 10:44:53.666892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.854 qpair failed and we were unable to recover it. 00:36:08.854 [2024-07-14 10:44:53.667063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.854 [2024-07-14 10:44:53.667099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.854 qpair failed and we were unable to recover it. 
00:36:08.854 [2024-07-14 10:44:53.667214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.854 [2024-07-14 10:44:53.667256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.854 qpair failed and we were unable to recover it. 00:36:08.854 [2024-07-14 10:44:53.667374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.854 [2024-07-14 10:44:53.667404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.854 qpair failed and we were unable to recover it. 00:36:08.854 [2024-07-14 10:44:53.667533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.854 [2024-07-14 10:44:53.667562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.854 qpair failed and we were unable to recover it. 00:36:08.854 [2024-07-14 10:44:53.667702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.854 [2024-07-14 10:44:53.667732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.854 qpair failed and we were unable to recover it. 00:36:08.854 [2024-07-14 10:44:53.667871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.854 [2024-07-14 10:44:53.667901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.854 qpair failed and we were unable to recover it. 00:36:08.854 [2024-07-14 10:44:53.668082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.854 [2024-07-14 10:44:53.668112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.854 qpair failed and we were unable to recover it. 00:36:08.854 [2024-07-14 10:44:53.668298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.854 [2024-07-14 10:44:53.668329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.854 qpair failed and we were unable to recover it. 00:36:08.854 [2024-07-14 10:44:53.668525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.854 [2024-07-14 10:44:53.668556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.854 qpair failed and we were unable to recover it. 00:36:08.854 [2024-07-14 10:44:53.668730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.854 [2024-07-14 10:44:53.668760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.854 qpair failed and we were unable to recover it. 00:36:08.854 [2024-07-14 10:44:53.668870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.854 [2024-07-14 10:44:53.668900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.854 qpair failed and we were unable to recover it. 
00:36:08.854 [2024-07-14 10:44:53.669148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.854 [2024-07-14 10:44:53.669177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.854 qpair failed and we were unable to recover it. 00:36:08.854 [2024-07-14 10:44:53.669367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.854 [2024-07-14 10:44:53.669398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.854 qpair failed and we were unable to recover it. 00:36:08.854 [2024-07-14 10:44:53.669583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.854 [2024-07-14 10:44:53.669613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.854 qpair failed and we were unable to recover it. 00:36:08.854 [2024-07-14 10:44:53.669875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.854 [2024-07-14 10:44:53.669905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.854 qpair failed and we were unable to recover it. 00:36:08.854 [2024-07-14 10:44:53.670083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.854 [2024-07-14 10:44:53.670113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.854 qpair failed and we were unable to recover it. 00:36:08.854 [2024-07-14 10:44:53.670236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.854 [2024-07-14 10:44:53.670267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.854 qpair failed and we were unable to recover it. 00:36:08.854 [2024-07-14 10:44:53.670377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.854 [2024-07-14 10:44:53.670406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.854 qpair failed and we were unable to recover it. 00:36:08.854 [2024-07-14 10:44:53.670531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.854 [2024-07-14 10:44:53.670561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.854 qpair failed and we were unable to recover it. 00:36:08.854 [2024-07-14 10:44:53.670823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.854 [2024-07-14 10:44:53.670853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.854 qpair failed and we were unable to recover it. 00:36:08.854 [2024-07-14 10:44:53.670976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.854 [2024-07-14 10:44:53.671006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.854 qpair failed and we were unable to recover it. 
00:36:08.854 [2024-07-14 10:44:53.671203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.854 [2024-07-14 10:44:53.671250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.854 qpair failed and we were unable to recover it. 00:36:08.854 [2024-07-14 10:44:53.671374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.855 [2024-07-14 10:44:53.671403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.855 qpair failed and we were unable to recover it. 00:36:08.855 [2024-07-14 10:44:53.671669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.855 [2024-07-14 10:44:53.671699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.855 qpair failed and we were unable to recover it. 00:36:08.855 [2024-07-14 10:44:53.671821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.855 [2024-07-14 10:44:53.671851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.855 qpair failed and we were unable to recover it. 00:36:08.855 [2024-07-14 10:44:53.671988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.855 [2024-07-14 10:44:53.672017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.855 qpair failed and we were unable to recover it. 00:36:08.855 [2024-07-14 10:44:53.672148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.855 [2024-07-14 10:44:53.672178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.855 qpair failed and we were unable to recover it. 00:36:08.855 [2024-07-14 10:44:53.672307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.855 [2024-07-14 10:44:53.672339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.855 qpair failed and we were unable to recover it. 00:36:08.855 [2024-07-14 10:44:53.672590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.855 [2024-07-14 10:44:53.672621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.855 qpair failed and we were unable to recover it. 00:36:08.855 [2024-07-14 10:44:53.672747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.855 [2024-07-14 10:44:53.672777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.855 qpair failed and we were unable to recover it. 00:36:08.855 [2024-07-14 10:44:53.672889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.855 [2024-07-14 10:44:53.672919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.855 qpair failed and we were unable to recover it. 
00:36:08.855 [2024-07-14 10:44:53.673112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.855 [2024-07-14 10:44:53.673142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.855 qpair failed and we were unable to recover it. 00:36:08.855 [2024-07-14 10:44:53.673266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.855 [2024-07-14 10:44:53.673298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.855 qpair failed and we were unable to recover it. 00:36:08.855 [2024-07-14 10:44:53.673481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.855 [2024-07-14 10:44:53.673511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.855 qpair failed and we were unable to recover it. 00:36:08.855 [2024-07-14 10:44:53.673635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.855 [2024-07-14 10:44:53.673664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.855 qpair failed and we were unable to recover it. 00:36:08.855 [2024-07-14 10:44:53.673855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.855 [2024-07-14 10:44:53.673885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.855 qpair failed and we were unable to recover it. 00:36:08.855 [2024-07-14 10:44:53.674062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.855 [2024-07-14 10:44:53.674091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.855 qpair failed and we were unable to recover it. 00:36:08.855 [2024-07-14 10:44:53.674291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.855 [2024-07-14 10:44:53.674323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.855 qpair failed and we were unable to recover it. 00:36:08.855 [2024-07-14 10:44:53.674512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.855 [2024-07-14 10:44:53.674542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.855 qpair failed and we were unable to recover it. 00:36:08.855 [2024-07-14 10:44:53.674661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.855 [2024-07-14 10:44:53.674691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.855 qpair failed and we were unable to recover it. 00:36:08.855 [2024-07-14 10:44:53.674805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.855 [2024-07-14 10:44:53.674835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.855 qpair failed and we were unable to recover it. 
00:36:08.855 [2024-07-14 10:44:53.675007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.855 [2024-07-14 10:44:53.675038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.855 qpair failed and we were unable to recover it. 00:36:08.855 [2024-07-14 10:44:53.675160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.855 [2024-07-14 10:44:53.675190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.855 qpair failed and we were unable to recover it. 00:36:08.855 [2024-07-14 10:44:53.675309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.855 [2024-07-14 10:44:53.675339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.855 qpair failed and we were unable to recover it. 00:36:08.855 [2024-07-14 10:44:53.675461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.855 [2024-07-14 10:44:53.675491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.855 qpair failed and we were unable to recover it. 00:36:08.855 [2024-07-14 10:44:53.675676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.855 [2024-07-14 10:44:53.675706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.855 qpair failed and we were unable to recover it. 00:36:08.855 [2024-07-14 10:44:53.675825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.855 [2024-07-14 10:44:53.675855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.855 qpair failed and we were unable to recover it. 00:36:08.855 [2024-07-14 10:44:53.675985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.855 [2024-07-14 10:44:53.676015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.855 qpair failed and we were unable to recover it. 00:36:08.855 [2024-07-14 10:44:53.676235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.855 [2024-07-14 10:44:53.676266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.855 qpair failed and we were unable to recover it. 00:36:08.855 [2024-07-14 10:44:53.676391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.855 [2024-07-14 10:44:53.676421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.855 qpair failed and we were unable to recover it. 00:36:08.855 [2024-07-14 10:44:53.676546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.855 [2024-07-14 10:44:53.676575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.855 qpair failed and we were unable to recover it. 
00:36:08.855 [2024-07-14 10:44:53.676684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.855 [2024-07-14 10:44:53.676714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.855 qpair failed and we were unable to recover it. 00:36:08.855 [2024-07-14 10:44:53.676969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.855 [2024-07-14 10:44:53.676999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.855 qpair failed and we were unable to recover it. 00:36:08.855 [2024-07-14 10:44:53.677172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.855 [2024-07-14 10:44:53.677201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.855 qpair failed and we were unable to recover it. 00:36:08.855 [2024-07-14 10:44:53.677401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.855 [2024-07-14 10:44:53.677431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.855 qpair failed and we were unable to recover it. 00:36:08.855 [2024-07-14 10:44:53.677631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.855 [2024-07-14 10:44:53.677660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.855 qpair failed and we were unable to recover it. 00:36:08.855 [2024-07-14 10:44:53.677849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.855 [2024-07-14 10:44:53.677879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.855 qpair failed and we were unable to recover it. 00:36:08.855 [2024-07-14 10:44:53.677991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.855 [2024-07-14 10:44:53.678020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.855 qpair failed and we were unable to recover it. 00:36:08.855 [2024-07-14 10:44:53.678212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.855 [2024-07-14 10:44:53.678253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.855 qpair failed and we were unable to recover it. 00:36:08.855 [2024-07-14 10:44:53.678440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.855 [2024-07-14 10:44:53.678470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.855 qpair failed and we were unable to recover it. 00:36:08.855 [2024-07-14 10:44:53.678663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.855 [2024-07-14 10:44:53.678693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.855 qpair failed and we were unable to recover it. 
00:36:08.855 [2024-07-14 10:44:53.678880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.855 [2024-07-14 10:44:53.678909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.855 qpair failed and we were unable to recover it. 00:36:08.855 [2024-07-14 10:44:53.679026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.855 [2024-07-14 10:44:53.679056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.855 qpair failed and we were unable to recover it. 00:36:08.855 [2024-07-14 10:44:53.679163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.856 [2024-07-14 10:44:53.679192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.856 qpair failed and we were unable to recover it. 00:36:08.856 [2024-07-14 10:44:53.679315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.856 [2024-07-14 10:44:53.679347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.856 qpair failed and we were unable to recover it. 00:36:08.856 [2024-07-14 10:44:53.679475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.856 [2024-07-14 10:44:53.679504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.856 qpair failed and we were unable to recover it. 00:36:08.856 [2024-07-14 10:44:53.679681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.856 [2024-07-14 10:44:53.679711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.856 qpair failed and we were unable to recover it. 00:36:08.856 [2024-07-14 10:44:53.679828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.856 [2024-07-14 10:44:53.679857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.856 qpair failed and we were unable to recover it. 00:36:08.856 [2024-07-14 10:44:53.680126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.856 [2024-07-14 10:44:53.680161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.856 qpair failed and we were unable to recover it. 00:36:08.856 [2024-07-14 10:44:53.680290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.856 [2024-07-14 10:44:53.680321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.856 qpair failed and we were unable to recover it. 00:36:08.856 [2024-07-14 10:44:53.680453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.856 [2024-07-14 10:44:53.680483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.856 qpair failed and we were unable to recover it. 
00:36:08.856 [2024-07-14 10:44:53.680606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.856 [2024-07-14 10:44:53.680637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.856 qpair failed and we were unable to recover it. 00:36:08.856 [2024-07-14 10:44:53.680758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.856 [2024-07-14 10:44:53.680787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.856 qpair failed and we were unable to recover it. 00:36:08.856 [2024-07-14 10:44:53.680963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.856 [2024-07-14 10:44:53.680993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.856 qpair failed and we were unable to recover it. 00:36:08.856 [2024-07-14 10:44:53.681178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.856 [2024-07-14 10:44:53.681209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.856 qpair failed and we were unable to recover it. 00:36:08.856 [2024-07-14 10:44:53.681330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.856 [2024-07-14 10:44:53.681360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.856 qpair failed and we were unable to recover it. 00:36:08.856 [2024-07-14 10:44:53.681470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.856 [2024-07-14 10:44:53.681499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.856 qpair failed and we were unable to recover it. 00:36:08.856 [2024-07-14 10:44:53.681632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.856 [2024-07-14 10:44:53.681662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.856 qpair failed and we were unable to recover it. 00:36:08.856 [2024-07-14 10:44:53.681795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.856 [2024-07-14 10:44:53.681825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.856 qpair failed and we were unable to recover it. 00:36:08.856 [2024-07-14 10:44:53.681945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.856 [2024-07-14 10:44:53.681975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.856 qpair failed and we were unable to recover it. 00:36:08.856 [2024-07-14 10:44:53.682162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.856 [2024-07-14 10:44:53.682192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.856 qpair failed and we were unable to recover it. 
00:36:08.856 [2024-07-14 10:44:53.682319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.856 [2024-07-14 10:44:53.682350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.856 qpair failed and we were unable to recover it. 00:36:08.856 [2024-07-14 10:44:53.682561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.856 [2024-07-14 10:44:53.682590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.856 qpair failed and we were unable to recover it. 00:36:08.856 [2024-07-14 10:44:53.682790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.856 [2024-07-14 10:44:53.682820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.856 qpair failed and we were unable to recover it. 00:36:08.856 [2024-07-14 10:44:53.683009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.856 [2024-07-14 10:44:53.683039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.856 qpair failed and we were unable to recover it. 00:36:08.856 [2024-07-14 10:44:53.683158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.856 [2024-07-14 10:44:53.683188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.856 qpair failed and we were unable to recover it. 00:36:08.856 [2024-07-14 10:44:53.683422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.856 [2024-07-14 10:44:53.683453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.856 qpair failed and we were unable to recover it. 00:36:08.856 [2024-07-14 10:44:53.683695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.856 [2024-07-14 10:44:53.683725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.856 qpair failed and we were unable to recover it. 00:36:08.856 [2024-07-14 10:44:53.683854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.856 [2024-07-14 10:44:53.683883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.856 qpair failed and we were unable to recover it. 00:36:08.856 [2024-07-14 10:44:53.684023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.856 [2024-07-14 10:44:53.684052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.856 qpair failed and we were unable to recover it. 00:36:08.856 [2024-07-14 10:44:53.684238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.856 [2024-07-14 10:44:53.684269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.856 qpair failed and we were unable to recover it. 
00:36:08.856 [2024-07-14 10:44:53.684457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.856 [2024-07-14 10:44:53.684488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.856 qpair failed and we were unable to recover it. 00:36:08.856 [2024-07-14 10:44:53.684672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.856 [2024-07-14 10:44:53.684703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.856 qpair failed and we were unable to recover it. 00:36:08.856 [2024-07-14 10:44:53.684809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.856 [2024-07-14 10:44:53.684839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.856 qpair failed and we were unable to recover it. 00:36:08.856 [2024-07-14 10:44:53.684956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.856 [2024-07-14 10:44:53.684986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.856 qpair failed and we were unable to recover it. 00:36:08.856 [2024-07-14 10:44:53.685124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.856 [2024-07-14 10:44:53.685160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.856 qpair failed and we were unable to recover it. 00:36:08.856 [2024-07-14 10:44:53.685274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.856 [2024-07-14 10:44:53.685305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.856 qpair failed and we were unable to recover it. 00:36:08.856 [2024-07-14 10:44:53.685416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.856 [2024-07-14 10:44:53.685446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.856 qpair failed and we were unable to recover it. 00:36:08.856 [2024-07-14 10:44:53.685688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.856 [2024-07-14 10:44:53.685718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.856 qpair failed and we were unable to recover it. 00:36:08.856 [2024-07-14 10:44:53.685932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.856 [2024-07-14 10:44:53.685962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.856 qpair failed and we were unable to recover it. 00:36:08.856 [2024-07-14 10:44:53.686073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.856 [2024-07-14 10:44:53.686104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.856 qpair failed and we were unable to recover it. 
00:36:08.856 [2024-07-14 10:44:53.686222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.856 [2024-07-14 10:44:53.686260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.856 qpair failed and we were unable to recover it. 00:36:08.856 [2024-07-14 10:44:53.686448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.856 [2024-07-14 10:44:53.686478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.856 qpair failed and we were unable to recover it. 00:36:08.856 [2024-07-14 10:44:53.686680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.856 [2024-07-14 10:44:53.686710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.856 qpair failed and we were unable to recover it. 00:36:08.856 [2024-07-14 10:44:53.686904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.856 [2024-07-14 10:44:53.686935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.856 qpair failed and we were unable to recover it. 00:36:08.857 [2024-07-14 10:44:53.687127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.857 [2024-07-14 10:44:53.687156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.857 qpair failed and we were unable to recover it. 00:36:08.857 [2024-07-14 10:44:53.687347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.857 [2024-07-14 10:44:53.687379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.857 qpair failed and we were unable to recover it. 00:36:08.857 [2024-07-14 10:44:53.687498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.857 [2024-07-14 10:44:53.687528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.857 qpair failed and we were unable to recover it. 00:36:08.857 [2024-07-14 10:44:53.687633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.857 [2024-07-14 10:44:53.687662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.857 qpair failed and we were unable to recover it. 00:36:08.857 [2024-07-14 10:44:53.687791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.857 [2024-07-14 10:44:53.687820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.857 qpair failed and we were unable to recover it. 00:36:08.857 [2024-07-14 10:44:53.687926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.857 [2024-07-14 10:44:53.687956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.857 qpair failed and we were unable to recover it. 
00:36:08.857 [2024-07-14 10:44:53.688064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.857 [2024-07-14 10:44:53.688093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.857 qpair failed and we were unable to recover it. 00:36:08.857 [2024-07-14 10:44:53.688386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.857 [2024-07-14 10:44:53.688417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.857 qpair failed and we were unable to recover it. 00:36:08.857 [2024-07-14 10:44:53.688630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.857 [2024-07-14 10:44:53.688660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.857 qpair failed and we were unable to recover it. 00:36:08.857 [2024-07-14 10:44:53.688778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.857 [2024-07-14 10:44:53.688808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.857 qpair failed and we were unable to recover it. 00:36:08.857 [2024-07-14 10:44:53.688994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.857 [2024-07-14 10:44:53.689024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.857 qpair failed and we were unable to recover it. 00:36:08.857 [2024-07-14 10:44:53.689140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.857 [2024-07-14 10:44:53.689171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.857 qpair failed and we were unable to recover it. 00:36:08.857 [2024-07-14 10:44:53.689310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.857 [2024-07-14 10:44:53.689341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.857 qpair failed and we were unable to recover it. 00:36:08.857 [2024-07-14 10:44:53.689534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.857 [2024-07-14 10:44:53.689564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.857 qpair failed and we were unable to recover it. 00:36:08.857 [2024-07-14 10:44:53.689688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.857 [2024-07-14 10:44:53.689718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.857 qpair failed and we were unable to recover it. 00:36:08.857 [2024-07-14 10:44:53.689835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.857 [2024-07-14 10:44:53.689865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.857 qpair failed and we were unable to recover it. 
00:36:08.857 [2024-07-14 10:44:53.690147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.857 [2024-07-14 10:44:53.690177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.857 qpair failed and we were unable to recover it. 00:36:08.857 [2024-07-14 10:44:53.690317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.857 [2024-07-14 10:44:53.690359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.857 qpair failed and we were unable to recover it. 00:36:08.857 [2024-07-14 10:44:53.690468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.857 [2024-07-14 10:44:53.690498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.857 qpair failed and we were unable to recover it. 00:36:08.857 [2024-07-14 10:44:53.690615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.857 [2024-07-14 10:44:53.690645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.857 qpair failed and we were unable to recover it. 00:36:08.857 [2024-07-14 10:44:53.690843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.857 [2024-07-14 10:44:53.690873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.857 qpair failed and we were unable to recover it. 00:36:08.857 [2024-07-14 10:44:53.691129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.857 [2024-07-14 10:44:53.691159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.857 qpair failed and we were unable to recover it. 00:36:08.857 [2024-07-14 10:44:53.691280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.857 [2024-07-14 10:44:53.691311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.857 qpair failed and we were unable to recover it. 00:36:08.857 [2024-07-14 10:44:53.691493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.857 [2024-07-14 10:44:53.691522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.857 qpair failed and we were unable to recover it. 00:36:08.857 [2024-07-14 10:44:53.691643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.857 [2024-07-14 10:44:53.691673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.857 qpair failed and we were unable to recover it. 00:36:08.857 [2024-07-14 10:44:53.691913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.857 [2024-07-14 10:44:53.691943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.857 qpair failed and we were unable to recover it. 
00:36:08.857 [2024-07-14 10:44:53.692212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.857 [2024-07-14 10:44:53.692250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.857 qpair failed and we were unable to recover it. 00:36:08.857 [2024-07-14 10:44:53.692389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.857 [2024-07-14 10:44:53.692419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.857 qpair failed and we were unable to recover it. 00:36:08.857 [2024-07-14 10:44:53.692562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.857 [2024-07-14 10:44:53.692592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.857 qpair failed and we were unable to recover it. 00:36:08.857 [2024-07-14 10:44:53.692784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.857 [2024-07-14 10:44:53.692813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.857 qpair failed and we were unable to recover it. 00:36:08.857 [2024-07-14 10:44:53.693071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.857 [2024-07-14 10:44:53.693101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.857 qpair failed and we were unable to recover it. 00:36:08.857 [2024-07-14 10:44:53.693246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.857 [2024-07-14 10:44:53.693289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.857 qpair failed and we were unable to recover it. 00:36:08.857 [2024-07-14 10:44:53.693520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.857 [2024-07-14 10:44:53.693551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.857 qpair failed and we were unable to recover it. 00:36:08.857 [2024-07-14 10:44:53.693728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.857 [2024-07-14 10:44:53.693758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.857 qpair failed and we were unable to recover it. 00:36:08.857 [2024-07-14 10:44:53.693857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.857 [2024-07-14 10:44:53.693887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.857 qpair failed and we were unable to recover it. 00:36:08.857 [2024-07-14 10:44:53.694071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.857 [2024-07-14 10:44:53.694101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.857 qpair failed and we were unable to recover it. 
00:36:08.857 [2024-07-14 10:44:53.694335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.857 [2024-07-14 10:44:53.694368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.857 qpair failed and we were unable to recover it. 00:36:08.857 [2024-07-14 10:44:53.694560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.857 [2024-07-14 10:44:53.694590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.857 qpair failed and we were unable to recover it. 00:36:08.857 [2024-07-14 10:44:53.694795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.857 [2024-07-14 10:44:53.694825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.857 qpair failed and we were unable to recover it. 00:36:08.857 [2024-07-14 10:44:53.695031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.857 [2024-07-14 10:44:53.695062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.857 qpair failed and we were unable to recover it. 00:36:08.857 [2024-07-14 10:44:53.695265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.857 [2024-07-14 10:44:53.695297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.857 qpair failed and we were unable to recover it. 00:36:08.857 [2024-07-14 10:44:53.695473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.858 [2024-07-14 10:44:53.695504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.858 qpair failed and we were unable to recover it. 00:36:08.858 [2024-07-14 10:44:53.695712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.858 [2024-07-14 10:44:53.695742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.858 qpair failed and we were unable to recover it. 00:36:08.858 [2024-07-14 10:44:53.695932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.858 [2024-07-14 10:44:53.695961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.858 qpair failed and we were unable to recover it. 00:36:08.858 [2024-07-14 10:44:53.696152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.858 [2024-07-14 10:44:53.696190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.858 qpair failed and we were unable to recover it. 00:36:08.858 [2024-07-14 10:44:53.696340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.858 [2024-07-14 10:44:53.696385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.858 qpair failed and we were unable to recover it. 
00:36:08.858 [2024-07-14 10:44:53.696633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.858 [2024-07-14 10:44:53.696663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.858 qpair failed and we were unable to recover it. 00:36:08.858 [2024-07-14 10:44:53.696773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.858 [2024-07-14 10:44:53.696803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.858 qpair failed and we were unable to recover it. 00:36:08.858 [2024-07-14 10:44:53.696937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.858 [2024-07-14 10:44:53.696967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.858 qpair failed and we were unable to recover it. 00:36:08.858 [2024-07-14 10:44:53.697152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.858 [2024-07-14 10:44:53.697182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.858 qpair failed and we were unable to recover it. 00:36:08.858 [2024-07-14 10:44:53.697374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.858 [2024-07-14 10:44:53.697406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.858 qpair failed and we were unable to recover it. 00:36:08.858 [2024-07-14 10:44:53.697533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.858 [2024-07-14 10:44:53.697564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.858 qpair failed and we were unable to recover it. 00:36:08.858 [2024-07-14 10:44:53.697705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.858 [2024-07-14 10:44:53.697734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.858 qpair failed and we were unable to recover it. 00:36:08.858 [2024-07-14 10:44:53.697866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.858 [2024-07-14 10:44:53.697896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.858 qpair failed and we were unable to recover it. 00:36:08.858 [2024-07-14 10:44:53.698003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.858 [2024-07-14 10:44:53.698032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.858 qpair failed and we were unable to recover it. 00:36:08.858 [2024-07-14 10:44:53.698234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.858 [2024-07-14 10:44:53.698264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.858 qpair failed and we were unable to recover it. 
00:36:08.858 [2024-07-14 10:44:53.698455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.858 [2024-07-14 10:44:53.698485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.858 qpair failed and we were unable to recover it. 00:36:08.858 [2024-07-14 10:44:53.698753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.858 [2024-07-14 10:44:53.698783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.858 qpair failed and we were unable to recover it. 00:36:08.858 [2024-07-14 10:44:53.699006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.858 [2024-07-14 10:44:53.699037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.858 qpair failed and we were unable to recover it. 00:36:08.858 [2024-07-14 10:44:53.699170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.858 [2024-07-14 10:44:53.699199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.858 qpair failed and we were unable to recover it. 00:36:08.858 [2024-07-14 10:44:53.699332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.858 [2024-07-14 10:44:53.699369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.858 qpair failed and we were unable to recover it. 00:36:08.858 [2024-07-14 10:44:53.699555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.858 [2024-07-14 10:44:53.699585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.858 qpair failed and we were unable to recover it. 00:36:08.858 [2024-07-14 10:44:53.699764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.858 [2024-07-14 10:44:53.699794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.858 qpair failed and we were unable to recover it. 00:36:08.858 [2024-07-14 10:44:53.699982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.858 [2024-07-14 10:44:53.700012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.858 qpair failed and we were unable to recover it. 00:36:08.858 [2024-07-14 10:44:53.700189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.858 [2024-07-14 10:44:53.700218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.858 qpair failed and we were unable to recover it. 00:36:08.858 [2024-07-14 10:44:53.700414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.858 [2024-07-14 10:44:53.700444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.858 qpair failed and we were unable to recover it. 
00:36:08.858 [2024-07-14 10:44:53.700572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.858 [2024-07-14 10:44:53.700602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.858 qpair failed and we were unable to recover it. 00:36:08.858 [2024-07-14 10:44:53.700787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.858 [2024-07-14 10:44:53.700817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.858 qpair failed and we were unable to recover it. 00:36:08.858 [2024-07-14 10:44:53.701024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.858 [2024-07-14 10:44:53.701053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.858 qpair failed and we were unable to recover it. 00:36:08.858 [2024-07-14 10:44:53.701176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.858 [2024-07-14 10:44:53.701206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.858 qpair failed and we were unable to recover it. 00:36:08.858 [2024-07-14 10:44:53.701359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.858 [2024-07-14 10:44:53.701390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.858 qpair failed and we were unable to recover it. 00:36:08.858 [2024-07-14 10:44:53.701579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.858 [2024-07-14 10:44:53.701610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.858 qpair failed and we were unable to recover it. 00:36:08.858 [2024-07-14 10:44:53.701746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.858 [2024-07-14 10:44:53.701776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.858 qpair failed and we were unable to recover it. 00:36:08.858 [2024-07-14 10:44:53.701978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.858 [2024-07-14 10:44:53.702007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.858 qpair failed and we were unable to recover it. 00:36:08.858 [2024-07-14 10:44:53.702135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.858 [2024-07-14 10:44:53.702164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.858 qpair failed and we were unable to recover it. 00:36:08.858 [2024-07-14 10:44:53.702410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.858 [2024-07-14 10:44:53.702441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.858 qpair failed and we were unable to recover it. 
00:36:08.858 [2024-07-14 10:44:53.702627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.858 [2024-07-14 10:44:53.702656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.858 qpair failed and we were unable to recover it. 00:36:08.858 [2024-07-14 10:44:53.702791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.859 [2024-07-14 10:44:53.702820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.859 qpair failed and we were unable to recover it. 00:36:08.859 [2024-07-14 10:44:53.703011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.859 [2024-07-14 10:44:53.703041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.859 qpair failed and we were unable to recover it. 00:36:08.859 [2024-07-14 10:44:53.703257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.859 [2024-07-14 10:44:53.703289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.859 qpair failed and we were unable to recover it. 00:36:08.859 [2024-07-14 10:44:53.703403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.859 [2024-07-14 10:44:53.703432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.859 qpair failed and we were unable to recover it. 00:36:08.859 [2024-07-14 10:44:53.703554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.859 [2024-07-14 10:44:53.703585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.859 qpair failed and we were unable to recover it. 00:36:08.859 [2024-07-14 10:44:53.703771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.859 [2024-07-14 10:44:53.703801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.859 qpair failed and we were unable to recover it. 00:36:08.859 [2024-07-14 10:44:53.703924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.859 [2024-07-14 10:44:53.703953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.859 qpair failed and we were unable to recover it. 00:36:08.859 [2024-07-14 10:44:53.704075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.859 [2024-07-14 10:44:53.704111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.859 qpair failed and we were unable to recover it. 00:36:08.859 [2024-07-14 10:44:53.704318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.859 [2024-07-14 10:44:53.704349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.859 qpair failed and we were unable to recover it. 
00:36:08.859 [2024-07-14 10:44:53.704455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.859 [2024-07-14 10:44:53.704485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.859 qpair failed and we were unable to recover it. 00:36:08.859 [2024-07-14 10:44:53.704603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.859 [2024-07-14 10:44:53.704632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.859 qpair failed and we were unable to recover it. 00:36:08.859 [2024-07-14 10:44:53.704920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.859 [2024-07-14 10:44:53.704949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.859 qpair failed and we were unable to recover it. 00:36:08.859 [2024-07-14 10:44:53.705080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.859 [2024-07-14 10:44:53.705109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.859 qpair failed and we were unable to recover it. 00:36:08.859 [2024-07-14 10:44:53.705241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.859 [2024-07-14 10:44:53.705271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.859 qpair failed and we were unable to recover it. 00:36:08.859 [2024-07-14 10:44:53.705379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.859 [2024-07-14 10:44:53.705409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.859 qpair failed and we were unable to recover it. 00:36:08.859 [2024-07-14 10:44:53.705583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.859 [2024-07-14 10:44:53.705613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.859 qpair failed and we were unable to recover it. 00:36:08.859 [2024-07-14 10:44:53.705732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.859 [2024-07-14 10:44:53.705762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.859 qpair failed and we were unable to recover it. 00:36:08.859 [2024-07-14 10:44:53.705879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.859 [2024-07-14 10:44:53.705909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.859 qpair failed and we were unable to recover it. 00:36:08.859 [2024-07-14 10:44:53.706033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.859 [2024-07-14 10:44:53.706062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.859 qpair failed and we were unable to recover it. 
00:36:08.859 [2024-07-14 10:44:53.706306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.859 [2024-07-14 10:44:53.706336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.859 qpair failed and we were unable to recover it. 00:36:08.859 [2024-07-14 10:44:53.706451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.859 [2024-07-14 10:44:53.706481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.859 qpair failed and we were unable to recover it. 00:36:08.859 [2024-07-14 10:44:53.706617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.859 [2024-07-14 10:44:53.706647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.859 qpair failed and we were unable to recover it. 00:36:08.859 [2024-07-14 10:44:53.706761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.859 [2024-07-14 10:44:53.706791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.859 qpair failed and we were unable to recover it. 00:36:08.859 [2024-07-14 10:44:53.707042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.859 [2024-07-14 10:44:53.707072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.859 qpair failed and we were unable to recover it. 00:36:08.859 [2024-07-14 10:44:53.707280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.859 [2024-07-14 10:44:53.707311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.859 qpair failed and we were unable to recover it. 00:36:08.859 [2024-07-14 10:44:53.707490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.859 [2024-07-14 10:44:53.707520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.859 qpair failed and we were unable to recover it. 00:36:08.859 [2024-07-14 10:44:53.707648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.859 [2024-07-14 10:44:53.707678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.859 qpair failed and we were unable to recover it. 00:36:08.859 [2024-07-14 10:44:53.707871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.859 [2024-07-14 10:44:53.707900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.859 qpair failed and we were unable to recover it. 00:36:08.859 [2024-07-14 10:44:53.708040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.859 [2024-07-14 10:44:53.708069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.859 qpair failed and we were unable to recover it. 
00:36:08.859 [2024-07-14 10:44:53.708185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.859 [2024-07-14 10:44:53.708216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.859 qpair failed and we were unable to recover it. 00:36:08.859 [2024-07-14 10:44:53.708348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.859 [2024-07-14 10:44:53.708378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.859 qpair failed and we were unable to recover it. 00:36:08.859 [2024-07-14 10:44:53.708496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.859 [2024-07-14 10:44:53.708527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.859 qpair failed and we were unable to recover it. 00:36:08.859 [2024-07-14 10:44:53.708632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.859 [2024-07-14 10:44:53.708662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.859 qpair failed and we were unable to recover it. 00:36:08.859 [2024-07-14 10:44:53.708798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.859 [2024-07-14 10:44:53.708829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.859 qpair failed and we were unable to recover it. 00:36:08.859 [2024-07-14 10:44:53.708963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.859 [2024-07-14 10:44:53.708997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.859 qpair failed and we were unable to recover it. 00:36:08.859 [2024-07-14 10:44:53.709137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.859 [2024-07-14 10:44:53.709166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.859 qpair failed and we were unable to recover it. 00:36:08.859 [2024-07-14 10:44:53.709311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.859 [2024-07-14 10:44:53.709343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.859 qpair failed and we were unable to recover it. 00:36:08.859 [2024-07-14 10:44:53.709515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.859 [2024-07-14 10:44:53.709544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.859 qpair failed and we were unable to recover it. 00:36:08.859 [2024-07-14 10:44:53.709741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.859 [2024-07-14 10:44:53.709771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.859 qpair failed and we were unable to recover it. 
00:36:08.859 [2024-07-14 10:44:53.709953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.859 [2024-07-14 10:44:53.709983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.859 qpair failed and we were unable to recover it. 00:36:08.859 [2024-07-14 10:44:53.710120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.859 [2024-07-14 10:44:53.710149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.859 qpair failed and we were unable to recover it. 00:36:08.860 [2024-07-14 10:44:53.710290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.860 [2024-07-14 10:44:53.710320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.860 qpair failed and we were unable to recover it. 00:36:08.860 [2024-07-14 10:44:53.710440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.860 [2024-07-14 10:44:53.710469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.860 qpair failed and we were unable to recover it. 00:36:08.860 [2024-07-14 10:44:53.710582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.860 [2024-07-14 10:44:53.710612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.860 qpair failed and we were unable to recover it. 00:36:08.860 [2024-07-14 10:44:53.710789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.860 [2024-07-14 10:44:53.710819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.860 qpair failed and we were unable to recover it. 00:36:08.860 [2024-07-14 10:44:53.711019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.860 [2024-07-14 10:44:53.711050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.860 qpair failed and we were unable to recover it. 00:36:08.860 [2024-07-14 10:44:53.711181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.860 [2024-07-14 10:44:53.711211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.860 qpair failed and we were unable to recover it. 00:36:08.860 [2024-07-14 10:44:53.711403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.860 [2024-07-14 10:44:53.711438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.860 qpair failed and we were unable to recover it. 00:36:08.860 [2024-07-14 10:44:53.711622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.860 [2024-07-14 10:44:53.711652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.860 qpair failed and we were unable to recover it. 
00:36:08.860 [2024-07-14 10:44:53.711784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.860 [2024-07-14 10:44:53.711815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.860 qpair failed and we were unable to recover it. 00:36:08.860 [2024-07-14 10:44:53.711921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.860 [2024-07-14 10:44:53.711950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.860 qpair failed and we were unable to recover it. 00:36:08.860 [2024-07-14 10:44:53.712064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.860 [2024-07-14 10:44:53.712094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.860 qpair failed and we were unable to recover it. 00:36:08.860 [2024-07-14 10:44:53.712204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.860 [2024-07-14 10:44:53.712244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.860 qpair failed and we were unable to recover it. 00:36:08.860 [2024-07-14 10:44:53.712457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.860 [2024-07-14 10:44:53.712487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.860 qpair failed and we were unable to recover it. 00:36:08.860 [2024-07-14 10:44:53.712610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.860 [2024-07-14 10:44:53.712640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.860 qpair failed and we were unable to recover it. 00:36:08.860 [2024-07-14 10:44:53.712767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.860 [2024-07-14 10:44:53.712796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.860 qpair failed and we were unable to recover it. 00:36:08.860 [2024-07-14 10:44:53.712918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.860 [2024-07-14 10:44:53.712948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.860 qpair failed and we were unable to recover it. 00:36:08.860 [2024-07-14 10:44:53.713186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.860 [2024-07-14 10:44:53.713216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.860 qpair failed and we were unable to recover it. 00:36:08.860 [2024-07-14 10:44:53.713354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.860 [2024-07-14 10:44:53.713384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.860 qpair failed and we were unable to recover it. 
00:36:08.860 [2024-07-14 10:44:53.713561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.860 [2024-07-14 10:44:53.713591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.860 qpair failed and we were unable to recover it. 00:36:08.860 [2024-07-14 10:44:53.713782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.860 [2024-07-14 10:44:53.713813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.860 qpair failed and we were unable to recover it. 00:36:08.860 [2024-07-14 10:44:53.714063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.860 [2024-07-14 10:44:53.714093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.860 qpair failed and we were unable to recover it. 00:36:08.860 [2024-07-14 10:44:53.714278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.860 [2024-07-14 10:44:53.714309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.860 qpair failed and we were unable to recover it. 00:36:08.860 [2024-07-14 10:44:53.714504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.860 [2024-07-14 10:44:53.714534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.860 qpair failed and we were unable to recover it. 00:36:08.860 [2024-07-14 10:44:53.714709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.860 [2024-07-14 10:44:53.714739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.860 qpair failed and we were unable to recover it. 00:36:08.860 [2024-07-14 10:44:53.714862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.860 [2024-07-14 10:44:53.714891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.860 qpair failed and we were unable to recover it. 00:36:08.860 [2024-07-14 10:44:53.715095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.860 [2024-07-14 10:44:53.715125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.860 qpair failed and we were unable to recover it. 00:36:08.860 [2024-07-14 10:44:53.715256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.860 [2024-07-14 10:44:53.715288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.860 qpair failed and we were unable to recover it. 00:36:08.860 [2024-07-14 10:44:53.715426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.860 [2024-07-14 10:44:53.715456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.860 qpair failed and we were unable to recover it. 
00:36:08.860 [2024-07-14 10:44:53.715593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.860 [2024-07-14 10:44:53.715624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.860 qpair failed and we were unable to recover it. 00:36:08.860 [2024-07-14 10:44:53.715801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.860 [2024-07-14 10:44:53.715832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.860 qpair failed and we were unable to recover it. 00:36:08.860 [2024-07-14 10:44:53.715956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.860 [2024-07-14 10:44:53.715988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.860 qpair failed and we were unable to recover it. 00:36:08.860 [2024-07-14 10:44:53.716186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.860 [2024-07-14 10:44:53.716217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.860 qpair failed and we were unable to recover it. 00:36:08.860 [2024-07-14 10:44:53.716342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.860 [2024-07-14 10:44:53.716373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.860 qpair failed and we were unable to recover it. 00:36:08.860 [2024-07-14 10:44:53.716497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.860 [2024-07-14 10:44:53.716533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.860 qpair failed and we were unable to recover it. 00:36:08.860 [2024-07-14 10:44:53.716726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.860 [2024-07-14 10:44:53.716756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.860 qpair failed and we were unable to recover it. 00:36:08.860 [2024-07-14 10:44:53.716871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.860 [2024-07-14 10:44:53.716901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.860 qpair failed and we were unable to recover it. 00:36:08.860 [2024-07-14 10:44:53.717011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.860 [2024-07-14 10:44:53.717041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.860 qpair failed and we were unable to recover it. 00:36:08.860 [2024-07-14 10:44:53.717244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.860 [2024-07-14 10:44:53.717275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.860 qpair failed and we were unable to recover it. 
00:36:08.860 [2024-07-14 10:44:53.717391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.860 [2024-07-14 10:44:53.717420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.860 qpair failed and we were unable to recover it. 00:36:08.860 [2024-07-14 10:44:53.717545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.860 [2024-07-14 10:44:53.717574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.860 qpair failed and we were unable to recover it. 00:36:08.860 [2024-07-14 10:44:53.717751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.860 [2024-07-14 10:44:53.717782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.860 qpair failed and we were unable to recover it. 00:36:08.861 [2024-07-14 10:44:53.717974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.861 [2024-07-14 10:44:53.718003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.861 qpair failed and we were unable to recover it. 00:36:08.861 [2024-07-14 10:44:53.718123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.861 [2024-07-14 10:44:53.718153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.861 qpair failed and we were unable to recover it. 00:36:08.861 [2024-07-14 10:44:53.718268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.861 [2024-07-14 10:44:53.718300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.861 qpair failed and we were unable to recover it. 00:36:08.861 [2024-07-14 10:44:53.718419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.861 [2024-07-14 10:44:53.718449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.861 qpair failed and we were unable to recover it. 00:36:08.861 [2024-07-14 10:44:53.718587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.861 [2024-07-14 10:44:53.718617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.861 qpair failed and we were unable to recover it. 00:36:08.861 [2024-07-14 10:44:53.718802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.861 [2024-07-14 10:44:53.718832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.861 qpair failed and we were unable to recover it. 00:36:08.861 [2024-07-14 10:44:53.719036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.861 [2024-07-14 10:44:53.719066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.861 qpair failed and we were unable to recover it. 
00:36:08.861 [2024-07-14 10:44:53.719205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.861 [2024-07-14 10:44:53.719247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.861 qpair failed and we were unable to recover it. 00:36:08.861 [2024-07-14 10:44:53.719432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.861 [2024-07-14 10:44:53.719462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.861 qpair failed and we were unable to recover it. 00:36:08.861 [2024-07-14 10:44:53.719571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.861 [2024-07-14 10:44:53.719601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.861 qpair failed and we were unable to recover it. 00:36:08.861 [2024-07-14 10:44:53.719796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.861 [2024-07-14 10:44:53.719826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.861 qpair failed and we were unable to recover it. 00:36:08.861 [2024-07-14 10:44:53.719943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.861 [2024-07-14 10:44:53.719973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.861 qpair failed and we were unable to recover it. 00:36:08.861 [2024-07-14 10:44:53.720159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.861 [2024-07-14 10:44:53.720189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.861 qpair failed and we were unable to recover it. 00:36:08.861 [2024-07-14 10:44:53.720382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.861 [2024-07-14 10:44:53.720413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.861 qpair failed and we were unable to recover it. 00:36:08.861 [2024-07-14 10:44:53.720596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.861 [2024-07-14 10:44:53.720627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.861 qpair failed and we were unable to recover it. 00:36:08.861 [2024-07-14 10:44:53.720732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.861 [2024-07-14 10:44:53.720762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.861 qpair failed and we were unable to recover it. 00:36:08.861 [2024-07-14 10:44:53.720904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.861 [2024-07-14 10:44:53.720934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.861 qpair failed and we were unable to recover it. 
00:36:08.861 [2024-07-14 10:44:53.721121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.861 [2024-07-14 10:44:53.721151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.861 qpair failed and we were unable to recover it. 00:36:08.861 [2024-07-14 10:44:53.721340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.861 [2024-07-14 10:44:53.721372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.861 qpair failed and we were unable to recover it. 00:36:08.861 [2024-07-14 10:44:53.721480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.861 [2024-07-14 10:44:53.721516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.861 qpair failed and we were unable to recover it. 00:36:08.861 [2024-07-14 10:44:53.721628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.861 [2024-07-14 10:44:53.721658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.861 qpair failed and we were unable to recover it. 00:36:08.861 [2024-07-14 10:44:53.721764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.861 [2024-07-14 10:44:53.721794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.861 qpair failed and we were unable to recover it. 00:36:08.861 [2024-07-14 10:44:53.721992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.861 [2024-07-14 10:44:53.722022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.861 qpair failed and we were unable to recover it. 00:36:08.861 [2024-07-14 10:44:53.722136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.861 [2024-07-14 10:44:53.722166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.861 qpair failed and we were unable to recover it. 00:36:08.861 [2024-07-14 10:44:53.722314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.861 [2024-07-14 10:44:53.722345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.861 qpair failed and we were unable to recover it. 00:36:08.861 [2024-07-14 10:44:53.722469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.861 [2024-07-14 10:44:53.722499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.861 qpair failed and we were unable to recover it. 00:36:08.861 [2024-07-14 10:44:53.722680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.861 [2024-07-14 10:44:53.722710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.861 qpair failed and we were unable to recover it. 
00:36:08.861 [2024-07-14 10:44:53.722825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.861 [2024-07-14 10:44:53.722855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.861 qpair failed and we were unable to recover it. 00:36:08.861 [2024-07-14 10:44:53.722975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.861 [2024-07-14 10:44:53.723006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.861 qpair failed and we were unable to recover it. 00:36:08.861 [2024-07-14 10:44:53.723146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.861 [2024-07-14 10:44:53.723176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.861 qpair failed and we were unable to recover it. 00:36:08.861 [2024-07-14 10:44:53.723294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.861 [2024-07-14 10:44:53.723324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.861 qpair failed and we were unable to recover it. 00:36:08.861 [2024-07-14 10:44:53.723441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.861 [2024-07-14 10:44:53.723471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.861 qpair failed and we were unable to recover it. 00:36:08.861 [2024-07-14 10:44:53.723650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.861 [2024-07-14 10:44:53.723681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.861 qpair failed and we were unable to recover it. 00:36:08.861 [2024-07-14 10:44:53.723869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.861 [2024-07-14 10:44:53.723899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.861 qpair failed and we were unable to recover it. 00:36:08.861 [2024-07-14 10:44:53.724083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.861 [2024-07-14 10:44:53.724112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.861 qpair failed and we were unable to recover it. 00:36:08.861 [2024-07-14 10:44:53.724367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.861 [2024-07-14 10:44:53.724398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.861 qpair failed and we were unable to recover it. 00:36:08.861 [2024-07-14 10:44:53.724591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.861 [2024-07-14 10:44:53.724621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.861 qpair failed and we were unable to recover it. 
00:36:08.861 [2024-07-14 10:44:53.724798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.861 [2024-07-14 10:44:53.724827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.861 qpair failed and we were unable to recover it. 00:36:08.861 [2024-07-14 10:44:53.724953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.861 [2024-07-14 10:44:53.724984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.861 qpair failed and we were unable to recover it. 00:36:08.861 [2024-07-14 10:44:53.725167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.861 [2024-07-14 10:44:53.725197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.861 qpair failed and we were unable to recover it. 00:36:08.861 [2024-07-14 10:44:53.725385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.861 [2024-07-14 10:44:53.725419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.861 qpair failed and we were unable to recover it. 00:36:08.861 [2024-07-14 10:44:53.725552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.862 [2024-07-14 10:44:53.725584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.862 qpair failed and we were unable to recover it. 00:36:08.862 [2024-07-14 10:44:53.725692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.862 [2024-07-14 10:44:53.725722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.862 qpair failed and we were unable to recover it. 00:36:08.862 [2024-07-14 10:44:53.725849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.862 [2024-07-14 10:44:53.725879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.862 qpair failed and we were unable to recover it. 00:36:08.862 [2024-07-14 10:44:53.726006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.862 [2024-07-14 10:44:53.726036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.862 qpair failed and we were unable to recover it. 00:36:08.862 [2024-07-14 10:44:53.726240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.862 [2024-07-14 10:44:53.726271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.862 qpair failed and we were unable to recover it. 00:36:08.862 [2024-07-14 10:44:53.726404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.862 [2024-07-14 10:44:53.726439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.862 qpair failed and we were unable to recover it. 
00:36:08.862 [2024-07-14 10:44:53.726563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.862 [2024-07-14 10:44:53.726593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.862 qpair failed and we were unable to recover it. 00:36:08.862 [2024-07-14 10:44:53.726715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.862 [2024-07-14 10:44:53.726744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.862 qpair failed and we were unable to recover it. 00:36:08.862 [2024-07-14 10:44:53.726942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.862 [2024-07-14 10:44:53.726971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.862 qpair failed and we were unable to recover it. 00:36:08.862 [2024-07-14 10:44:53.727099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.862 [2024-07-14 10:44:53.727129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.862 qpair failed and we were unable to recover it. 00:36:08.862 [2024-07-14 10:44:53.727266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.862 [2024-07-14 10:44:53.727298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.862 qpair failed and we were unable to recover it. 00:36:08.862 [2024-07-14 10:44:53.727441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.862 [2024-07-14 10:44:53.727472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.862 qpair failed and we were unable to recover it. 00:36:08.862 [2024-07-14 10:44:53.727589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.862 [2024-07-14 10:44:53.727618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.862 qpair failed and we were unable to recover it. 00:36:08.862 [2024-07-14 10:44:53.727799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.862 [2024-07-14 10:44:53.727829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.862 qpair failed and we were unable to recover it. 00:36:08.862 [2024-07-14 10:44:53.728012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.862 [2024-07-14 10:44:53.728042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.862 qpair failed and we were unable to recover it. 00:36:08.862 [2024-07-14 10:44:53.728166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.862 [2024-07-14 10:44:53.728195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.862 qpair failed and we were unable to recover it. 
00:36:08.862 [2024-07-14 10:44:53.728398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.862 [2024-07-14 10:44:53.728430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.862 qpair failed and we were unable to recover it. 00:36:08.862 [2024-07-14 10:44:53.728542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.862 [2024-07-14 10:44:53.728572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.862 qpair failed and we were unable to recover it. 00:36:08.862 [2024-07-14 10:44:53.728686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.862 [2024-07-14 10:44:53.728715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.862 qpair failed and we were unable to recover it. 00:36:08.862 [2024-07-14 10:44:53.728869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.862 [2024-07-14 10:44:53.728899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.862 qpair failed and we were unable to recover it. 00:36:08.862 [2024-07-14 10:44:53.729088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.862 [2024-07-14 10:44:53.729118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.862 qpair failed and we were unable to recover it. 00:36:08.862 [2024-07-14 10:44:53.729320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.862 [2024-07-14 10:44:53.729351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.862 qpair failed and we were unable to recover it. 00:36:08.862 [2024-07-14 10:44:53.729463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.862 [2024-07-14 10:44:53.729491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.862 qpair failed and we were unable to recover it. 00:36:08.862 [2024-07-14 10:44:53.729620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.862 [2024-07-14 10:44:53.729650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.862 qpair failed and we were unable to recover it. 00:36:08.862 [2024-07-14 10:44:53.729774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.862 [2024-07-14 10:44:53.729804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.862 qpair failed and we were unable to recover it. 00:36:08.862 [2024-07-14 10:44:53.729922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.862 [2024-07-14 10:44:53.729952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.862 qpair failed and we were unable to recover it. 
00:36:08.862 [2024-07-14 10:44:53.730191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.862 [2024-07-14 10:44:53.730221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.862 qpair failed and we were unable to recover it. 00:36:08.862 [2024-07-14 10:44:53.730364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.862 [2024-07-14 10:44:53.730395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.862 qpair failed and we were unable to recover it. 00:36:08.862 [2024-07-14 10:44:53.730579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.862 [2024-07-14 10:44:53.730609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.862 qpair failed and we were unable to recover it. 00:36:08.862 [2024-07-14 10:44:53.730796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.862 [2024-07-14 10:44:53.730826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.862 qpair failed and we were unable to recover it. 00:36:08.862 [2024-07-14 10:44:53.731006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.862 [2024-07-14 10:44:53.731035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.862 qpair failed and we were unable to recover it. 00:36:08.862 [2024-07-14 10:44:53.731245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.862 [2024-07-14 10:44:53.731275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.862 qpair failed and we were unable to recover it. 00:36:08.862 [2024-07-14 10:44:53.731471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.862 [2024-07-14 10:44:53.731501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.862 qpair failed and we were unable to recover it. 00:36:08.862 [2024-07-14 10:44:53.731628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.862 [2024-07-14 10:44:53.731658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.862 qpair failed and we were unable to recover it. 00:36:08.862 [2024-07-14 10:44:53.731837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.862 [2024-07-14 10:44:53.731866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.862 qpair failed and we were unable to recover it. 00:36:08.862 [2024-07-14 10:44:53.731983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.862 [2024-07-14 10:44:53.732013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.862 qpair failed and we were unable to recover it. 
00:36:08.862 [2024-07-14 10:44:53.732137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.862 [2024-07-14 10:44:53.732167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.862 qpair failed and we were unable to recover it. 00:36:08.862 [2024-07-14 10:44:53.732384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.862 [2024-07-14 10:44:53.732415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.862 qpair failed and we were unable to recover it. 00:36:08.862 [2024-07-14 10:44:53.732519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.862 [2024-07-14 10:44:53.732548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.862 qpair failed and we were unable to recover it. 00:36:08.862 [2024-07-14 10:44:53.732662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.862 [2024-07-14 10:44:53.732691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.862 qpair failed and we were unable to recover it. 00:36:08.862 [2024-07-14 10:44:53.732813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.862 [2024-07-14 10:44:53.732843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.862 qpair failed and we were unable to recover it. 00:36:08.862 [2024-07-14 10:44:53.732956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.863 [2024-07-14 10:44:53.732986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.863 qpair failed and we were unable to recover it. 00:36:08.863 [2024-07-14 10:44:53.733167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.863 [2024-07-14 10:44:53.733196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.863 qpair failed and we were unable to recover it. 00:36:08.863 [2024-07-14 10:44:53.733411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.863 [2024-07-14 10:44:53.733449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:08.863 qpair failed and we were unable to recover it. 00:36:08.863 [2024-07-14 10:44:53.733648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.863 [2024-07-14 10:44:53.733681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.863 qpair failed and we were unable to recover it. 00:36:08.863 [2024-07-14 10:44:53.733832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.863 [2024-07-14 10:44:53.733868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.863 qpair failed and we were unable to recover it. 
00:36:08.863 [2024-07-14 10:44:53.734018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.863 [2024-07-14 10:44:53.734048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.863 qpair failed and we were unable to recover it. 00:36:08.863 [2024-07-14 10:44:53.734185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.863 [2024-07-14 10:44:53.734214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.863 qpair failed and we were unable to recover it. 00:36:08.863 [2024-07-14 10:44:53.734340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.863 [2024-07-14 10:44:53.734370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.863 qpair failed and we were unable to recover it. 00:36:08.863 [2024-07-14 10:44:53.734623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.863 [2024-07-14 10:44:53.734653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.863 qpair failed and we were unable to recover it. 00:36:08.863 [2024-07-14 10:44:53.734769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.863 [2024-07-14 10:44:53.734799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.863 qpair failed and we were unable to recover it. 00:36:08.863 [2024-07-14 10:44:53.734926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.863 [2024-07-14 10:44:53.734956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.863 qpair failed and we were unable to recover it. 00:36:08.863 [2024-07-14 10:44:53.735078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.863 [2024-07-14 10:44:53.735109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.863 qpair failed and we were unable to recover it. 00:36:08.863 [2024-07-14 10:44:53.735287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.863 [2024-07-14 10:44:53.735318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.863 qpair failed and we were unable to recover it. 00:36:08.863 [2024-07-14 10:44:53.735445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.863 [2024-07-14 10:44:53.735475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.863 qpair failed and we were unable to recover it. 00:36:08.863 [2024-07-14 10:44:53.735605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.863 [2024-07-14 10:44:53.735636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.863 qpair failed and we were unable to recover it. 
00:36:08.863 [2024-07-14 10:44:53.735769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.863 [2024-07-14 10:44:53.735800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.863 qpair failed and we were unable to recover it. 00:36:08.863 [2024-07-14 10:44:53.735919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.863 [2024-07-14 10:44:53.735949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.863 qpair failed and we were unable to recover it. 00:36:08.863 [2024-07-14 10:44:53.736070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.863 [2024-07-14 10:44:53.736100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.863 qpair failed and we were unable to recover it. 00:36:08.863 [2024-07-14 10:44:53.736293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.863 [2024-07-14 10:44:53.736326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.863 qpair failed and we were unable to recover it. 00:36:08.863 [2024-07-14 10:44:53.736511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.863 [2024-07-14 10:44:53.736541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.863 qpair failed and we were unable to recover it. 00:36:08.863 [2024-07-14 10:44:53.736718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.863 [2024-07-14 10:44:53.736748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.863 qpair failed and we were unable to recover it. 00:36:08.863 [2024-07-14 10:44:53.736938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.863 [2024-07-14 10:44:53.736968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.863 qpair failed and we were unable to recover it. 00:36:08.863 [2024-07-14 10:44:53.737095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.863 [2024-07-14 10:44:53.737125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.863 qpair failed and we were unable to recover it. 00:36:08.863 [2024-07-14 10:44:53.737302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.863 [2024-07-14 10:44:53.737333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.863 qpair failed and we were unable to recover it. 00:36:08.863 [2024-07-14 10:44:53.737468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.863 [2024-07-14 10:44:53.737498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.863 qpair failed and we were unable to recover it. 
00:36:08.863 [2024-07-14 10:44:53.737633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:08.863 [2024-07-14 10:44:53.737668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420
00:36:08.863 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error -> qpair failed and we were unable to recover it) repeats roughly 200 times between 10:44:53.737 and 10:44:53.777 (elapsed-time stamps 00:36:08.863 through 00:36:08.868), cycling through tqpair handles 0x1b1fb60, 0x7fbe84000b90, and 0x7fbe7c000b90, always against addr=10.0.0.2, port=4420; the duplicate entries are elided here ...]
00:36:08.868 [2024-07-14 10:44:53.777791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.868 [2024-07-14 10:44:53.777821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.868 qpair failed and we were unable to recover it. 00:36:08.868 [2024-07-14 10:44:53.777950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.868 [2024-07-14 10:44:53.777980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.868 qpair failed and we were unable to recover it. 00:36:08.869 [2024-07-14 10:44:53.778157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.869 [2024-07-14 10:44:53.778188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.869 qpair failed and we were unable to recover it. 00:36:08.869 [2024-07-14 10:44:53.778329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.869 [2024-07-14 10:44:53.778361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.869 qpair failed and we were unable to recover it. 00:36:08.869 [2024-07-14 10:44:53.778494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.869 [2024-07-14 10:44:53.778526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.869 qpair failed and we were unable to recover it. 00:36:08.869 [2024-07-14 10:44:53.778649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.869 [2024-07-14 10:44:53.778679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.869 qpair failed and we were unable to recover it. 00:36:08.869 [2024-07-14 10:44:53.778810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.869 [2024-07-14 10:44:53.778840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.869 qpair failed and we were unable to recover it. 00:36:08.869 [2024-07-14 10:44:53.779025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.869 [2024-07-14 10:44:53.779055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.869 qpair failed and we were unable to recover it. 00:36:08.869 [2024-07-14 10:44:53.779252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.869 [2024-07-14 10:44:53.779284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.869 qpair failed and we were unable to recover it. 00:36:08.869 [2024-07-14 10:44:53.779420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.869 [2024-07-14 10:44:53.779451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.869 qpair failed and we were unable to recover it. 
00:36:08.869 [2024-07-14 10:44:53.779631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.869 [2024-07-14 10:44:53.779662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.869 qpair failed and we were unable to recover it. 00:36:08.869 [2024-07-14 10:44:53.779781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.869 [2024-07-14 10:44:53.779811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:08.869 qpair failed and we were unable to recover it. 00:36:08.869 [2024-07-14 10:44:53.779975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.869 [2024-07-14 10:44:53.780027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.869 qpair failed and we were unable to recover it. 00:36:08.869 [2024-07-14 10:44:53.780148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.869 [2024-07-14 10:44:53.780180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.869 qpair failed and we were unable to recover it. 00:36:08.869 [2024-07-14 10:44:53.780393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.869 [2024-07-14 10:44:53.780425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.869 qpair failed and we were unable to recover it. 00:36:08.869 [2024-07-14 10:44:53.780607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.869 [2024-07-14 10:44:53.780637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.869 qpair failed and we were unable to recover it. 00:36:08.869 [2024-07-14 10:44:53.780855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.869 [2024-07-14 10:44:53.780884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.869 qpair failed and we were unable to recover it. 00:36:08.869 [2024-07-14 10:44:53.781097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.869 [2024-07-14 10:44:53.781127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.869 qpair failed and we were unable to recover it. 00:36:08.869 [2024-07-14 10:44:53.781248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.869 [2024-07-14 10:44:53.781281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.869 qpair failed and we were unable to recover it. 00:36:08.869 [2024-07-14 10:44:53.781543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.869 [2024-07-14 10:44:53.781573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.869 qpair failed and we were unable to recover it. 
00:36:08.869 [2024-07-14 10:44:53.781768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.869 [2024-07-14 10:44:53.781799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.869 qpair failed and we were unable to recover it. 00:36:08.869 [2024-07-14 10:44:53.781918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.869 [2024-07-14 10:44:53.781947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.869 qpair failed and we were unable to recover it. 00:36:08.869 [2024-07-14 10:44:53.782127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.869 [2024-07-14 10:44:53.782157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.869 qpair failed and we were unable to recover it. 00:36:08.869 [2024-07-14 10:44:53.782348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.869 [2024-07-14 10:44:53.782379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.869 qpair failed and we were unable to recover it. 00:36:08.869 [2024-07-14 10:44:53.782515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.869 [2024-07-14 10:44:53.782545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.869 qpair failed and we were unable to recover it. 00:36:08.869 [2024-07-14 10:44:53.782673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.869 [2024-07-14 10:44:53.782712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.869 qpair failed and we were unable to recover it. 00:36:08.869 [2024-07-14 10:44:53.782850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.869 [2024-07-14 10:44:53.782879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.869 qpair failed and we were unable to recover it. 00:36:08.869 [2024-07-14 10:44:53.783007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.869 [2024-07-14 10:44:53.783037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.869 qpair failed and we were unable to recover it. 00:36:08.869 [2024-07-14 10:44:53.783158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.869 [2024-07-14 10:44:53.783188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.869 qpair failed and we were unable to recover it. 00:36:08.869 [2024-07-14 10:44:53.783472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.869 [2024-07-14 10:44:53.783503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.869 qpair failed and we were unable to recover it. 
00:36:08.869 [2024-07-14 10:44:53.783712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.869 [2024-07-14 10:44:53.783743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.869 qpair failed and we were unable to recover it. 00:36:08.869 [2024-07-14 10:44:53.783865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.869 [2024-07-14 10:44:53.783894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.869 qpair failed and we were unable to recover it. 00:36:08.869 [2024-07-14 10:44:53.784021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.869 [2024-07-14 10:44:53.784051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.869 qpair failed and we were unable to recover it. 00:36:08.869 [2024-07-14 10:44:53.784273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.869 [2024-07-14 10:44:53.784305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.869 qpair failed and we were unable to recover it. 00:36:08.869 [2024-07-14 10:44:53.784429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.869 [2024-07-14 10:44:53.784459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.869 qpair failed and we were unable to recover it. 00:36:08.869 [2024-07-14 10:44:53.784666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.869 [2024-07-14 10:44:53.784695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.869 qpair failed and we were unable to recover it. 00:36:08.869 [2024-07-14 10:44:53.784889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.869 [2024-07-14 10:44:53.784919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.869 qpair failed and we were unable to recover it. 00:36:08.869 [2024-07-14 10:44:53.785123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.869 [2024-07-14 10:44:53.785153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.869 qpair failed and we were unable to recover it. 00:36:08.869 [2024-07-14 10:44:53.785331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.869 [2024-07-14 10:44:53.785361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.869 qpair failed and we were unable to recover it. 00:36:08.869 [2024-07-14 10:44:53.785481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.869 [2024-07-14 10:44:53.785512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.869 qpair failed and we were unable to recover it. 
00:36:08.869 [2024-07-14 10:44:53.785698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.869 [2024-07-14 10:44:53.785727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.869 qpair failed and we were unable to recover it. 00:36:08.869 [2024-07-14 10:44:53.785927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.869 [2024-07-14 10:44:53.785956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.869 qpair failed and we were unable to recover it. 00:36:08.869 [2024-07-14 10:44:53.786132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.869 [2024-07-14 10:44:53.786162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.870 qpair failed and we were unable to recover it. 00:36:08.870 [2024-07-14 10:44:53.786307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.870 [2024-07-14 10:44:53.786338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.870 qpair failed and we were unable to recover it. 00:36:08.870 [2024-07-14 10:44:53.786618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.870 [2024-07-14 10:44:53.786648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.870 qpair failed and we were unable to recover it. 00:36:08.870 [2024-07-14 10:44:53.786889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.870 [2024-07-14 10:44:53.786919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.870 qpair failed and we were unable to recover it. 00:36:08.870 [2024-07-14 10:44:53.787024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.870 [2024-07-14 10:44:53.787054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.870 qpair failed and we were unable to recover it. 00:36:08.870 [2024-07-14 10:44:53.787196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.870 [2024-07-14 10:44:53.787235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.870 qpair failed and we were unable to recover it. 00:36:08.870 [2024-07-14 10:44:53.787427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.870 [2024-07-14 10:44:53.787456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.870 qpair failed and we were unable to recover it. 00:36:08.870 [2024-07-14 10:44:53.787580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.870 [2024-07-14 10:44:53.787611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.870 qpair failed and we were unable to recover it. 
00:36:08.870 [2024-07-14 10:44:53.787807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.870 [2024-07-14 10:44:53.787837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.870 qpair failed and we were unable to recover it. 00:36:08.870 [2024-07-14 10:44:53.787960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.870 [2024-07-14 10:44:53.787989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.870 qpair failed and we were unable to recover it. 00:36:08.870 [2024-07-14 10:44:53.788206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.870 [2024-07-14 10:44:53.788256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.870 qpair failed and we were unable to recover it. 00:36:08.870 [2024-07-14 10:44:53.788387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.870 [2024-07-14 10:44:53.788419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.870 qpair failed and we were unable to recover it. 00:36:08.870 [2024-07-14 10:44:53.788596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.870 [2024-07-14 10:44:53.788627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.870 qpair failed and we were unable to recover it. 00:36:08.870 [2024-07-14 10:44:53.788763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.870 [2024-07-14 10:44:53.788793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.870 qpair failed and we were unable to recover it. 00:36:08.870 [2024-07-14 10:44:53.789004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.870 [2024-07-14 10:44:53.789035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.870 qpair failed and we were unable to recover it. 00:36:08.870 [2024-07-14 10:44:53.789168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.870 [2024-07-14 10:44:53.789198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:08.870 qpair failed and we were unable to recover it. 00:36:08.870 [2024-07-14 10:44:53.789323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.870 [2024-07-14 10:44:53.789356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.870 qpair failed and we were unable to recover it. 00:36:08.870 [2024-07-14 10:44:53.789550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.870 [2024-07-14 10:44:53.789580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.870 qpair failed and we were unable to recover it. 
00:36:08.870 [2024-07-14 10:44:53.789838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.870 [2024-07-14 10:44:53.789868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.870 qpair failed and we were unable to recover it. 00:36:08.870 [2024-07-14 10:44:53.789991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.870 [2024-07-14 10:44:53.790020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.870 qpair failed and we were unable to recover it. 00:36:08.870 [2024-07-14 10:44:53.790150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.870 [2024-07-14 10:44:53.790180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.870 qpair failed and we were unable to recover it. 00:36:08.870 [2024-07-14 10:44:53.790292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.870 [2024-07-14 10:44:53.790325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.870 qpair failed and we were unable to recover it. 00:36:08.870 [2024-07-14 10:44:53.790509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.870 [2024-07-14 10:44:53.790539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.870 qpair failed and we were unable to recover it. 00:36:08.870 [2024-07-14 10:44:53.790787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.870 [2024-07-14 10:44:53.790821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.870 qpair failed and we were unable to recover it. 00:36:08.870 [2024-07-14 10:44:53.791001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.870 [2024-07-14 10:44:53.791031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.870 qpair failed and we were unable to recover it. 00:36:08.870 [2024-07-14 10:44:53.791151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.870 [2024-07-14 10:44:53.791181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.870 qpair failed and we were unable to recover it. 00:36:08.870 [2024-07-14 10:44:53.791372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.870 [2024-07-14 10:44:53.791404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.870 qpair failed and we were unable to recover it. 00:36:08.870 [2024-07-14 10:44:53.791542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.870 [2024-07-14 10:44:53.791571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.870 qpair failed and we were unable to recover it. 
00:36:08.870 [2024-07-14 10:44:53.791697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.870 [2024-07-14 10:44:53.791726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.870 qpair failed and we were unable to recover it. 00:36:08.870 [2024-07-14 10:44:53.791867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.870 [2024-07-14 10:44:53.791898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.870 qpair failed and we were unable to recover it. 00:36:08.870 [2024-07-14 10:44:53.792021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.870 [2024-07-14 10:44:53.792050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.870 qpair failed and we were unable to recover it. 00:36:08.870 [2024-07-14 10:44:53.792162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.870 [2024-07-14 10:44:53.792192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.870 qpair failed and we were unable to recover it. 00:36:08.870 [2024-07-14 10:44:53.792406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.870 [2024-07-14 10:44:53.792438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.870 qpair failed and we were unable to recover it. 00:36:08.870 [2024-07-14 10:44:53.792702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.870 [2024-07-14 10:44:53.792732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.870 qpair failed and we were unable to recover it. 00:36:08.870 [2024-07-14 10:44:53.792858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.870 [2024-07-14 10:44:53.792886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.870 qpair failed and we were unable to recover it. 00:36:08.870 [2024-07-14 10:44:53.793014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.870 [2024-07-14 10:44:53.793047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.870 qpair failed and we were unable to recover it. 00:36:08.870 [2024-07-14 10:44:53.793180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.870 [2024-07-14 10:44:53.793209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:08.870 qpair failed and we were unable to recover it. 00:36:08.870 [2024-07-14 10:44:53.793461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.140 [2024-07-14 10:44:53.793492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:09.140 qpair failed and we were unable to recover it. 
00:36:09.140 [2024-07-14 10:44:53.793710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.140 [2024-07-14 10:44:53.793739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:09.140 qpair failed and we were unable to recover it. 00:36:09.140 [2024-07-14 10:44:53.793881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.140 [2024-07-14 10:44:53.793911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:09.141 qpair failed and we were unable to recover it. 00:36:09.141 [2024-07-14 10:44:53.794026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.141 [2024-07-14 10:44:53.794057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:09.141 qpair failed and we were unable to recover it. 00:36:09.141 [2024-07-14 10:44:53.794175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.141 [2024-07-14 10:44:53.794204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:09.141 qpair failed and we were unable to recover it. 00:36:09.141 [2024-07-14 10:44:53.794410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.141 [2024-07-14 10:44:53.794441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:09.141 qpair failed and we were unable to recover it. 00:36:09.141 [2024-07-14 10:44:53.794628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.141 [2024-07-14 10:44:53.794658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:09.141 qpair failed and we were unable to recover it. 00:36:09.141 [2024-07-14 10:44:53.794772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.141 [2024-07-14 10:44:53.794802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:09.141 qpair failed and we were unable to recover it. 00:36:09.141 [2024-07-14 10:44:53.794919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.141 [2024-07-14 10:44:53.794950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:09.141 qpair failed and we were unable to recover it. 00:36:09.141 [2024-07-14 10:44:53.795145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.141 [2024-07-14 10:44:53.795175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:09.141 qpair failed and we were unable to recover it. 00:36:09.141 [2024-07-14 10:44:53.795313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.141 [2024-07-14 10:44:53.795345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:09.141 qpair failed and we were unable to recover it. 
00:36:09.141 [2024-07-14 10:44:53.795464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.141 [2024-07-14 10:44:53.795492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:09.141 qpair failed and we were unable to recover it. 00:36:09.141 [2024-07-14 10:44:53.795607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.141 [2024-07-14 10:44:53.795637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:09.141 qpair failed and we were unable to recover it. 00:36:09.141 [2024-07-14 10:44:53.795770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.141 [2024-07-14 10:44:53.795800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:09.141 qpair failed and we were unable to recover it. 00:36:09.141 [2024-07-14 10:44:53.796060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.141 [2024-07-14 10:44:53.796089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:09.141 qpair failed and we were unable to recover it. 00:36:09.141 [2024-07-14 10:44:53.796295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.141 [2024-07-14 10:44:53.796326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:09.141 qpair failed and we were unable to recover it. 00:36:09.141 [2024-07-14 10:44:53.796461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.141 [2024-07-14 10:44:53.796490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:09.141 qpair failed and we were unable to recover it. 00:36:09.141 [2024-07-14 10:44:53.796616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.141 [2024-07-14 10:44:53.796646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:09.141 qpair failed and we were unable to recover it. 00:36:09.141 [2024-07-14 10:44:53.796755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.141 [2024-07-14 10:44:53.796784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:09.141 qpair failed and we were unable to recover it. 00:36:09.141 [2024-07-14 10:44:53.796900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.141 [2024-07-14 10:44:53.796930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:09.141 qpair failed and we were unable to recover it. 00:36:09.141 [2024-07-14 10:44:53.797116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.141 [2024-07-14 10:44:53.797147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:09.141 qpair failed and we were unable to recover it. 
00:36:09.141 [2024-07-14 10:44:53.797269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.141 [2024-07-14 10:44:53.797299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:09.141 qpair failed and we were unable to recover it. 00:36:09.141 [2024-07-14 10:44:53.797484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.141 [2024-07-14 10:44:53.797514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:09.141 qpair failed and we were unable to recover it. 00:36:09.141 [2024-07-14 10:44:53.797693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.141 [2024-07-14 10:44:53.797724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:09.141 qpair failed and we were unable to recover it. 00:36:09.141 [2024-07-14 10:44:53.797917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.141 [2024-07-14 10:44:53.797946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:09.141 qpair failed and we were unable to recover it. 00:36:09.141 [2024-07-14 10:44:53.798139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.141 [2024-07-14 10:44:53.798169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:09.141 qpair failed and we were unable to recover it. 00:36:09.141 [2024-07-14 10:44:53.798388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.141 [2024-07-14 10:44:53.798427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:09.141 qpair failed and we were unable to recover it. 00:36:09.141 [2024-07-14 10:44:53.798616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.141 [2024-07-14 10:44:53.798646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:09.141 qpair failed and we were unable to recover it. 00:36:09.141 [2024-07-14 10:44:53.798760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.141 [2024-07-14 10:44:53.798790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:09.141 qpair failed and we were unable to recover it. 00:36:09.141 [2024-07-14 10:44:53.799080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.141 [2024-07-14 10:44:53.799111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:09.141 qpair failed and we were unable to recover it. 00:36:09.141 [2024-07-14 10:44:53.799283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.141 [2024-07-14 10:44:53.799314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:09.141 qpair failed and we were unable to recover it. 
00:36:09.141 [2024-07-14 10:44:53.799520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.141 [2024-07-14 10:44:53.799551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:09.141 qpair failed and we were unable to recover it. 00:36:09.141 [2024-07-14 10:44:53.799676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.141 [2024-07-14 10:44:53.799706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:09.141 qpair failed and we were unable to recover it. 00:36:09.141 [2024-07-14 10:44:53.799827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.141 [2024-07-14 10:44:53.799857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:09.141 qpair failed and we were unable to recover it. 00:36:09.141 [2024-07-14 10:44:53.799975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.141 [2024-07-14 10:44:53.800014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:09.141 qpair failed and we were unable to recover it. 00:36:09.141 [2024-07-14 10:44:53.800141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.141 [2024-07-14 10:44:53.800172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:09.141 qpair failed and we were unable to recover it. 00:36:09.141 [2024-07-14 10:44:53.800328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.141 [2024-07-14 10:44:53.800370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:09.141 qpair failed and we were unable to recover it. 00:36:09.141 [2024-07-14 10:44:53.800559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.141 [2024-07-14 10:44:53.800589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:09.141 qpair failed and we were unable to recover it. 00:36:09.141 [2024-07-14 10:44:53.800839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.141 [2024-07-14 10:44:53.800870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:09.142 qpair failed and we were unable to recover it. 00:36:09.142 [2024-07-14 10:44:53.801064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.142 [2024-07-14 10:44:53.801093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:09.142 qpair failed and we were unable to recover it. 00:36:09.142 [2024-07-14 10:44:53.801292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.142 [2024-07-14 10:44:53.801324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:09.142 qpair failed and we were unable to recover it. 
00:36:09.142 [2024-07-14 10:44:53.801574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.142 [2024-07-14 10:44:53.801604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:09.142 qpair failed and we were unable to recover it. 00:36:09.142 [2024-07-14 10:44:53.801739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.142 [2024-07-14 10:44:53.801769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:09.142 qpair failed and we were unable to recover it. 00:36:09.142 [2024-07-14 10:44:53.801947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.142 [2024-07-14 10:44:53.801978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:09.142 qpair failed and we were unable to recover it. 00:36:09.142 [2024-07-14 10:44:53.802188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.142 [2024-07-14 10:44:53.802217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:09.142 qpair failed and we were unable to recover it. 00:36:09.142 [2024-07-14 10:44:53.802408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.142 [2024-07-14 10:44:53.802438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:09.142 qpair failed and we were unable to recover it. 00:36:09.142 [2024-07-14 10:44:53.802578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.142 [2024-07-14 10:44:53.802606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:09.142 qpair failed and we were unable to recover it. 00:36:09.142 [2024-07-14 10:44:53.802718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.142 [2024-07-14 10:44:53.802749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:09.142 qpair failed and we were unable to recover it. 00:36:09.142 [2024-07-14 10:44:53.802875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.142 [2024-07-14 10:44:53.802905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:09.142 qpair failed and we were unable to recover it. 00:36:09.142 [2024-07-14 10:44:53.803096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.142 [2024-07-14 10:44:53.803125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:09.142 qpair failed and we were unable to recover it. 00:36:09.142 [2024-07-14 10:44:53.803309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.142 [2024-07-14 10:44:53.803341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:09.142 qpair failed and we were unable to recover it. 
00:36:09.142 [2024-07-14 10:44:53.803467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.142 [2024-07-14 10:44:53.803497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:09.142 qpair failed and we were unable to recover it. 00:36:09.142 [2024-07-14 10:44:53.803676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.142 [2024-07-14 10:44:53.803706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe74000b90 with addr=10.0.0.2, port=4420 00:36:09.142 qpair failed and we were unable to recover it. 00:36:09.142 [2024-07-14 10:44:53.803833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.142 [2024-07-14 10:44:53.803873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.142 qpair failed and we were unable to recover it. 00:36:09.142 [2024-07-14 10:44:53.804058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.142 [2024-07-14 10:44:53.804089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.142 qpair failed and we were unable to recover it. 00:36:09.142 [2024-07-14 10:44:53.804353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.142 [2024-07-14 10:44:53.804388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.142 qpair failed and we were unable to recover it. 00:36:09.142 [2024-07-14 10:44:53.804507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.142 [2024-07-14 10:44:53.804538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.142 qpair failed and we were unable to recover it. 00:36:09.142 [2024-07-14 10:44:53.804721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.142 [2024-07-14 10:44:53.804752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.142 qpair failed and we were unable to recover it. 00:36:09.142 [2024-07-14 10:44:53.804938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.142 [2024-07-14 10:44:53.804968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.142 qpair failed and we were unable to recover it. 00:36:09.142 [2024-07-14 10:44:53.805155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.142 [2024-07-14 10:44:53.805186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.142 qpair failed and we were unable to recover it. 00:36:09.142 [2024-07-14 10:44:53.805462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.142 [2024-07-14 10:44:53.805495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.142 qpair failed and we were unable to recover it. 
00:36:09.142 [2024-07-14 10:44:53.805634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.142 [2024-07-14 10:44:53.805665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.142 qpair failed and we were unable to recover it. 00:36:09.142 [2024-07-14 10:44:53.805791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.142 [2024-07-14 10:44:53.805821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.142 qpair failed and we were unable to recover it. 00:36:09.142 [2024-07-14 10:44:53.805943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.142 [2024-07-14 10:44:53.805973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.142 qpair failed and we were unable to recover it. 00:36:09.142 [2024-07-14 10:44:53.806217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.142 [2024-07-14 10:44:53.806258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.142 qpair failed and we were unable to recover it. 00:36:09.142 [2024-07-14 10:44:53.806374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.142 [2024-07-14 10:44:53.806405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.142 qpair failed and we were unable to recover it. 00:36:09.142 [2024-07-14 10:44:53.806605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.142 [2024-07-14 10:44:53.806635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.142 qpair failed and we were unable to recover it. 00:36:09.142 [2024-07-14 10:44:53.806762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.142 [2024-07-14 10:44:53.806796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.142 qpair failed and we were unable to recover it. 00:36:09.142 [2024-07-14 10:44:53.807051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.142 [2024-07-14 10:44:53.807081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.142 qpair failed and we were unable to recover it. 00:36:09.142 [2024-07-14 10:44:53.807206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.142 [2024-07-14 10:44:53.807248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.142 qpair failed and we were unable to recover it. 00:36:09.142 [2024-07-14 10:44:53.807369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.142 [2024-07-14 10:44:53.807399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.142 qpair failed and we were unable to recover it. 
00:36:09.142 [2024-07-14 10:44:53.807541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.142 [2024-07-14 10:44:53.807572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.142 qpair failed and we were unable to recover it. 00:36:09.142 [2024-07-14 10:44:53.807818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.142 [2024-07-14 10:44:53.807849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.142 qpair failed and we were unable to recover it. 00:36:09.142 [2024-07-14 10:44:53.808026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.142 [2024-07-14 10:44:53.808056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.142 qpair failed and we were unable to recover it. 00:36:09.142 [2024-07-14 10:44:53.808247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.142 [2024-07-14 10:44:53.808279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.142 qpair failed and we were unable to recover it. 00:36:09.142 [2024-07-14 10:44:53.808404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.142 [2024-07-14 10:44:53.808435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.142 qpair failed and we were unable to recover it. 00:36:09.142 [2024-07-14 10:44:53.808556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.142 [2024-07-14 10:44:53.808587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.142 qpair failed and we were unable to recover it. 00:36:09.142 [2024-07-14 10:44:53.808763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.142 [2024-07-14 10:44:53.808793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.142 qpair failed and we were unable to recover it. 00:36:09.142 [2024-07-14 10:44:53.808921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.143 [2024-07-14 10:44:53.808951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.143 qpair failed and we were unable to recover it. 00:36:09.143 [2024-07-14 10:44:53.809145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.143 [2024-07-14 10:44:53.809176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.143 qpair failed and we were unable to recover it. 00:36:09.143 [2024-07-14 10:44:53.809383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.143 [2024-07-14 10:44:53.809420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.143 qpair failed and we were unable to recover it. 
00:36:09.143 [2024-07-14 10:44:53.809595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.143 [2024-07-14 10:44:53.809626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.143 qpair failed and we were unable to recover it. 00:36:09.143 [2024-07-14 10:44:53.809750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.143 [2024-07-14 10:44:53.809781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.143 qpair failed and we were unable to recover it. 00:36:09.143 [2024-07-14 10:44:53.809899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.143 [2024-07-14 10:44:53.809929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.143 qpair failed and we were unable to recover it. 00:36:09.143 [2024-07-14 10:44:53.810125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.143 [2024-07-14 10:44:53.810155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.143 qpair failed and we were unable to recover it. 00:36:09.143 [2024-07-14 10:44:53.810284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.143 [2024-07-14 10:44:53.810316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.143 qpair failed and we were unable to recover it. 00:36:09.143 [2024-07-14 10:44:53.810505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.143 [2024-07-14 10:44:53.810535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.143 qpair failed and we were unable to recover it. 00:36:09.143 [2024-07-14 10:44:53.810805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.143 [2024-07-14 10:44:53.810836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.143 qpair failed and we were unable to recover it. 00:36:09.143 [2024-07-14 10:44:53.811039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.143 [2024-07-14 10:44:53.811071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.143 qpair failed and we were unable to recover it. 00:36:09.143 [2024-07-14 10:44:53.811314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.143 [2024-07-14 10:44:53.811344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.143 qpair failed and we were unable to recover it. 00:36:09.143 [2024-07-14 10:44:53.811470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.143 [2024-07-14 10:44:53.811500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.143 qpair failed and we were unable to recover it. 
00:36:09.143 [2024-07-14 10:44:53.811620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.143 [2024-07-14 10:44:53.811650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.143 qpair failed and we were unable to recover it. 00:36:09.143 [2024-07-14 10:44:53.811771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.143 [2024-07-14 10:44:53.811802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.143 qpair failed and we were unable to recover it. 00:36:09.143 [2024-07-14 10:44:53.811995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.143 [2024-07-14 10:44:53.812025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.143 qpair failed and we were unable to recover it. 00:36:09.143 [2024-07-14 10:44:53.812303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.143 [2024-07-14 10:44:53.812335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.143 qpair failed and we were unable to recover it. 00:36:09.143 [2024-07-14 10:44:53.812522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.143 [2024-07-14 10:44:53.812553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.143 qpair failed and we were unable to recover it. 00:36:09.143 [2024-07-14 10:44:53.812692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.143 [2024-07-14 10:44:53.812723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.143 qpair failed and we were unable to recover it. 00:36:09.143 [2024-07-14 10:44:53.812905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.143 [2024-07-14 10:44:53.812934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.143 qpair failed and we were unable to recover it. 00:36:09.143 [2024-07-14 10:44:53.813131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.143 [2024-07-14 10:44:53.813161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.143 qpair failed and we were unable to recover it. 00:36:09.143 [2024-07-14 10:44:53.813282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.143 [2024-07-14 10:44:53.813313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.143 qpair failed and we were unable to recover it. 00:36:09.143 [2024-07-14 10:44:53.813509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.143 [2024-07-14 10:44:53.813539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.143 qpair failed and we were unable to recover it. 
00:36:09.143 [2024-07-14 10:44:53.813693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.143 [2024-07-14 10:44:53.813723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.143 qpair failed and we were unable to recover it. 00:36:09.143 [2024-07-14 10:44:53.813913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.143 [2024-07-14 10:44:53.813944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.143 qpair failed and we were unable to recover it. 00:36:09.143 [2024-07-14 10:44:53.814082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.143 [2024-07-14 10:44:53.814112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.143 qpair failed and we were unable to recover it. 00:36:09.143 [2024-07-14 10:44:53.814290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.143 [2024-07-14 10:44:53.814321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.143 qpair failed and we were unable to recover it. 00:36:09.143 [2024-07-14 10:44:53.814434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.143 [2024-07-14 10:44:53.814464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.143 qpair failed and we were unable to recover it. 00:36:09.143 [2024-07-14 10:44:53.814601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.143 [2024-07-14 10:44:53.814631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.143 qpair failed and we were unable to recover it. 00:36:09.143 [2024-07-14 10:44:53.814806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.143 [2024-07-14 10:44:53.814841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.143 qpair failed and we were unable to recover it. 00:36:09.143 [2024-07-14 10:44:53.815018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.143 [2024-07-14 10:44:53.815047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.143 qpair failed and we were unable to recover it. 00:36:09.143 [2024-07-14 10:44:53.815294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.143 [2024-07-14 10:44:53.815325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.143 qpair failed and we were unable to recover it. 00:36:09.143 [2024-07-14 10:44:53.815436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.143 [2024-07-14 10:44:53.815465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.143 qpair failed and we were unable to recover it. 
00:36:09.143 [2024-07-14 10:44:53.815644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.143 [2024-07-14 10:44:53.815674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.143 qpair failed and we were unable to recover it. 00:36:09.143 [2024-07-14 10:44:53.815928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.143 [2024-07-14 10:44:53.815957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.143 qpair failed and we were unable to recover it. 00:36:09.143 [2024-07-14 10:44:53.816134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.143 [2024-07-14 10:44:53.816165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.143 qpair failed and we were unable to recover it. 00:36:09.143 [2024-07-14 10:44:53.816300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.143 [2024-07-14 10:44:53.816332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.143 qpair failed and we were unable to recover it. 00:36:09.143 [2024-07-14 10:44:53.816558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.144 [2024-07-14 10:44:53.816589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.144 qpair failed and we were unable to recover it. 00:36:09.144 [2024-07-14 10:44:53.816701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.144 [2024-07-14 10:44:53.816732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.144 qpair failed and we were unable to recover it. 00:36:09.144 [2024-07-14 10:44:53.816842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.144 [2024-07-14 10:44:53.816873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.144 qpair failed and we were unable to recover it. 00:36:09.144 [2024-07-14 10:44:53.817077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.144 [2024-07-14 10:44:53.817108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.144 qpair failed and we were unable to recover it. 00:36:09.144 [2024-07-14 10:44:53.817314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.144 [2024-07-14 10:44:53.817346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.144 qpair failed and we were unable to recover it. 00:36:09.144 [2024-07-14 10:44:53.817587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.144 [2024-07-14 10:44:53.817618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.144 qpair failed and we were unable to recover it. 
00:36:09.144 [2024-07-14 10:44:53.817746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.144 [2024-07-14 10:44:53.817777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.144 qpair failed and we were unable to recover it. 00:36:09.144 [2024-07-14 10:44:53.817892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.144 [2024-07-14 10:44:53.817923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.144 qpair failed and we were unable to recover it. 00:36:09.144 [2024-07-14 10:44:53.818192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.144 [2024-07-14 10:44:53.818222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.144 qpair failed and we were unable to recover it. 00:36:09.144 [2024-07-14 10:44:53.818436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.144 [2024-07-14 10:44:53.818467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.144 qpair failed and we were unable to recover it. 00:36:09.144 [2024-07-14 10:44:53.818629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.144 [2024-07-14 10:44:53.818660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.144 qpair failed and we were unable to recover it. 00:36:09.144 [2024-07-14 10:44:53.818850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.144 [2024-07-14 10:44:53.818880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.144 qpair failed and we were unable to recover it. 00:36:09.144 [2024-07-14 10:44:53.819054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.144 [2024-07-14 10:44:53.819084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.144 qpair failed and we were unable to recover it. 00:36:09.144 [2024-07-14 10:44:53.819201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.144 [2024-07-14 10:44:53.819240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.144 qpair failed and we were unable to recover it. 00:36:09.144 [2024-07-14 10:44:53.819418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.144 [2024-07-14 10:44:53.819447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.144 qpair failed and we were unable to recover it. 00:36:09.144 [2024-07-14 10:44:53.819584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.144 [2024-07-14 10:44:53.819614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.144 qpair failed and we were unable to recover it. 
00:36:09.144 [2024-07-14 10:44:53.819909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.144 [2024-07-14 10:44:53.819940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.144 qpair failed and we were unable to recover it. 00:36:09.144 [2024-07-14 10:44:53.820072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.144 [2024-07-14 10:44:53.820102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.144 qpair failed and we were unable to recover it. 00:36:09.144 [2024-07-14 10:44:53.820242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.144 [2024-07-14 10:44:53.820273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.144 qpair failed and we were unable to recover it. 00:36:09.144 [2024-07-14 10:44:53.820380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.144 [2024-07-14 10:44:53.820409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.144 qpair failed and we were unable to recover it. 00:36:09.144 [2024-07-14 10:44:53.820530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.144 [2024-07-14 10:44:53.820560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.144 qpair failed and we were unable to recover it. 00:36:09.144 [2024-07-14 10:44:53.820684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.144 [2024-07-14 10:44:53.820713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.144 qpair failed and we were unable to recover it. 00:36:09.144 [2024-07-14 10:44:53.820849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.144 [2024-07-14 10:44:53.820879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.144 qpair failed and we were unable to recover it. 00:36:09.144 [2024-07-14 10:44:53.821059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.144 [2024-07-14 10:44:53.821089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.144 qpair failed and we were unable to recover it. 00:36:09.144 [2024-07-14 10:44:53.821279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.144 [2024-07-14 10:44:53.821311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.144 qpair failed and we were unable to recover it. 00:36:09.144 [2024-07-14 10:44:53.821561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.144 [2024-07-14 10:44:53.821592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.144 qpair failed and we were unable to recover it. 
00:36:09.144 [2024-07-14 10:44:53.821789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.144 [2024-07-14 10:44:53.821820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.144 qpair failed and we were unable to recover it. 00:36:09.144 [2024-07-14 10:44:53.821949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.144 [2024-07-14 10:44:53.821978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.144 qpair failed and we were unable to recover it. 00:36:09.144 [2024-07-14 10:44:53.822156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.144 [2024-07-14 10:44:53.822186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.144 qpair failed and we were unable to recover it. 00:36:09.144 [2024-07-14 10:44:53.822312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.144 [2024-07-14 10:44:53.822343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.144 qpair failed and we were unable to recover it. 00:36:09.144 [2024-07-14 10:44:53.822521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.144 [2024-07-14 10:44:53.822550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.144 qpair failed and we were unable to recover it. 00:36:09.144 [2024-07-14 10:44:53.822678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.144 [2024-07-14 10:44:53.822708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.144 qpair failed and we were unable to recover it. 00:36:09.144 [2024-07-14 10:44:53.822910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.144 [2024-07-14 10:44:53.822940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.144 qpair failed and we were unable to recover it. 00:36:09.144 [2024-07-14 10:44:53.823129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.144 [2024-07-14 10:44:53.823159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.144 qpair failed and we were unable to recover it. 00:36:09.144 [2024-07-14 10:44:53.823276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.144 [2024-07-14 10:44:53.823306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.144 qpair failed and we were unable to recover it. 00:36:09.144 [2024-07-14 10:44:53.823449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.144 [2024-07-14 10:44:53.823479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.144 qpair failed and we were unable to recover it. 
00:36:09.144 [2024-07-14 10:44:53.823604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.144 [2024-07-14 10:44:53.823634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.144 qpair failed and we were unable to recover it. 00:36:09.144 [2024-07-14 10:44:53.823935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.144 [2024-07-14 10:44:53.823965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.144 qpair failed and we were unable to recover it. 00:36:09.144 [2024-07-14 10:44:53.824087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.144 [2024-07-14 10:44:53.824116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.144 qpair failed and we were unable to recover it. 00:36:09.144 [2024-07-14 10:44:53.824251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.144 [2024-07-14 10:44:53.824282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.144 qpair failed and we were unable to recover it. 00:36:09.144 [2024-07-14 10:44:53.824467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.144 [2024-07-14 10:44:53.824498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.144 qpair failed and we were unable to recover it. 00:36:09.144 [2024-07-14 10:44:53.824686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.145 [2024-07-14 10:44:53.824716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.145 qpair failed and we were unable to recover it. 00:36:09.145 [2024-07-14 10:44:53.824895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.145 [2024-07-14 10:44:53.824924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.145 qpair failed and we were unable to recover it. 00:36:09.145 [2024-07-14 10:44:53.825104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.145 [2024-07-14 10:44:53.825133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.145 qpair failed and we were unable to recover it. 00:36:09.145 [2024-07-14 10:44:53.825245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.145 [2024-07-14 10:44:53.825276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.145 qpair failed and we were unable to recover it. 00:36:09.145 [2024-07-14 10:44:53.825406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.145 [2024-07-14 10:44:53.825436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.145 qpair failed and we were unable to recover it. 
00:36:09.145 [2024-07-14 10:44:53.825548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.145 [2024-07-14 10:44:53.825578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.145 qpair failed and we were unable to recover it. 00:36:09.145 [2024-07-14 10:44:53.825707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.145 [2024-07-14 10:44:53.825736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.145 qpair failed and we were unable to recover it. 00:36:09.145 [2024-07-14 10:44:53.825853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.145 [2024-07-14 10:44:53.825884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.145 qpair failed and we were unable to recover it. 00:36:09.145 [2024-07-14 10:44:53.826079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.145 [2024-07-14 10:44:53.826108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.145 qpair failed and we were unable to recover it. 00:36:09.145 [2024-07-14 10:44:53.826286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.145 [2024-07-14 10:44:53.826315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.145 qpair failed and we were unable to recover it. 00:36:09.145 [2024-07-14 10:44:53.826446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.145 [2024-07-14 10:44:53.826476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.145 qpair failed and we were unable to recover it. 00:36:09.145 [2024-07-14 10:44:53.826596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.145 [2024-07-14 10:44:53.826625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.145 qpair failed and we were unable to recover it. 00:36:09.145 [2024-07-14 10:44:53.826822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.145 [2024-07-14 10:44:53.826851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.145 qpair failed and we were unable to recover it. 00:36:09.145 [2024-07-14 10:44:53.827039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.145 [2024-07-14 10:44:53.827068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.145 qpair failed and we were unable to recover it. 00:36:09.145 [2024-07-14 10:44:53.827251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.145 [2024-07-14 10:44:53.827282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.145 qpair failed and we were unable to recover it. 
00:36:09.145 [2024-07-14 10:44:53.827409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.145 [2024-07-14 10:44:53.827439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.145 qpair failed and we were unable to recover it. 00:36:09.145 [2024-07-14 10:44:53.827631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.145 [2024-07-14 10:44:53.827660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.145 qpair failed and we were unable to recover it. 00:36:09.145 [2024-07-14 10:44:53.827776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.145 [2024-07-14 10:44:53.827805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.145 qpair failed and we were unable to recover it. 00:36:09.145 [2024-07-14 10:44:53.827944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.145 [2024-07-14 10:44:53.827974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.145 qpair failed and we were unable to recover it. 00:36:09.145 [2024-07-14 10:44:53.828099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.145 [2024-07-14 10:44:53.828134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.145 qpair failed and we were unable to recover it. 00:36:09.145 [2024-07-14 10:44:53.828265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.145 [2024-07-14 10:44:53.828295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.145 qpair failed and we were unable to recover it. 00:36:09.145 [2024-07-14 10:44:53.828539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.145 [2024-07-14 10:44:53.828569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.145 qpair failed and we were unable to recover it. 00:36:09.145 [2024-07-14 10:44:53.828751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.145 [2024-07-14 10:44:53.828781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.145 qpair failed and we were unable to recover it. 00:36:09.145 [2024-07-14 10:44:53.828891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.145 [2024-07-14 10:44:53.828922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.145 qpair failed and we were unable to recover it. 00:36:09.145 [2024-07-14 10:44:53.829118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.145 [2024-07-14 10:44:53.829148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.145 qpair failed and we were unable to recover it. 
00:36:09.145 [2024-07-14 10:44:53.829262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.145 [2024-07-14 10:44:53.829292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.145 qpair failed and we were unable to recover it. 00:36:09.145 [2024-07-14 10:44:53.829476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.145 [2024-07-14 10:44:53.829506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.145 qpair failed and we were unable to recover it. 00:36:09.145 [2024-07-14 10:44:53.829698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.145 [2024-07-14 10:44:53.829728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.145 qpair failed and we were unable to recover it. 00:36:09.145 [2024-07-14 10:44:53.829841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.145 [2024-07-14 10:44:53.829870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.145 qpair failed and we were unable to recover it. 00:36:09.145 [2024-07-14 10:44:53.830050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.145 [2024-07-14 10:44:53.830079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.145 qpair failed and we were unable to recover it. 00:36:09.145 [2024-07-14 10:44:53.830260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.145 [2024-07-14 10:44:53.830291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.145 qpair failed and we were unable to recover it. 00:36:09.145 [2024-07-14 10:44:53.830406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.145 [2024-07-14 10:44:53.830436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.145 qpair failed and we were unable to recover it. 00:36:09.145 [2024-07-14 10:44:53.830553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.145 [2024-07-14 10:44:53.830582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.145 qpair failed and we were unable to recover it. 00:36:09.145 [2024-07-14 10:44:53.830785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.145 [2024-07-14 10:44:53.830816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.145 qpair failed and we were unable to recover it. 00:36:09.145 [2024-07-14 10:44:53.830998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.145 [2024-07-14 10:44:53.831028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.145 qpair failed and we were unable to recover it. 
00:36:09.146 [2024-07-14 10:44:53.831159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.146 [2024-07-14 10:44:53.831189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.146 qpair failed and we were unable to recover it. 00:36:09.146 [2024-07-14 10:44:53.831320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.146 [2024-07-14 10:44:53.831349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.146 qpair failed and we were unable to recover it. 00:36:09.146 [2024-07-14 10:44:53.831473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.146 [2024-07-14 10:44:53.831502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.146 qpair failed and we were unable to recover it. 00:36:09.146 [2024-07-14 10:44:53.831672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.146 [2024-07-14 10:44:53.831703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.146 qpair failed and we were unable to recover it. 00:36:09.146 [2024-07-14 10:44:53.831893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.146 [2024-07-14 10:44:53.831923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.146 qpair failed and we were unable to recover it. 00:36:09.146 [2024-07-14 10:44:53.832053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.146 [2024-07-14 10:44:53.832082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.146 qpair failed and we were unable to recover it. 00:36:09.146 [2024-07-14 10:44:53.832200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.146 [2024-07-14 10:44:53.832240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.146 qpair failed and we were unable to recover it. 00:36:09.146 [2024-07-14 10:44:53.832423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.146 [2024-07-14 10:44:53.832454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.146 qpair failed and we were unable to recover it. 00:36:09.146 [2024-07-14 10:44:53.832704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.146 [2024-07-14 10:44:53.832734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.146 qpair failed and we were unable to recover it. 00:36:09.146 [2024-07-14 10:44:53.832922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.146 [2024-07-14 10:44:53.832953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.146 qpair failed and we were unable to recover it. 
00:36:09.146 [2024-07-14 10:44:53.833063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.146 [2024-07-14 10:44:53.833093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.146 qpair failed and we were unable to recover it. 00:36:09.146 [2024-07-14 10:44:53.833223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.146 [2024-07-14 10:44:53.833270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.146 qpair failed and we were unable to recover it. 00:36:09.146 [2024-07-14 10:44:53.833398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.146 [2024-07-14 10:44:53.833427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.146 qpair failed and we were unable to recover it. 00:36:09.146 [2024-07-14 10:44:53.833542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.146 [2024-07-14 10:44:53.833571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.146 qpair failed and we were unable to recover it. 00:36:09.146 [2024-07-14 10:44:53.833751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.146 [2024-07-14 10:44:53.833781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.146 qpair failed and we were unable to recover it. 00:36:09.146 [2024-07-14 10:44:53.833899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.146 [2024-07-14 10:44:53.833929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.146 qpair failed and we were unable to recover it. 00:36:09.146 [2024-07-14 10:44:53.834051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.146 [2024-07-14 10:44:53.834080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.146 qpair failed and we were unable to recover it. 00:36:09.146 [2024-07-14 10:44:53.834204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.146 [2024-07-14 10:44:53.834245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.146 qpair failed and we were unable to recover it. 00:36:09.146 [2024-07-14 10:44:53.834371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.146 [2024-07-14 10:44:53.834401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.146 qpair failed and we were unable to recover it. 00:36:09.146 [2024-07-14 10:44:53.834517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.146 [2024-07-14 10:44:53.834547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.146 qpair failed and we were unable to recover it. 
00:36:09.146 [2024-07-14 10:44:53.834666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.146 [2024-07-14 10:44:53.834696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.146 qpair failed and we were unable to recover it. 00:36:09.146 [2024-07-14 10:44:53.834816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.146 [2024-07-14 10:44:53.834845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.146 qpair failed and we were unable to recover it. 00:36:09.146 [2024-07-14 10:44:53.835026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.146 [2024-07-14 10:44:53.835056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.146 qpair failed and we were unable to recover it. 00:36:09.146 [2024-07-14 10:44:53.835184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.146 [2024-07-14 10:44:53.835213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.146 qpair failed and we were unable to recover it. 00:36:09.146 [2024-07-14 10:44:53.835402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.146 [2024-07-14 10:44:53.835433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.146 qpair failed and we were unable to recover it. 00:36:09.146 [2024-07-14 10:44:53.835578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.146 [2024-07-14 10:44:53.835608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.146 qpair failed and we were unable to recover it. 00:36:09.146 [2024-07-14 10:44:53.835802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.146 [2024-07-14 10:44:53.835832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.146 qpair failed and we were unable to recover it. 00:36:09.146 [2024-07-14 10:44:53.835947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.146 [2024-07-14 10:44:53.835976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.146 qpair failed and we were unable to recover it. 00:36:09.146 [2024-07-14 10:44:53.836083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.146 [2024-07-14 10:44:53.836112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.146 qpair failed and we were unable to recover it. 00:36:09.146 [2024-07-14 10:44:53.836273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.146 [2024-07-14 10:44:53.836304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420 00:36:09.146 qpair failed and we were unable to recover it. 
00:36:09.146 [2024-07-14 10:44:53.836550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:09.146 [2024-07-14 10:44:53.836581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1fb60 with addr=10.0.0.2, port=4420
00:36:09.146 qpair failed and we were unable to recover it.
00:36:09.146 [... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." record repeats for tqpair=0x1b1fb60 through 10:44:53.840582, then for tqpair=0x7fbe84000b90 from 10:44:53.840713 through 10:44:53.844046 ...]
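For readers skimming the failure pattern above: errno = 111 on Linux is ECONNREFUSED, i.e. each TCP connection attempt to 10.0.0.2:4420 is rejected because nothing is listening on that port at that moment, which is expected while the disconnect test has the target side down or still starting. The following is a minimal standalone sketch of the condition posix_sock_create is reporting; it is illustrative only, not SPDK's code, with only the address and port taken from the log.

#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

/* Attempt one TCP connect() to addr:port and report errno on failure.
 * Connecting to a port with no listener typically fails with
 * errno 111 (ECONNREFUSED) on Linux, which is what the log shows. */
static int try_connect(const char *addr, uint16_t port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return -1;
    }

    struct sockaddr_in sa = {0};
    sa.sin_family = AF_INET;
    sa.sin_port = htons(port);
    inet_pton(AF_INET, addr, &sa.sin_addr);

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
        int err = errno;   /* save errno before close() can change it */
        fprintf(stderr, "connect() failed, errno = %d (%s)\n",
                err, strerror(err));
        close(fd);
        return -err;
    }

    close(fd);
    return 0;
}

int main(void)
{
    return try_connect("10.0.0.2", 4420) == 0 ? 0 : 1;
}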
00:36:09.147 [... connect() failed (errno = 111) / sock connection error records for tqpair=0x7fbe84000b90 continue from 10:44:53.844221 through 10:44:53.847209, interleaved with the following trace lines from the test harness ...]
00:36:09.147 10:44:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:36:09.147 10:44:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0
00:36:09.147 10:44:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:36:09.147 10:44:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable
00:36:09.147 10:44:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:09.148 [... identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." records for tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 repeat from 10:44:53.847356 through 10:44:53.858671 ...]
00:36:09.149 [... connect() failed (errno = 111) records for tqpair=0x7fbe84000b90 continue through 10:44:53.859435; from 10:44:53.859580 through 10:44:53.860194 the same failure is logged for tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420; from 10:44:53.860337 it returns to tqpair=0x7fbe84000b90 ...]
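The tqpair value in these records is a pointer, and it changes over the run (0x1b1fb60 earlier, then 0x7fbe84000b90, briefly 0x7fbe7c000b90), which suggests more than one qpair object dialing the same unreachable target; each one keeps retrying and eventually logs "qpair failed and we were unable to recover it." A hedged sketch of that bounded retry-then-give-up pattern follows; it is an illustration of the behaviour the log implies, not SPDK's nvme_tcp implementation, and the attempt count and delay are invented for the example.

#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

/* Retry a TCP connect a bounded number of times, then give up.
 * While no listener is present each attempt fails fast with
 * ECONNREFUSED; after the last attempt the caller declares the
 * connection unrecoverable, as the log does for each qpair. */
static int connect_with_retry(const char *addr, uint16_t port,
                              int max_attempts, unsigned delay_ms)
{
    struct sockaddr_in sa = {0};
    sa.sin_family = AF_INET;
    sa.sin_port = htons(port);
    inet_pton(AF_INET, addr, &sa.sin_addr);

    for (int attempt = 1; attempt <= max_attempts; attempt++) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            return -errno;
        }
        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) == 0) {
            return fd;                    /* connected: hand back the socket */
        }
        fprintf(stderr, "attempt %d: connect() failed, errno = %d (%s)\n",
                attempt, errno, strerror(errno));
        close(fd);
        usleep(delay_ms * 1000);          /* brief pause before retrying */
    }
    fprintf(stderr, "giving up: connection failed and could not be recovered\n");
    return -1;
}

int main(void)
{
    /* 5 attempts and 200 ms are arbitrary values for the illustration. */
    return connect_with_retry("10.0.0.2", 4420, 5, 200) >= 0 ? 0 : 1;
}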
00:36:09.150 [... identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." records for tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 repeat from 10:44:53.860672 through 10:44:53.874788 ...]
00:36:09.151 [2024-07-14 10:44:53.874921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.151 [2024-07-14 10:44:53.874951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:09.151 qpair failed and we were unable to recover it. 00:36:09.151 [2024-07-14 10:44:53.875182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.152 [2024-07-14 10:44:53.875212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:09.152 qpair failed and we were unable to recover it. 00:36:09.152 [2024-07-14 10:44:53.875379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.152 [2024-07-14 10:44:53.875409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:09.152 qpair failed and we were unable to recover it. 00:36:09.152 [2024-07-14 10:44:53.875521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.152 [2024-07-14 10:44:53.875550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:09.152 qpair failed and we were unable to recover it. 00:36:09.152 [2024-07-14 10:44:53.875670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.152 [2024-07-14 10:44:53.875700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:09.152 qpair failed and we were unable to recover it. 00:36:09.152 [2024-07-14 10:44:53.875825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.152 [2024-07-14 10:44:53.875856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:09.152 qpair failed and we were unable to recover it. 00:36:09.152 [2024-07-14 10:44:53.875989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.152 [2024-07-14 10:44:53.876019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:09.152 qpair failed and we were unable to recover it. 00:36:09.152 [2024-07-14 10:44:53.876140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.152 [2024-07-14 10:44:53.876170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:09.152 qpair failed and we were unable to recover it. 00:36:09.152 [2024-07-14 10:44:53.876369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.152 [2024-07-14 10:44:53.876399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:09.152 qpair failed and we were unable to recover it. 00:36:09.152 [2024-07-14 10:44:53.876536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.152 [2024-07-14 10:44:53.876566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:09.152 qpair failed and we were unable to recover it. 
00:36:09.152 [2024-07-14 10:44:53.876698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.152 [2024-07-14 10:44:53.876729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:09.152 qpair failed and we were unable to recover it. 00:36:09.152 [2024-07-14 10:44:53.876849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.152 [2024-07-14 10:44:53.876879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:09.152 qpair failed and we were unable to recover it. 00:36:09.152 [2024-07-14 10:44:53.877012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.152 [2024-07-14 10:44:53.877042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:09.152 qpair failed and we were unable to recover it. 00:36:09.152 [2024-07-14 10:44:53.877164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.152 [2024-07-14 10:44:53.877198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:09.152 qpair failed and we were unable to recover it. 00:36:09.152 [2024-07-14 10:44:53.877345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.152 [2024-07-14 10:44:53.877376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:09.152 qpair failed and we were unable to recover it. 00:36:09.152 [2024-07-14 10:44:53.877555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.152 [2024-07-14 10:44:53.877585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:09.152 qpair failed and we were unable to recover it. 00:36:09.152 [2024-07-14 10:44:53.877765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.152 [2024-07-14 10:44:53.877794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:09.152 qpair failed and we were unable to recover it. 00:36:09.152 [2024-07-14 10:44:53.877930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.152 [2024-07-14 10:44:53.877960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:09.152 qpair failed and we were unable to recover it. 00:36:09.152 [2024-07-14 10:44:53.878069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.152 [2024-07-14 10:44:53.878099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:09.152 qpair failed and we were unable to recover it. 00:36:09.152 [2024-07-14 10:44:53.878215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.152 [2024-07-14 10:44:53.878254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:09.152 qpair failed and we were unable to recover it. 
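[Note, not part of the captured output: errno 111 on Linux is ECONNREFUSED, i.e. the host's connect() to 10.0.0.2:4420 is actively refused because nothing is listening there while the disconnect test has the target down, so the initiator keeps retrying. A quick way to confirm the errno mapping on the build host; the header path is the usual Linux location, an assumption rather than something this log shows:]
  # ECONNREFUSED is defined as 111 in the kernel UAPI headers on Linux.
  grep -n 'ECONNREFUSED' /usr/include/asm-generic/errno.h
  # typical output: #define ECONNREFUSED 111 /* Connection refused */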
[log condensed: five more identical errno = 111 records for tqpair=0x7fbe84000b90 between 10:44:53.878 and 10:44:53.879]
00:36:09.152 10:44:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
[log condensed: identical records continue at 10:44:53.879 around the next trace line]
00:36:09.152 10:44:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
[log condensed: identical records continue through 10:44:53.879]
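[Note, not part of the captured output: the two trace lines above show tc2 registering its cleanup trap (dump shared memory if possible, then nvmftestfini on SIGINT/SIGTERM/EXIT) and then creating the backing bdev over JSON-RPC. A sketch of the same step outside the harness; the rpc.py path and default RPC socket are assumptions, not taken from this log:]
  # Create a 64 MiB RAM-backed bdev with 512-byte blocks, named Malloc0;
  # rpc_cmd in the test scripts forwards this same call to the running target.
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  # On success the RPC prints the new bdev's name, which matches the bare
  # "Malloc0" line that appears further down in this log.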
00:36:09.152 10:44:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:09.152 10:44:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[log condensed: the errno = 111 reconnect-failure records continue around these trace lines from 10:44:53.879 through 10:44:53.881, all for tqpair=0x7fbe84000b90, addr=10.0.0.2, port=4420]
00:36:09.152 [2024-07-14 10:44:53.881322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:09.152 [2024-07-14 10:44:53.881357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420
00:36:09.152 qpair failed and we were unable to recover it.
[log condensed: the same record repeats for every reconnect attempt from 10:44:53.881 through 10:44:53.888, now against the second qpair (tqpair=0x7fbe7c000b90), addr=10.0.0.2, port=4420]
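[Note, not part of the captured output: the tqpair value in these errors alternates between 0x7fbe84000b90 and 0x7fbe7c000b90, which suggests two distinct qpair objects are cycling through the same refused connection. If the console output is saved to a file, a per-qpair failure tally can be made with a hypothetical helper like this (file name assumed):]
  # Count reconnect failures per qpair pointer in a saved console log.
  grep -o 'tqpair=0x[0-9a-f]*' console.log | sort | uniq -c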
00:36:09.154 [2024-07-14 10:44:53.888260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:09.154 [2024-07-14 10:44:53.888295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420
00:36:09.154 qpair failed and we were unable to recover it.
[log condensed: the same record repeats from 10:44:53.888 through 10:44:53.895, back on the first qpair (tqpair=0x7fbe84000b90), addr=10.0.0.2, port=4420]
[log condensed: the errno = 111 reconnect-failure records switch back to the second qpair (tqpair=0x7fbe7c000b90) and repeat from 10:44:53.895 through 10:44:53.898, addr=10.0.0.2, port=4420, each ending with "qpair failed and we were unable to recover it."]
[log condensed: identical records continue for tqpair=0x7fbe7c000b90 from 10:44:53.899 through 10:44:53.900]
00:36:09.155 Malloc0
[log condensed: one more identical record at 10:44:53.900]
00:36:09.155 [2024-07-14 10:44:53.900831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.155 [2024-07-14 10:44:53.900860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.155 qpair failed and we were unable to recover it. 00:36:09.155 [2024-07-14 10:44:53.900980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.155 [2024-07-14 10:44:53.901010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.155 qpair failed and we were unable to recover it. 00:36:09.155 [2024-07-14 10:44:53.901185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.155 [2024-07-14 10:44:53.901215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.155 qpair failed and we were unable to recover it. 00:36:09.155 10:44:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.155 [2024-07-14 10:44:53.901343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.155 [2024-07-14 10:44:53.901374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.155 qpair failed and we were unable to recover it. 00:36:09.155 [2024-07-14 10:44:53.901485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.155 [2024-07-14 10:44:53.901514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.155 qpair failed and we were unable to recover it. 00:36:09.155 [2024-07-14 10:44:53.901642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.155 [2024-07-14 10:44:53.901675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.155 qpair failed and we were unable to recover it. 00:36:09.155 10:44:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:36:09.155 [2024-07-14 10:44:53.901785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.155 [2024-07-14 10:44:53.901815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.155 qpair failed and we were unable to recover it. 00:36:09.155 [2024-07-14 10:44:53.901960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.155 [2024-07-14 10:44:53.901990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.155 qpair failed and we were unable to recover it. 00:36:09.155 10:44:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.156 [2024-07-14 10:44:53.902204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.156 [2024-07-14 10:44:53.902247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.156 qpair failed and we were unable to recover it. 
00:36:09.156 10:44:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:09.156 [2024-07-14 10:44:53.902374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.156 [2024-07-14 10:44:53.902405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.156 qpair failed and we were unable to recover it. 00:36:09.156 [2024-07-14 10:44:53.902532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.156 [2024-07-14 10:44:53.902561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.156 qpair failed and we were unable to recover it. 00:36:09.156 [2024-07-14 10:44:53.902670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.156 [2024-07-14 10:44:53.902699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.156 qpair failed and we were unable to recover it. 00:36:09.156 [2024-07-14 10:44:53.902804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.156 [2024-07-14 10:44:53.902834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.156 qpair failed and we were unable to recover it. 00:36:09.156 [2024-07-14 10:44:53.902947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.156 [2024-07-14 10:44:53.902977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.156 qpair failed and we were unable to recover it. 00:36:09.156 [2024-07-14 10:44:53.903158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.156 [2024-07-14 10:44:53.903188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.156 qpair failed and we were unable to recover it. 00:36:09.156 [2024-07-14 10:44:53.903337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.156 [2024-07-14 10:44:53.903368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.156 qpair failed and we were unable to recover it. 00:36:09.156 [2024-07-14 10:44:53.903491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.156 [2024-07-14 10:44:53.903521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.156 qpair failed and we were unable to recover it. 00:36:09.156 [2024-07-14 10:44:53.903646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.156 [2024-07-14 10:44:53.903675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.156 qpair failed and we were unable to recover it. 
00:36:09.156 [2024-07-14 10:44:53.903796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.156 [2024-07-14 10:44:53.903826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.156 qpair failed and we were unable to recover it. 00:36:09.156 [2024-07-14 10:44:53.903959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.156 [2024-07-14 10:44:53.903988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.156 qpair failed and we were unable to recover it. 00:36:09.156 [2024-07-14 10:44:53.904196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.156 [2024-07-14 10:44:53.904236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.156 qpair failed and we were unable to recover it. 00:36:09.156 [2024-07-14 10:44:53.904480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.156 [2024-07-14 10:44:53.904510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.156 qpair failed and we were unable to recover it. 00:36:09.156 [2024-07-14 10:44:53.904768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.156 [2024-07-14 10:44:53.904797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.156 qpair failed and we were unable to recover it. 00:36:09.156 [2024-07-14 10:44:53.905009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.156 [2024-07-14 10:44:53.905039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.156 qpair failed and we were unable to recover it. 00:36:09.156 [2024-07-14 10:44:53.905150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.156 [2024-07-14 10:44:53.905179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.156 qpair failed and we were unable to recover it. 00:36:09.156 [2024-07-14 10:44:53.905332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.156 [2024-07-14 10:44:53.905364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.156 qpair failed and we were unable to recover it. 00:36:09.156 [2024-07-14 10:44:53.905482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.156 [2024-07-14 10:44:53.905512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.156 qpair failed and we were unable to recover it. 00:36:09.156 [2024-07-14 10:44:53.905635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.156 [2024-07-14 10:44:53.905666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.156 qpair failed and we were unable to recover it. 
00:36:09.156 [2024-07-14 10:44:53.905861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.156 [2024-07-14 10:44:53.905891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.156 qpair failed and we were unable to recover it. 00:36:09.156 [2024-07-14 10:44:53.906015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.156 [2024-07-14 10:44:53.906044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.156 qpair failed and we were unable to recover it. 00:36:09.156 [2024-07-14 10:44:53.906236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.156 [2024-07-14 10:44:53.906268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.156 qpair failed and we were unable to recover it. 00:36:09.156 [2024-07-14 10:44:53.906520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.156 [2024-07-14 10:44:53.906549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.156 qpair failed and we were unable to recover it. 00:36:09.156 [2024-07-14 10:44:53.906658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.156 [2024-07-14 10:44:53.906688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.156 qpair failed and we were unable to recover it. 00:36:09.156 [2024-07-14 10:44:53.906823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.156 [2024-07-14 10:44:53.906854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.156 qpair failed and we were unable to recover it. 00:36:09.156 [2024-07-14 10:44:53.906973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.156 [2024-07-14 10:44:53.907002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.156 qpair failed and we were unable to recover it. 00:36:09.156 [2024-07-14 10:44:53.907194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.156 [2024-07-14 10:44:53.907241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.156 qpair failed and we were unable to recover it. 00:36:09.156 [2024-07-14 10:44:53.907351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.156 [2024-07-14 10:44:53.907380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.156 qpair failed and we were unable to recover it. 00:36:09.156 [2024-07-14 10:44:53.907506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.156 [2024-07-14 10:44:53.907537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.156 qpair failed and we were unable to recover it. 
00:36:09.156 [2024-07-14 10:44:53.907652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.156 [2024-07-14 10:44:53.907682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.156 qpair failed and we were unable to recover it. 00:36:09.156 [2024-07-14 10:44:53.907804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.156 [2024-07-14 10:44:53.907833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.156 qpair failed and we were unable to recover it. 00:36:09.156 [2024-07-14 10:44:53.908011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.156 [2024-07-14 10:44:53.908041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.156 qpair failed and we were unable to recover it. 00:36:09.156 [2024-07-14 10:44:53.908091] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:09.156 [2024-07-14 10:44:53.908152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.156 [2024-07-14 10:44:53.908182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.156 qpair failed and we were unable to recover it. 00:36:09.156 [2024-07-14 10:44:53.908379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.156 [2024-07-14 10:44:53.908410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.156 qpair failed and we were unable to recover it. 00:36:09.156 [2024-07-14 10:44:53.908658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.156 [2024-07-14 10:44:53.908689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.156 qpair failed and we were unable to recover it. 00:36:09.156 [2024-07-14 10:44:53.908800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.156 [2024-07-14 10:44:53.908831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.156 qpair failed and we were unable to recover it. 00:36:09.156 [2024-07-14 10:44:53.909015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.156 [2024-07-14 10:44:53.909044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.156 qpair failed and we were unable to recover it. 00:36:09.156 [2024-07-14 10:44:53.909153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.156 [2024-07-14 10:44:53.909183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.156 qpair failed and we were unable to recover it. 
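Interleaved with the connect noise, the target side of test case tc2 is being brought up. The traced rpc_cmd call from host/target_disconnect.sh@21 creates the TCP transport, and the "tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***" line above is the target acknowledging it. rpc_cmd is the autotest harness's wrapper around SPDK's scripts/rpc.py; a sketch of the equivalent direct invocation, assuming a running target on the default RPC socket:

  # Create the NVMe-oF TCP transport on the running target, with the same
  # arguments the harness traced above.
  scripts/rpc.py nvmf_create_transport -t tcp -o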
00:36:09.156 [2024-07-14 10:44:53.909409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.156 [2024-07-14 10:44:53.909441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.156 qpair failed and we were unable to recover it. 00:36:09.157 [2024-07-14 10:44:53.909547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.157 [2024-07-14 10:44:53.909582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.157 qpair failed and we were unable to recover it. 00:36:09.157 [2024-07-14 10:44:53.909765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.157 [2024-07-14 10:44:53.909795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.157 qpair failed and we were unable to recover it. 00:36:09.157 [2024-07-14 10:44:53.909934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.157 [2024-07-14 10:44:53.909964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.157 qpair failed and we were unable to recover it. 00:36:09.157 [2024-07-14 10:44:53.910086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.157 [2024-07-14 10:44:53.910116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.157 qpair failed and we were unable to recover it. 00:36:09.157 [2024-07-14 10:44:53.910306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.157 [2024-07-14 10:44:53.910337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.157 qpair failed and we were unable to recover it. 00:36:09.157 [2024-07-14 10:44:53.910535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.157 [2024-07-14 10:44:53.910565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.157 qpair failed and we were unable to recover it. 00:36:09.157 [2024-07-14 10:44:53.910688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.157 [2024-07-14 10:44:53.910717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.157 qpair failed and we were unable to recover it. 00:36:09.157 [2024-07-14 10:44:53.910833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.157 [2024-07-14 10:44:53.910862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.157 qpair failed and we were unable to recover it. 00:36:09.157 [2024-07-14 10:44:53.910980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.157 [2024-07-14 10:44:53.911010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.157 qpair failed and we were unable to recover it. 
00:36:09.157 [2024-07-14 10:44:53.911129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.157 [2024-07-14 10:44:53.911159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.157 qpair failed and we were unable to recover it. 00:36:09.157 [2024-07-14 10:44:53.911282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.157 [2024-07-14 10:44:53.911313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.157 qpair failed and we were unable to recover it. 00:36:09.157 [2024-07-14 10:44:53.911487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.157 [2024-07-14 10:44:53.911516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.157 qpair failed and we were unable to recover it. 00:36:09.157 [2024-07-14 10:44:53.911648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.157 [2024-07-14 10:44:53.911677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.157 qpair failed and we were unable to recover it. 00:36:09.157 [2024-07-14 10:44:53.911809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.157 [2024-07-14 10:44:53.911839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.157 qpair failed and we were unable to recover it. 00:36:09.157 [2024-07-14 10:44:53.911973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.157 [2024-07-14 10:44:53.912004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.157 qpair failed and we were unable to recover it. 00:36:09.157 [2024-07-14 10:44:53.912123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.157 [2024-07-14 10:44:53.912153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.157 qpair failed and we were unable to recover it. 00:36:09.157 [2024-07-14 10:44:53.912267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.157 [2024-07-14 10:44:53.912297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.157 qpair failed and we were unable to recover it. 00:36:09.157 [2024-07-14 10:44:53.912416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.157 [2024-07-14 10:44:53.912446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.157 qpair failed and we were unable to recover it. 00:36:09.157 [2024-07-14 10:44:53.912624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.157 [2024-07-14 10:44:53.912654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.157 qpair failed and we were unable to recover it. 
00:36:09.157 [2024-07-14 10:44:53.912889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.157 [2024-07-14 10:44:53.912918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.157 qpair failed and we were unable to recover it. 00:36:09.157 [2024-07-14 10:44:53.913032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.157 [2024-07-14 10:44:53.913062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.157 qpair failed and we were unable to recover it. 00:36:09.157 [2024-07-14 10:44:53.913253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.157 [2024-07-14 10:44:53.913283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.157 qpair failed and we were unable to recover it. 00:36:09.157 [2024-07-14 10:44:53.913460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.157 [2024-07-14 10:44:53.913489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.157 qpair failed and we were unable to recover it. 00:36:09.157 [2024-07-14 10:44:53.913610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.157 [2024-07-14 10:44:53.913640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.157 qpair failed and we were unable to recover it. 00:36:09.157 [2024-07-14 10:44:53.913758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.157 [2024-07-14 10:44:53.913787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.157 qpair failed and we were unable to recover it. 00:36:09.157 [2024-07-14 10:44:53.913923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.157 [2024-07-14 10:44:53.913953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.157 qpair failed and we were unable to recover it. 00:36:09.157 [2024-07-14 10:44:53.914194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.157 [2024-07-14 10:44:53.914223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.157 qpair failed and we were unable to recover it. 00:36:09.157 [2024-07-14 10:44:53.914362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.157 [2024-07-14 10:44:53.914392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.157 qpair failed and we were unable to recover it. 00:36:09.157 [2024-07-14 10:44:53.914505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.157 [2024-07-14 10:44:53.914535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.157 qpair failed and we were unable to recover it. 
00:36:09.157 [2024-07-14 10:44:53.914648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.157 [2024-07-14 10:44:53.914678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.157 qpair failed and we were unable to recover it. 00:36:09.157 [2024-07-14 10:44:53.914931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.157 [2024-07-14 10:44:53.914960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.157 qpair failed and we were unable to recover it. 00:36:09.157 [2024-07-14 10:44:53.915084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.157 [2024-07-14 10:44:53.915113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.157 qpair failed and we were unable to recover it. 00:36:09.157 [2024-07-14 10:44:53.915297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.157 [2024-07-14 10:44:53.915328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.157 qpair failed and we were unable to recover it. 00:36:09.157 [2024-07-14 10:44:53.915465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.157 [2024-07-14 10:44:53.915495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.157 qpair failed and we were unable to recover it. 00:36:09.157 [2024-07-14 10:44:53.915619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.157 [2024-07-14 10:44:53.915648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.157 qpair failed and we were unable to recover it. 00:36:09.157 [2024-07-14 10:44:53.915849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.157 [2024-07-14 10:44:53.915879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.157 qpair failed and we were unable to recover it. 00:36:09.157 [2024-07-14 10:44:53.916010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.157 [2024-07-14 10:44:53.916039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.157 qpair failed and we were unable to recover it. 00:36:09.157 [2024-07-14 10:44:53.916240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.157 [2024-07-14 10:44:53.916271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.157 qpair failed and we were unable to recover it. 00:36:09.157 [2024-07-14 10:44:53.916395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.157 [2024-07-14 10:44:53.916425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.157 qpair failed and we were unable to recover it. 
00:36:09.157 [2024-07-14 10:44:53.916550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.157 [2024-07-14 10:44:53.916579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.157 qpair failed and we were unable to recover it. 00:36:09.157 [2024-07-14 10:44:53.916759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.157 10:44:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.158 [2024-07-14 10:44:53.916796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.158 qpair failed and we were unable to recover it. 00:36:09.158 [2024-07-14 10:44:53.916917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.158 [2024-07-14 10:44:53.916946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.158 qpair failed and we were unable to recover it. 00:36:09.158 [2024-07-14 10:44:53.917075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.158 [2024-07-14 10:44:53.917106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.158 qpair failed and we were unable to recover it. 00:36:09.158 10:44:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:09.158 [2024-07-14 10:44:53.917247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.158 [2024-07-14 10:44:53.917278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.158 qpair failed and we were unable to recover it. 00:36:09.158 [2024-07-14 10:44:53.917394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.158 [2024-07-14 10:44:53.917424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.158 qpair failed and we were unable to recover it. 00:36:09.158 10:44:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.158 [2024-07-14 10:44:53.917609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.158 [2024-07-14 10:44:53.917639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.158 qpair failed and we were unable to recover it. 00:36:09.158 10:44:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:09.158 [2024-07-14 10:44:53.917756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.158 [2024-07-14 10:44:53.917787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.158 qpair failed and we were unable to recover it. 
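The next traced step (host/target_disconnect.sh@22) creates the NVMe-oF subsystem the host will eventually connect to; in rpc.py terms, -a allows any host NQN to connect and -s sets the subsystem serial number. A sketch of the direct form of the same call:

  # Create subsystem cnode1, allow any host, and set its serial number.
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001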
00:36:09.158 [2024-07-14 10:44:53.917931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.158 [2024-07-14 10:44:53.917960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.158 qpair failed and we were unable to recover it. 00:36:09.158 [2024-07-14 10:44:53.918168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.158 [2024-07-14 10:44:53.918197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.158 qpair failed and we were unable to recover it. 00:36:09.158 [2024-07-14 10:44:53.918338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.158 [2024-07-14 10:44:53.918369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.158 qpair failed and we were unable to recover it. 00:36:09.158 [2024-07-14 10:44:53.918479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.158 [2024-07-14 10:44:53.918508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.158 qpair failed and we were unable to recover it. 00:36:09.158 [2024-07-14 10:44:53.918754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.158 [2024-07-14 10:44:53.918783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.158 qpair failed and we were unable to recover it. 00:36:09.158 [2024-07-14 10:44:53.918985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.158 [2024-07-14 10:44:53.919015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.158 qpair failed and we were unable to recover it. 00:36:09.158 [2024-07-14 10:44:53.919264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.158 [2024-07-14 10:44:53.919296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.158 qpair failed and we were unable to recover it. 00:36:09.158 [2024-07-14 10:44:53.919425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.158 [2024-07-14 10:44:53.919454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.158 qpair failed and we were unable to recover it. 00:36:09.158 [2024-07-14 10:44:53.919578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.158 [2024-07-14 10:44:53.919609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.158 qpair failed and we were unable to recover it. 00:36:09.158 [2024-07-14 10:44:53.919737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.158 [2024-07-14 10:44:53.919767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.158 qpair failed and we were unable to recover it. 
00:36:09.158 [2024-07-14 10:44:53.919899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.158 [2024-07-14 10:44:53.919929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.158 qpair failed and we were unable to recover it. 00:36:09.158 [2024-07-14 10:44:53.920050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.158 [2024-07-14 10:44:53.920080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.158 qpair failed and we were unable to recover it. 00:36:09.158 [2024-07-14 10:44:53.920184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.158 [2024-07-14 10:44:53.920214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.158 qpair failed and we were unable to recover it. 00:36:09.158 [2024-07-14 10:44:53.920401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.158 [2024-07-14 10:44:53.920431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.158 qpair failed and we were unable to recover it. 00:36:09.158 [2024-07-14 10:44:53.920614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.158 [2024-07-14 10:44:53.920644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.158 qpair failed and we were unable to recover it. 00:36:09.158 [2024-07-14 10:44:53.920751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.158 [2024-07-14 10:44:53.920780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.158 qpair failed and we were unable to recover it. 00:36:09.158 [2024-07-14 10:44:53.920966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.158 [2024-07-14 10:44:53.920996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.158 qpair failed and we were unable to recover it. 00:36:09.158 [2024-07-14 10:44:53.921104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.158 [2024-07-14 10:44:53.921133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.158 qpair failed and we were unable to recover it. 00:36:09.158 [2024-07-14 10:44:53.921263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.158 [2024-07-14 10:44:53.921300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.158 qpair failed and we were unable to recover it. 00:36:09.158 [2024-07-14 10:44:53.921479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.158 [2024-07-14 10:44:53.921508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.158 qpair failed and we were unable to recover it. 
00:36:09.158 [2024-07-14 10:44:53.921699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.158 [2024-07-14 10:44:53.921729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.158 qpair failed and we were unable to recover it. 00:36:09.158 [2024-07-14 10:44:53.921906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.158 [2024-07-14 10:44:53.921936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.158 qpair failed and we were unable to recover it. 00:36:09.158 [2024-07-14 10:44:53.922067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.158 [2024-07-14 10:44:53.922097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.158 qpair failed and we were unable to recover it. 00:36:09.158 [2024-07-14 10:44:53.922217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.158 [2024-07-14 10:44:53.922255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.158 qpair failed and we were unable to recover it. 00:36:09.158 [2024-07-14 10:44:53.922527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.158 [2024-07-14 10:44:53.922557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.158 qpair failed and we were unable to recover it. 00:36:09.158 [2024-07-14 10:44:53.922689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.158 [2024-07-14 10:44:53.922719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.158 qpair failed and we were unable to recover it. 00:36:09.158 [2024-07-14 10:44:53.922843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.158 [2024-07-14 10:44:53.922873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.158 qpair failed and we were unable to recover it. 00:36:09.158 [2024-07-14 10:44:53.922997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.158 [2024-07-14 10:44:53.923027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.158 qpair failed and we were unable to recover it. 00:36:09.159 [2024-07-14 10:44:53.923145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.159 [2024-07-14 10:44:53.923175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.159 qpair failed and we were unable to recover it. 00:36:09.159 [2024-07-14 10:44:53.923362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.159 [2024-07-14 10:44:53.923393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.159 qpair failed and we were unable to recover it. 
00:36:09.159 [2024-07-14 10:44:53.923581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.159 [2024-07-14 10:44:53.923611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.159 qpair failed and we were unable to recover it. 00:36:09.159 [2024-07-14 10:44:53.923716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.159 [2024-07-14 10:44:53.923746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.159 qpair failed and we were unable to recover it. 00:36:09.159 [2024-07-14 10:44:53.923967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.159 [2024-07-14 10:44:53.923996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.159 qpair failed and we were unable to recover it. 00:36:09.159 [2024-07-14 10:44:53.924174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.159 [2024-07-14 10:44:53.924203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.159 qpair failed and we were unable to recover it. 00:36:09.159 [2024-07-14 10:44:53.924340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.159 [2024-07-14 10:44:53.924371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.159 qpair failed and we were unable to recover it. 00:36:09.159 [2024-07-14 10:44:53.924493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.159 [2024-07-14 10:44:53.924523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.159 qpair failed and we were unable to recover it. 00:36:09.159 [2024-07-14 10:44:53.924646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.159 [2024-07-14 10:44:53.924676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.159 qpair failed and we were unable to recover it. 00:36:09.159 [2024-07-14 10:44:53.924795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.159 10:44:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.159 [2024-07-14 10:44:53.924825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.159 qpair failed and we were unable to recover it. 00:36:09.159 [2024-07-14 10:44:53.924948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.159 [2024-07-14 10:44:53.924979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.159 qpair failed and we were unable to recover it. 
00:36:09.159 10:44:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:09.159 [2024-07-14 10:44:53.925154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.159 [2024-07-14 10:44:53.925186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.159 qpair failed and we were unable to recover it. 00:36:09.159 [2024-07-14 10:44:53.925399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.159 [2024-07-14 10:44:53.925430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.159 qpair failed and we were unable to recover it. 00:36:09.159 10:44:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.159 [2024-07-14 10:44:53.925539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.159 [2024-07-14 10:44:53.925569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.159 qpair failed and we were unable to recover it. 00:36:09.159 [2024-07-14 10:44:53.925711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.159 [2024-07-14 10:44:53.925741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.159 10:44:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:09.159 qpair failed and we were unable to recover it. 00:36:09.159 [2024-07-14 10:44:53.925956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.159 [2024-07-14 10:44:53.925995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:09.159 qpair failed and we were unable to recover it. 00:36:09.159 [2024-07-14 10:44:53.926112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.159 [2024-07-14 10:44:53.926143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:09.159 qpair failed and we were unable to recover it. 00:36:09.159 [2024-07-14 10:44:53.926276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.159 [2024-07-14 10:44:53.926308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:09.159 qpair failed and we were unable to recover it. 00:36:09.159 [2024-07-14 10:44:53.926436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.159 [2024-07-14 10:44:53.926466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:09.159 qpair failed and we were unable to recover it. 
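host/target_disconnect.sh@24 then attaches a namespace to the subsystem. The bare "Malloc0" earlier in this stream is presumably the bdev name echoed back by the malloc bdev creation RPC (its own trace is drowned out by the connect errors), and it is passed here as the namespace's backing bdev. A sketch of the direct form of the traced call:

  # Expose the Malloc0 bdev as a namespace of cnode1.
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0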
00:36:09.159 [2024-07-14 10:44:53.926578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.159 [2024-07-14 10:44:53.926607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:09.159 qpair failed and we were unable to recover it. 00:36:09.159 [2024-07-14 10:44:53.926800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.159 [2024-07-14 10:44:53.926830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:09.159 qpair failed and we were unable to recover it. 00:36:09.159 [2024-07-14 10:44:53.926954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.159 [2024-07-14 10:44:53.926983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:09.159 qpair failed and we were unable to recover it. 00:36:09.159 [2024-07-14 10:44:53.927165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.159 [2024-07-14 10:44:53.927195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe84000b90 with addr=10.0.0.2, port=4420 00:36:09.159 qpair failed and we were unable to recover it. 00:36:09.159 [2024-07-14 10:44:53.927315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.159 [2024-07-14 10:44:53.927348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.159 qpair failed and we were unable to recover it. 00:36:09.159 [2024-07-14 10:44:53.927472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.159 [2024-07-14 10:44:53.927501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.159 qpair failed and we were unable to recover it. 00:36:09.159 [2024-07-14 10:44:53.927638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.159 [2024-07-14 10:44:53.927668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.159 qpair failed and we were unable to recover it. 00:36:09.159 [2024-07-14 10:44:53.927851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.159 [2024-07-14 10:44:53.927880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.159 qpair failed and we were unable to recover it. 00:36:09.159 [2024-07-14 10:44:53.928004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.159 [2024-07-14 10:44:53.928033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.159 qpair failed and we were unable to recover it. 00:36:09.159 [2024-07-14 10:44:53.928233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.159 [2024-07-14 10:44:53.928268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.159 qpair failed and we were unable to recover it. 
00:36:09.159 [2024-07-14 10:44:53.928387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.159 [2024-07-14 10:44:53.928417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.159 qpair failed and we were unable to recover it. 00:36:09.159 [2024-07-14 10:44:53.928551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.159 [2024-07-14 10:44:53.928581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.159 qpair failed and we were unable to recover it. 00:36:09.159 [2024-07-14 10:44:53.928781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.159 [2024-07-14 10:44:53.928810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.159 qpair failed and we were unable to recover it. 00:36:09.159 [2024-07-14 10:44:53.928944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.159 [2024-07-14 10:44:53.928974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.159 qpair failed and we were unable to recover it. 00:36:09.159 [2024-07-14 10:44:53.929236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.159 [2024-07-14 10:44:53.929267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.159 qpair failed and we were unable to recover it. 00:36:09.159 [2024-07-14 10:44:53.929381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.159 [2024-07-14 10:44:53.929410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.159 qpair failed and we were unable to recover it. 00:36:09.159 [2024-07-14 10:44:53.929578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.159 [2024-07-14 10:44:53.929608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.159 qpair failed and we were unable to recover it. 00:36:09.159 [2024-07-14 10:44:53.929734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.159 [2024-07-14 10:44:53.929763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.159 qpair failed and we were unable to recover it. 00:36:09.159 [2024-07-14 10:44:53.929887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.159 [2024-07-14 10:44:53.929917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.159 qpair failed and we were unable to recover it. 00:36:09.159 [2024-07-14 10:44:53.930113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.159 [2024-07-14 10:44:53.930143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.159 qpair failed and we were unable to recover it. 
00:36:09.159 [2024-07-14 10:44:53.930273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.160 [2024-07-14 10:44:53.930305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.160 qpair failed and we were unable to recover it. 00:36:09.160 [2024-07-14 10:44:53.930481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.160 [2024-07-14 10:44:53.930511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.160 qpair failed and we were unable to recover it. 00:36:09.160 [2024-07-14 10:44:53.930656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.160 [2024-07-14 10:44:53.930685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.160 qpair failed and we were unable to recover it. 00:36:09.160 [2024-07-14 10:44:53.930874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.160 [2024-07-14 10:44:53.930905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.160 qpair failed and we were unable to recover it. 00:36:09.160 [2024-07-14 10:44:53.931021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.160 [2024-07-14 10:44:53.931050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.160 qpair failed and we were unable to recover it. 00:36:09.160 [2024-07-14 10:44:53.931174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.160 [2024-07-14 10:44:53.931203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.160 qpair failed and we were unable to recover it. 00:36:09.160 [2024-07-14 10:44:53.931410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.160 [2024-07-14 10:44:53.931441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.160 qpair failed and we were unable to recover it. 00:36:09.160 [2024-07-14 10:44:53.931563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.160 [2024-07-14 10:44:53.931592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.160 qpair failed and we were unable to recover it. 00:36:09.160 [2024-07-14 10:44:53.931721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.160 [2024-07-14 10:44:53.931750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.160 qpair failed and we were unable to recover it. 00:36:09.160 [2024-07-14 10:44:53.931934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.160 [2024-07-14 10:44:53.931963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.160 qpair failed and we were unable to recover it. 
00:36:09.160 [2024-07-14 10:44:53.932203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.160 [2024-07-14 10:44:53.932242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.160 qpair failed and we were unable to recover it. 00:36:09.160 [2024-07-14 10:44:53.932373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.160 [2024-07-14 10:44:53.932403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.160 qpair failed and we were unable to recover it. 00:36:09.160 [2024-07-14 10:44:53.932535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.160 [2024-07-14 10:44:53.932564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.160 qpair failed and we were unable to recover it. 00:36:09.160 [2024-07-14 10:44:53.932745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.160 [2024-07-14 10:44:53.932775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.160 qpair failed and we were unable to recover it. 00:36:09.160 10:44:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.160 [2024-07-14 10:44:53.932916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.160 [2024-07-14 10:44:53.932947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.160 qpair failed and we were unable to recover it. 00:36:09.160 [2024-07-14 10:44:53.933079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.160 [2024-07-14 10:44:53.933109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.160 qpair failed and we were unable to recover it. 00:36:09.160 10:44:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:09.160 [2024-07-14 10:44:53.933241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.160 [2024-07-14 10:44:53.933273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.160 qpair failed and we were unable to recover it. 00:36:09.160 [2024-07-14 10:44:53.933392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.160 [2024-07-14 10:44:53.933422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.160 qpair failed and we were unable to recover it. 
00:36:09.160 10:44:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.160 [2024-07-14 10:44:53.933613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.160 [2024-07-14 10:44:53.933644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.160 qpair failed and we were unable to recover it. 00:36:09.160 10:44:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:09.160 [2024-07-14 10:44:53.933892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.160 [2024-07-14 10:44:53.933923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.160 qpair failed and we were unable to recover it. 00:36:09.160 [2024-07-14 10:44:53.934109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.160 [2024-07-14 10:44:53.934139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.160 qpair failed and we were unable to recover it. 00:36:09.160 [2024-07-14 10:44:53.934333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.160 [2024-07-14 10:44:53.934364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.160 qpair failed and we were unable to recover it. 00:36:09.160 [2024-07-14 10:44:53.934485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.160 [2024-07-14 10:44:53.934515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.160 qpair failed and we were unable to recover it. 00:36:09.160 [2024-07-14 10:44:53.934709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.160 [2024-07-14 10:44:53.934739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.160 qpair failed and we were unable to recover it. 00:36:09.160 [2024-07-14 10:44:53.934852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.160 [2024-07-14 10:44:53.934883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.160 qpair failed and we were unable to recover it. 00:36:09.160 [2024-07-14 10:44:53.935025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.160 [2024-07-14 10:44:53.935055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.160 qpair failed and we were unable to recover it. 00:36:09.160 [2024-07-14 10:44:53.935167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.160 [2024-07-14 10:44:53.935197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.160 qpair failed and we were unable to recover it. 
00:36:09.160 [2024-07-14 10:44:53.935404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.160 [2024-07-14 10:44:53.935435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.160 qpair failed and we were unable to recover it. 00:36:09.160 [2024-07-14 10:44:53.935706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.160 [2024-07-14 10:44:53.935736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.160 qpair failed and we were unable to recover it. 00:36:09.160 [2024-07-14 10:44:53.935859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.160 [2024-07-14 10:44:53.935889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.160 qpair failed and we were unable to recover it. 00:36:09.160 [2024-07-14 10:44:53.936018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.160 [2024-07-14 10:44:53.936047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.160 qpair failed and we were unable to recover it. 00:36:09.160 [2024-07-14 10:44:53.936241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.160 [2024-07-14 10:44:53.936272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe7c000b90 with addr=10.0.0.2, port=4420 00:36:09.160 qpair failed and we were unable to recover it. 00:36:09.160 [2024-07-14 10:44:53.936308] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:09.160 [2024-07-14 10:44:53.938734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.160 [2024-07-14 10:44:53.938887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.160 [2024-07-14 10:44:53.938933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.160 [2024-07-14 10:44:53.938956] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.160 [2024-07-14 10:44:53.938977] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.160 [2024-07-14 10:44:53.939026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.160 qpair failed and we were unable to recover it. 
00:36:09.160 10:44:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.160 10:44:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:09.160 10:44:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.160 10:44:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:09.160 [2024-07-14 10:44:53.948669] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.160 [2024-07-14 10:44:53.948778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.160 [2024-07-14 10:44:53.948809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.160 [2024-07-14 10:44:53.948824] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.160 [2024-07-14 10:44:53.948837] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.160 10:44:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.161 [2024-07-14 10:44:53.948866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.161 qpair failed and we were unable to recover it. 00:36:09.161 10:44:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2629386 00:36:09.161 [2024-07-14 10:44:53.958623] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.161 [2024-07-14 10:44:53.958691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.161 [2024-07-14 10:44:53.958711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.161 [2024-07-14 10:44:53.958721] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.161 [2024-07-14 10:44:53.958729] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.161 [2024-07-14 10:44:53.958749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.161 qpair failed and we were unable to recover it. 
00:36:09.161 [2024-07-14 10:44:53.968605] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.161 [2024-07-14 10:44:53.968667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.161 [2024-07-14 10:44:53.968681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.161 [2024-07-14 10:44:53.968687] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.161 [2024-07-14 10:44:53.968693] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.161 [2024-07-14 10:44:53.968707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.161 qpair failed and we were unable to recover it. 00:36:09.161 [2024-07-14 10:44:53.978576] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.161 [2024-07-14 10:44:53.978634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.161 [2024-07-14 10:44:53.978648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.161 [2024-07-14 10:44:53.978655] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.161 [2024-07-14 10:44:53.978660] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.161 [2024-07-14 10:44:53.978675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.161 qpair failed and we were unable to recover it. 00:36:09.161 [2024-07-14 10:44:53.988652] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.161 [2024-07-14 10:44:53.988705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.161 [2024-07-14 10:44:53.988719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.161 [2024-07-14 10:44:53.988726] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.161 [2024-07-14 10:44:53.988731] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.161 [2024-07-14 10:44:53.988746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.161 qpair failed and we were unable to recover it. 
00:36:09.161 [2024-07-14 10:44:53.998604] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.161 [2024-07-14 10:44:53.998666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.161 [2024-07-14 10:44:53.998681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.161 [2024-07-14 10:44:53.998691] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.161 [2024-07-14 10:44:53.998697] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.161 [2024-07-14 10:44:53.998711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.161 qpair failed and we were unable to recover it. 00:36:09.161 [2024-07-14 10:44:54.008701] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.161 [2024-07-14 10:44:54.008762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.161 [2024-07-14 10:44:54.008777] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.161 [2024-07-14 10:44:54.008783] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.161 [2024-07-14 10:44:54.008789] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.161 [2024-07-14 10:44:54.008804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.161 qpair failed and we were unable to recover it. 00:36:09.161 [2024-07-14 10:44:54.018661] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.161 [2024-07-14 10:44:54.018721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.161 [2024-07-14 10:44:54.018736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.161 [2024-07-14 10:44:54.018743] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.161 [2024-07-14 10:44:54.018749] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.161 [2024-07-14 10:44:54.018763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.161 qpair failed and we were unable to recover it. 
00:36:09.161 [2024-07-14 10:44:54.028759] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.161 [2024-07-14 10:44:54.028857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.161 [2024-07-14 10:44:54.028873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.161 [2024-07-14 10:44:54.028880] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.161 [2024-07-14 10:44:54.028886] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.161 [2024-07-14 10:44:54.028900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.161 qpair failed and we were unable to recover it. 00:36:09.161 [2024-07-14 10:44:54.038754] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.161 [2024-07-14 10:44:54.038811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.161 [2024-07-14 10:44:54.038826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.161 [2024-07-14 10:44:54.038833] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.161 [2024-07-14 10:44:54.038838] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.161 [2024-07-14 10:44:54.038852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.161 qpair failed and we were unable to recover it. 00:36:09.161 [2024-07-14 10:44:54.048790] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.161 [2024-07-14 10:44:54.048887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.161 [2024-07-14 10:44:54.048902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.161 [2024-07-14 10:44:54.048908] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.161 [2024-07-14 10:44:54.048914] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.161 [2024-07-14 10:44:54.048929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.161 qpair failed and we were unable to recover it. 
00:36:09.161 [2024-07-14 10:44:54.058831] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.161 [2024-07-14 10:44:54.058909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.161 [2024-07-14 10:44:54.058923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.161 [2024-07-14 10:44:54.058929] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.161 [2024-07-14 10:44:54.058935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.161 [2024-07-14 10:44:54.058949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.161 qpair failed and we were unable to recover it. 00:36:09.161 [2024-07-14 10:44:54.068811] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.161 [2024-07-14 10:44:54.068889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.161 [2024-07-14 10:44:54.068903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.161 [2024-07-14 10:44:54.068909] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.161 [2024-07-14 10:44:54.068915] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.161 [2024-07-14 10:44:54.068929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.161 qpair failed and we were unable to recover it. 00:36:09.161 [2024-07-14 10:44:54.078872] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.161 [2024-07-14 10:44:54.078925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.161 [2024-07-14 10:44:54.078939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.161 [2024-07-14 10:44:54.078945] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.161 [2024-07-14 10:44:54.078951] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.161 [2024-07-14 10:44:54.078965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.161 qpair failed and we were unable to recover it. 
00:36:09.161 [2024-07-14 10:44:54.088931] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.161 [2024-07-14 10:44:54.089005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.161 [2024-07-14 10:44:54.089022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.161 [2024-07-14 10:44:54.089028] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.161 [2024-07-14 10:44:54.089034] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.161 [2024-07-14 10:44:54.089048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.161 qpair failed and we were unable to recover it. 00:36:09.161 [2024-07-14 10:44:54.098945] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.162 [2024-07-14 10:44:54.099002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.162 [2024-07-14 10:44:54.099016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.162 [2024-07-14 10:44:54.099023] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.162 [2024-07-14 10:44:54.099028] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.162 [2024-07-14 10:44:54.099042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.162 qpair failed and we were unable to recover it. 00:36:09.420 [2024-07-14 10:44:54.108990] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.420 [2024-07-14 10:44:54.109048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.420 [2024-07-14 10:44:54.109063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.420 [2024-07-14 10:44:54.109070] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.420 [2024-07-14 10:44:54.109076] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.420 [2024-07-14 10:44:54.109089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.420 qpair failed and we were unable to recover it. 
00:36:09.420 [2024-07-14 10:44:54.118999] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.420 [2024-07-14 10:44:54.119053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.420 [2024-07-14 10:44:54.119067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.420 [2024-07-14 10:44:54.119074] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.420 [2024-07-14 10:44:54.119079] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.420 [2024-07-14 10:44:54.119093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.420 qpair failed and we were unable to recover it. 00:36:09.420 [2024-07-14 10:44:54.129036] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.420 [2024-07-14 10:44:54.129113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.420 [2024-07-14 10:44:54.129128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.420 [2024-07-14 10:44:54.129135] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.420 [2024-07-14 10:44:54.129140] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.420 [2024-07-14 10:44:54.129154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.420 qpair failed and we were unable to recover it. 00:36:09.420 [2024-07-14 10:44:54.139195] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.420 [2024-07-14 10:44:54.139263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.420 [2024-07-14 10:44:54.139277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.420 [2024-07-14 10:44:54.139284] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.420 [2024-07-14 10:44:54.139290] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.420 [2024-07-14 10:44:54.139304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.420 qpair failed and we were unable to recover it. 
00:36:09.420 [2024-07-14 10:44:54.149172] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.420 [2024-07-14 10:44:54.149238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.420 [2024-07-14 10:44:54.149253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.420 [2024-07-14 10:44:54.149259] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.420 [2024-07-14 10:44:54.149265] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.420 [2024-07-14 10:44:54.149279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.420 qpair failed and we were unable to recover it. 00:36:09.420 [2024-07-14 10:44:54.159142] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.420 [2024-07-14 10:44:54.159198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.420 [2024-07-14 10:44:54.159212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.420 [2024-07-14 10:44:54.159219] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.420 [2024-07-14 10:44:54.159228] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.421 [2024-07-14 10:44:54.159242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.421 qpair failed and we were unable to recover it. 00:36:09.421 [2024-07-14 10:44:54.169112] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.421 [2024-07-14 10:44:54.169171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.421 [2024-07-14 10:44:54.169185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.421 [2024-07-14 10:44:54.169192] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.421 [2024-07-14 10:44:54.169197] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.421 [2024-07-14 10:44:54.169211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.421 qpair failed and we were unable to recover it. 
00:36:09.421 [2024-07-14 10:44:54.179177] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.421 [2024-07-14 10:44:54.179250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.421 [2024-07-14 10:44:54.179267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.421 [2024-07-14 10:44:54.179273] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.421 [2024-07-14 10:44:54.179279] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.421 [2024-07-14 10:44:54.179293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.421 qpair failed and we were unable to recover it. 00:36:09.421 [2024-07-14 10:44:54.189207] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.421 [2024-07-14 10:44:54.189271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.421 [2024-07-14 10:44:54.189285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.421 [2024-07-14 10:44:54.189291] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.421 [2024-07-14 10:44:54.189297] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.421 [2024-07-14 10:44:54.189310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.421 qpair failed and we were unable to recover it. 00:36:09.421 [2024-07-14 10:44:54.199248] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.421 [2024-07-14 10:44:54.199297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.421 [2024-07-14 10:44:54.199311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.421 [2024-07-14 10:44:54.199317] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.421 [2024-07-14 10:44:54.199322] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.421 [2024-07-14 10:44:54.199336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.421 qpair failed and we were unable to recover it. 
00:36:09.421 [2024-07-14 10:44:54.209255] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.421 [2024-07-14 10:44:54.209311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.421 [2024-07-14 10:44:54.209325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.421 [2024-07-14 10:44:54.209332] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.421 [2024-07-14 10:44:54.209337] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.421 [2024-07-14 10:44:54.209351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.421 qpair failed and we were unable to recover it. 00:36:09.421 [2024-07-14 10:44:54.219295] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.421 [2024-07-14 10:44:54.219356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.421 [2024-07-14 10:44:54.219370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.421 [2024-07-14 10:44:54.219376] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.421 [2024-07-14 10:44:54.219382] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.421 [2024-07-14 10:44:54.219398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.421 qpair failed and we were unable to recover it. 00:36:09.421 [2024-07-14 10:44:54.229340] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.421 [2024-07-14 10:44:54.229397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.421 [2024-07-14 10:44:54.229411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.421 [2024-07-14 10:44:54.229417] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.421 [2024-07-14 10:44:54.229423] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.421 [2024-07-14 10:44:54.229437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.421 qpair failed and we were unable to recover it. 
00:36:09.421 [2024-07-14 10:44:54.239369] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.421 [2024-07-14 10:44:54.239423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.421 [2024-07-14 10:44:54.239436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.421 [2024-07-14 10:44:54.239442] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.421 [2024-07-14 10:44:54.239448] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.421 [2024-07-14 10:44:54.239462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.421 qpair failed and we were unable to recover it. 00:36:09.421 [2024-07-14 10:44:54.249372] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.421 [2024-07-14 10:44:54.249432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.421 [2024-07-14 10:44:54.249447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.421 [2024-07-14 10:44:54.249453] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.421 [2024-07-14 10:44:54.249459] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.421 [2024-07-14 10:44:54.249472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.421 qpair failed and we were unable to recover it. 00:36:09.421 [2024-07-14 10:44:54.259397] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.421 [2024-07-14 10:44:54.259464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.421 [2024-07-14 10:44:54.259479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.421 [2024-07-14 10:44:54.259486] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.421 [2024-07-14 10:44:54.259492] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.421 [2024-07-14 10:44:54.259507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.421 qpair failed and we were unable to recover it. 
00:36:09.421 [2024-07-14 10:44:54.269465] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.421 [2024-07-14 10:44:54.269520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.421 [2024-07-14 10:44:54.269538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.421 [2024-07-14 10:44:54.269544] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.421 [2024-07-14 10:44:54.269550] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.421 [2024-07-14 10:44:54.269564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.421 qpair failed and we were unable to recover it. 00:36:09.421 [2024-07-14 10:44:54.279459] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.421 [2024-07-14 10:44:54.279509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.421 [2024-07-14 10:44:54.279523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.421 [2024-07-14 10:44:54.279529] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.421 [2024-07-14 10:44:54.279535] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.421 [2024-07-14 10:44:54.279548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.421 qpair failed and we were unable to recover it. 00:36:09.421 [2024-07-14 10:44:54.289492] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.421 [2024-07-14 10:44:54.289546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.421 [2024-07-14 10:44:54.289560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.421 [2024-07-14 10:44:54.289566] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.421 [2024-07-14 10:44:54.289572] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.421 [2024-07-14 10:44:54.289585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.421 qpair failed and we were unable to recover it. 
00:36:09.421 [2024-07-14 10:44:54.299516] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.421 [2024-07-14 10:44:54.299587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.421 [2024-07-14 10:44:54.299601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.421 [2024-07-14 10:44:54.299607] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.421 [2024-07-14 10:44:54.299612] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.421 [2024-07-14 10:44:54.299626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.421 qpair failed and we were unable to recover it. 00:36:09.421 [2024-07-14 10:44:54.309481] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.422 [2024-07-14 10:44:54.309537] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.422 [2024-07-14 10:44:54.309552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.422 [2024-07-14 10:44:54.309560] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.422 [2024-07-14 10:44:54.309570] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.422 [2024-07-14 10:44:54.309587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.422 qpair failed and we were unable to recover it. 00:36:09.422 [2024-07-14 10:44:54.319575] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.422 [2024-07-14 10:44:54.319625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.422 [2024-07-14 10:44:54.319639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.422 [2024-07-14 10:44:54.319645] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.422 [2024-07-14 10:44:54.319651] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.422 [2024-07-14 10:44:54.319665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.422 qpair failed and we were unable to recover it. 
00:36:09.422 [2024-07-14 10:44:54.329649] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.422 [2024-07-14 10:44:54.329730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.422 [2024-07-14 10:44:54.329744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.422 [2024-07-14 10:44:54.329751] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.422 [2024-07-14 10:44:54.329756] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.422 [2024-07-14 10:44:54.329770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.422 qpair failed and we were unable to recover it. 00:36:09.422 [2024-07-14 10:44:54.339564] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.422 [2024-07-14 10:44:54.339625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.422 [2024-07-14 10:44:54.339639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.422 [2024-07-14 10:44:54.339646] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.422 [2024-07-14 10:44:54.339652] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.422 [2024-07-14 10:44:54.339666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.422 qpair failed and we were unable to recover it. 00:36:09.422 [2024-07-14 10:44:54.349688] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.422 [2024-07-14 10:44:54.349752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.422 [2024-07-14 10:44:54.349766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.422 [2024-07-14 10:44:54.349772] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.422 [2024-07-14 10:44:54.349778] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.422 [2024-07-14 10:44:54.349792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.422 qpair failed and we were unable to recover it. 
00:36:09.422 [2024-07-14 10:44:54.359698] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.422 [2024-07-14 10:44:54.359756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.422 [2024-07-14 10:44:54.359770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.422 [2024-07-14 10:44:54.359776] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.422 [2024-07-14 10:44:54.359782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.422 [2024-07-14 10:44:54.359795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.422 qpair failed and we were unable to recover it. 00:36:09.422 [2024-07-14 10:44:54.369731] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.422 [2024-07-14 10:44:54.369788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.422 [2024-07-14 10:44:54.369803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.422 [2024-07-14 10:44:54.369809] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.422 [2024-07-14 10:44:54.369815] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.422 [2024-07-14 10:44:54.369829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.422 qpair failed and we were unable to recover it. 00:36:09.422 [2024-07-14 10:44:54.379747] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.422 [2024-07-14 10:44:54.379798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.422 [2024-07-14 10:44:54.379812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.422 [2024-07-14 10:44:54.379818] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.422 [2024-07-14 10:44:54.379824] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.422 [2024-07-14 10:44:54.379838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.422 qpair failed and we were unable to recover it. 
00:36:09.422 [2024-07-14 10:44:54.389766] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.422 [2024-07-14 10:44:54.389838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.422 [2024-07-14 10:44:54.389852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.422 [2024-07-14 10:44:54.389858] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.422 [2024-07-14 10:44:54.389864] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.422 [2024-07-14 10:44:54.389877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.422 qpair failed and we were unable to recover it. 00:36:09.681 [2024-07-14 10:44:54.399809] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.681 [2024-07-14 10:44:54.399864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.681 [2024-07-14 10:44:54.399878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.681 [2024-07-14 10:44:54.399888] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.681 [2024-07-14 10:44:54.399894] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.681 [2024-07-14 10:44:54.399908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.681 qpair failed and we were unable to recover it. 00:36:09.681 [2024-07-14 10:44:54.409766] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.681 [2024-07-14 10:44:54.409822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.681 [2024-07-14 10:44:54.409837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.681 [2024-07-14 10:44:54.409844] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.681 [2024-07-14 10:44:54.409849] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.681 [2024-07-14 10:44:54.409863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.681 qpair failed and we were unable to recover it. 
00:36:09.681 [2024-07-14 10:44:54.419850] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.681 [2024-07-14 10:44:54.419905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.681 [2024-07-14 10:44:54.419919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.681 [2024-07-14 10:44:54.419925] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.681 [2024-07-14 10:44:54.419931] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.681 [2024-07-14 10:44:54.419945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.681 qpair failed and we were unable to recover it. 00:36:09.681 [2024-07-14 10:44:54.429886] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.682 [2024-07-14 10:44:54.429939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.682 [2024-07-14 10:44:54.429953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.682 [2024-07-14 10:44:54.429959] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.682 [2024-07-14 10:44:54.429965] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.682 [2024-07-14 10:44:54.429978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.682 qpair failed and we were unable to recover it. 00:36:09.682 [2024-07-14 10:44:54.439916] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.682 [2024-07-14 10:44:54.439994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.682 [2024-07-14 10:44:54.440008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.682 [2024-07-14 10:44:54.440014] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.682 [2024-07-14 10:44:54.440020] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.682 [2024-07-14 10:44:54.440034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.682 qpair failed and we were unable to recover it. 
00:36:09.682 [2024-07-14 10:44:54.449953] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.682 [2024-07-14 10:44:54.450006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.682 [2024-07-14 10:44:54.450019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.682 [2024-07-14 10:44:54.450025] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.682 [2024-07-14 10:44:54.450032] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.682 [2024-07-14 10:44:54.450045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.682 qpair failed and we were unable to recover it. 00:36:09.682 [2024-07-14 10:44:54.459971] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.682 [2024-07-14 10:44:54.460024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.682 [2024-07-14 10:44:54.460038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.682 [2024-07-14 10:44:54.460044] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.682 [2024-07-14 10:44:54.460050] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.682 [2024-07-14 10:44:54.460063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.682 qpair failed and we were unable to recover it. 00:36:09.682 [2024-07-14 10:44:54.470050] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.682 [2024-07-14 10:44:54.470114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.682 [2024-07-14 10:44:54.470128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.682 [2024-07-14 10:44:54.470134] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.682 [2024-07-14 10:44:54.470140] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.682 [2024-07-14 10:44:54.470153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.682 qpair failed and we were unable to recover it. 
00:36:09.682 [2024-07-14 10:44:54.480015] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.682 [2024-07-14 10:44:54.480071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.682 [2024-07-14 10:44:54.480085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.682 [2024-07-14 10:44:54.480092] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.682 [2024-07-14 10:44:54.480097] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.682 [2024-07-14 10:44:54.480111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.682 qpair failed and we were unable to recover it. 00:36:09.682 [2024-07-14 10:44:54.490058] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.682 [2024-07-14 10:44:54.490114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.682 [2024-07-14 10:44:54.490128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.682 [2024-07-14 10:44:54.490138] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.682 [2024-07-14 10:44:54.490143] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.682 [2024-07-14 10:44:54.490157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.682 qpair failed and we were unable to recover it. 00:36:09.682 [2024-07-14 10:44:54.500073] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.682 [2024-07-14 10:44:54.500127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.682 [2024-07-14 10:44:54.500142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.682 [2024-07-14 10:44:54.500148] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.682 [2024-07-14 10:44:54.500154] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.682 [2024-07-14 10:44:54.500168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.682 qpair failed and we were unable to recover it. 
00:36:09.682 [2024-07-14 10:44:54.510101] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.682 [2024-07-14 10:44:54.510156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.682 [2024-07-14 10:44:54.510169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.682 [2024-07-14 10:44:54.510176] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.682 [2024-07-14 10:44:54.510182] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.682 [2024-07-14 10:44:54.510196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.682 qpair failed and we were unable to recover it. 00:36:09.682 [2024-07-14 10:44:54.520139] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.682 [2024-07-14 10:44:54.520192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.682 [2024-07-14 10:44:54.520206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.682 [2024-07-14 10:44:54.520212] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.682 [2024-07-14 10:44:54.520218] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.682 [2024-07-14 10:44:54.520235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.682 qpair failed and we were unable to recover it. 00:36:09.682 [2024-07-14 10:44:54.530180] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.682 [2024-07-14 10:44:54.530244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.682 [2024-07-14 10:44:54.530258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.682 [2024-07-14 10:44:54.530264] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.682 [2024-07-14 10:44:54.530270] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.682 [2024-07-14 10:44:54.530283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.682 qpair failed and we were unable to recover it. 
00:36:09.682 [2024-07-14 10:44:54.540277] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.682 [2024-07-14 10:44:54.540359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.682 [2024-07-14 10:44:54.540372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.682 [2024-07-14 10:44:54.540379] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.682 [2024-07-14 10:44:54.540384] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.682 [2024-07-14 10:44:54.540398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.682 qpair failed and we were unable to recover it. 00:36:09.682 [2024-07-14 10:44:54.550238] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.682 [2024-07-14 10:44:54.550293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.682 [2024-07-14 10:44:54.550307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.682 [2024-07-14 10:44:54.550313] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.682 [2024-07-14 10:44:54.550319] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.682 [2024-07-14 10:44:54.550333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.682 qpair failed and we were unable to recover it. 00:36:09.682 [2024-07-14 10:44:54.560286] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.682 [2024-07-14 10:44:54.560340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.682 [2024-07-14 10:44:54.560354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.682 [2024-07-14 10:44:54.560360] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.682 [2024-07-14 10:44:54.560366] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.682 [2024-07-14 10:44:54.560379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.682 qpair failed and we were unable to recover it. 
00:36:09.682 [2024-07-14 10:44:54.570292] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.682 [2024-07-14 10:44:54.570349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.682 [2024-07-14 10:44:54.570363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.683 [2024-07-14 10:44:54.570370] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.683 [2024-07-14 10:44:54.570375] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.683 [2024-07-14 10:44:54.570389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.683 qpair failed and we were unable to recover it. 00:36:09.683 [2024-07-14 10:44:54.580323] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.683 [2024-07-14 10:44:54.580380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.683 [2024-07-14 10:44:54.580398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.683 [2024-07-14 10:44:54.580404] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.683 [2024-07-14 10:44:54.580410] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.683 [2024-07-14 10:44:54.580424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.683 qpair failed and we were unable to recover it. 00:36:09.683 [2024-07-14 10:44:54.590351] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.683 [2024-07-14 10:44:54.590405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.683 [2024-07-14 10:44:54.590420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.683 [2024-07-14 10:44:54.590426] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.683 [2024-07-14 10:44:54.590432] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.683 [2024-07-14 10:44:54.590445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.683 qpair failed and we were unable to recover it. 
00:36:09.683 [2024-07-14 10:44:54.600403] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.683 [2024-07-14 10:44:54.600456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.683 [2024-07-14 10:44:54.600470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.683 [2024-07-14 10:44:54.600476] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.683 [2024-07-14 10:44:54.600482] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.683 [2024-07-14 10:44:54.600496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.683 qpair failed and we were unable to recover it. 00:36:09.683 [2024-07-14 10:44:54.610434] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.683 [2024-07-14 10:44:54.610508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.683 [2024-07-14 10:44:54.610523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.683 [2024-07-14 10:44:54.610530] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.683 [2024-07-14 10:44:54.610535] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.683 [2024-07-14 10:44:54.610550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.683 qpair failed and we were unable to recover it. 00:36:09.683 [2024-07-14 10:44:54.620440] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.683 [2024-07-14 10:44:54.620494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.683 [2024-07-14 10:44:54.620508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.683 [2024-07-14 10:44:54.620515] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.683 [2024-07-14 10:44:54.620520] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.683 [2024-07-14 10:44:54.620541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.683 qpair failed and we were unable to recover it. 
00:36:09.683 [2024-07-14 10:44:54.630465] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.683 [2024-07-14 10:44:54.630517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.683 [2024-07-14 10:44:54.630532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.683 [2024-07-14 10:44:54.630538] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.683 [2024-07-14 10:44:54.630544] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.683 [2024-07-14 10:44:54.630558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.683 qpair failed and we were unable to recover it. 00:36:09.683 [2024-07-14 10:44:54.640537] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.683 [2024-07-14 10:44:54.640595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.683 [2024-07-14 10:44:54.640608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.683 [2024-07-14 10:44:54.640615] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.683 [2024-07-14 10:44:54.640621] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.683 [2024-07-14 10:44:54.640635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.683 qpair failed and we were unable to recover it. 00:36:09.683 [2024-07-14 10:44:54.650524] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.683 [2024-07-14 10:44:54.650581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.683 [2024-07-14 10:44:54.650595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.683 [2024-07-14 10:44:54.650601] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.683 [2024-07-14 10:44:54.650607] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.683 [2024-07-14 10:44:54.650621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.683 qpair failed and we were unable to recover it. 
00:36:09.943 [2024-07-14 10:44:54.660508] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.943 [2024-07-14 10:44:54.660562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.943 [2024-07-14 10:44:54.660576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.943 [2024-07-14 10:44:54.660583] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.943 [2024-07-14 10:44:54.660589] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.943 [2024-07-14 10:44:54.660603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.943 qpair failed and we were unable to recover it. 00:36:09.943 [2024-07-14 10:44:54.670535] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.943 [2024-07-14 10:44:54.670590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.943 [2024-07-14 10:44:54.670608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.943 [2024-07-14 10:44:54.670615] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.943 [2024-07-14 10:44:54.670621] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.943 [2024-07-14 10:44:54.670634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.943 qpair failed and we were unable to recover it. 00:36:09.943 [2024-07-14 10:44:54.680585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.943 [2024-07-14 10:44:54.680641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.943 [2024-07-14 10:44:54.680655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.943 [2024-07-14 10:44:54.680662] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.943 [2024-07-14 10:44:54.680668] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.943 [2024-07-14 10:44:54.680681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.943 qpair failed and we were unable to recover it. 
00:36:09.943 [2024-07-14 10:44:54.690630] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.943 [2024-07-14 10:44:54.690681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.943 [2024-07-14 10:44:54.690695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.943 [2024-07-14 10:44:54.690701] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.943 [2024-07-14 10:44:54.690707] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.943 [2024-07-14 10:44:54.690721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.943 qpair failed and we were unable to recover it. 00:36:09.943 [2024-07-14 10:44:54.700693] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.943 [2024-07-14 10:44:54.700751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.943 [2024-07-14 10:44:54.700766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.943 [2024-07-14 10:44:54.700772] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.943 [2024-07-14 10:44:54.700778] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.943 [2024-07-14 10:44:54.700792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.943 qpair failed and we were unable to recover it. 00:36:09.943 [2024-07-14 10:44:54.710713] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.943 [2024-07-14 10:44:54.710779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.943 [2024-07-14 10:44:54.710793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.943 [2024-07-14 10:44:54.710799] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.943 [2024-07-14 10:44:54.710808] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.943 [2024-07-14 10:44:54.710821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.943 qpair failed and we were unable to recover it. 
00:36:09.943 [2024-07-14 10:44:54.720673] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.943 [2024-07-14 10:44:54.720728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.943 [2024-07-14 10:44:54.720742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.943 [2024-07-14 10:44:54.720748] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.943 [2024-07-14 10:44:54.720754] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.943 [2024-07-14 10:44:54.720768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.943 qpair failed and we were unable to recover it. 00:36:09.943 [2024-07-14 10:44:54.730791] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.943 [2024-07-14 10:44:54.730876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.943 [2024-07-14 10:44:54.730890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.943 [2024-07-14 10:44:54.730896] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.943 [2024-07-14 10:44:54.730902] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.943 [2024-07-14 10:44:54.730916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.943 qpair failed and we were unable to recover it. 00:36:09.943 [2024-07-14 10:44:54.740719] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.943 [2024-07-14 10:44:54.740778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.943 [2024-07-14 10:44:54.740792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.943 [2024-07-14 10:44:54.740799] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.943 [2024-07-14 10:44:54.740804] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.943 [2024-07-14 10:44:54.740818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.943 qpair failed and we were unable to recover it. 
00:36:09.943 [2024-07-14 10:44:54.750810] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.943 [2024-07-14 10:44:54.750864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.943 [2024-07-14 10:44:54.750878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.943 [2024-07-14 10:44:54.750884] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.943 [2024-07-14 10:44:54.750890] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.943 [2024-07-14 10:44:54.750904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.943 qpair failed and we were unable to recover it. 00:36:09.943 [2024-07-14 10:44:54.760848] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.943 [2024-07-14 10:44:54.760907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.943 [2024-07-14 10:44:54.760921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.943 [2024-07-14 10:44:54.760928] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.943 [2024-07-14 10:44:54.760933] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.943 [2024-07-14 10:44:54.760947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.943 qpair failed and we were unable to recover it. 00:36:09.943 [2024-07-14 10:44:54.770899] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.943 [2024-07-14 10:44:54.770982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.943 [2024-07-14 10:44:54.770996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.943 [2024-07-14 10:44:54.771002] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.943 [2024-07-14 10:44:54.771008] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.943 [2024-07-14 10:44:54.771021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.943 qpair failed and we were unable to recover it. 
00:36:09.943 [2024-07-14 10:44:54.780948] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.943 [2024-07-14 10:44:54.781007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.943 [2024-07-14 10:44:54.781021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.943 [2024-07-14 10:44:54.781028] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.943 [2024-07-14 10:44:54.781033] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.943 [2024-07-14 10:44:54.781047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.943 qpair failed and we were unable to recover it. 00:36:09.943 [2024-07-14 10:44:54.790922] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.943 [2024-07-14 10:44:54.790976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.943 [2024-07-14 10:44:54.790989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.943 [2024-07-14 10:44:54.790996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.943 [2024-07-14 10:44:54.791002] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.943 [2024-07-14 10:44:54.791016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.943 qpair failed and we were unable to recover it. 00:36:09.944 [2024-07-14 10:44:54.801004] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.944 [2024-07-14 10:44:54.801065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.944 [2024-07-14 10:44:54.801078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.944 [2024-07-14 10:44:54.801085] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.944 [2024-07-14 10:44:54.801093] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.944 [2024-07-14 10:44:54.801106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.944 qpair failed and we were unable to recover it. 
00:36:09.944 [2024-07-14 10:44:54.810983] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.944 [2024-07-14 10:44:54.811058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.944 [2024-07-14 10:44:54.811073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.944 [2024-07-14 10:44:54.811080] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.944 [2024-07-14 10:44:54.811085] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.944 [2024-07-14 10:44:54.811099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.944 qpair failed and we were unable to recover it. 00:36:09.944 [2024-07-14 10:44:54.821007] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.944 [2024-07-14 10:44:54.821061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.944 [2024-07-14 10:44:54.821075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.944 [2024-07-14 10:44:54.821081] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.944 [2024-07-14 10:44:54.821087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.944 [2024-07-14 10:44:54.821101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.944 qpair failed and we were unable to recover it. 00:36:09.944 [2024-07-14 10:44:54.831099] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.944 [2024-07-14 10:44:54.831156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.944 [2024-07-14 10:44:54.831170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.944 [2024-07-14 10:44:54.831177] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.944 [2024-07-14 10:44:54.831182] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.944 [2024-07-14 10:44:54.831195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.944 qpair failed and we were unable to recover it. 
00:36:09.944 [2024-07-14 10:44:54.841084] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.944 [2024-07-14 10:44:54.841140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.944 [2024-07-14 10:44:54.841154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.944 [2024-07-14 10:44:54.841160] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.944 [2024-07-14 10:44:54.841165] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.944 [2024-07-14 10:44:54.841179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.944 qpair failed and we were unable to recover it. 00:36:09.944 [2024-07-14 10:44:54.851110] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.944 [2024-07-14 10:44:54.851166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.944 [2024-07-14 10:44:54.851180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.944 [2024-07-14 10:44:54.851186] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.944 [2024-07-14 10:44:54.851192] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.944 [2024-07-14 10:44:54.851206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.944 qpair failed and we were unable to recover it. 00:36:09.944 [2024-07-14 10:44:54.861169] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.944 [2024-07-14 10:44:54.861227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.944 [2024-07-14 10:44:54.861241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.944 [2024-07-14 10:44:54.861247] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.944 [2024-07-14 10:44:54.861253] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.944 [2024-07-14 10:44:54.861267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.944 qpair failed and we were unable to recover it. 
00:36:09.944 [2024-07-14 10:44:54.871175] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.944 [2024-07-14 10:44:54.871233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.944 [2024-07-14 10:44:54.871247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.944 [2024-07-14 10:44:54.871254] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.944 [2024-07-14 10:44:54.871259] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.944 [2024-07-14 10:44:54.871273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.944 qpair failed and we were unable to recover it. 00:36:09.944 [2024-07-14 10:44:54.881179] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.944 [2024-07-14 10:44:54.881256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.944 [2024-07-14 10:44:54.881270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.944 [2024-07-14 10:44:54.881276] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.944 [2024-07-14 10:44:54.881282] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.944 [2024-07-14 10:44:54.881295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.944 qpair failed and we were unable to recover it. 00:36:09.944 [2024-07-14 10:44:54.891220] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.944 [2024-07-14 10:44:54.891275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.944 [2024-07-14 10:44:54.891289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.944 [2024-07-14 10:44:54.891299] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.944 [2024-07-14 10:44:54.891304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.944 [2024-07-14 10:44:54.891318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.944 qpair failed and we were unable to recover it. 
00:36:09.944 [2024-07-14 10:44:54.901242] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.944 [2024-07-14 10:44:54.901300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.944 [2024-07-14 10:44:54.901314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.944 [2024-07-14 10:44:54.901320] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.944 [2024-07-14 10:44:54.901326] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.944 [2024-07-14 10:44:54.901340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.944 qpair failed and we were unable to recover it. 00:36:09.944 [2024-07-14 10:44:54.911277] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.944 [2024-07-14 10:44:54.911331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.944 [2024-07-14 10:44:54.911345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.944 [2024-07-14 10:44:54.911352] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.944 [2024-07-14 10:44:54.911357] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:09.944 [2024-07-14 10:44:54.911371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.944 qpair failed and we were unable to recover it. 00:36:09.944 [2024-07-14 10:44:54.921322] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.204 [2024-07-14 10:44:54.921394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.204 [2024-07-14 10:44:54.921410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.204 [2024-07-14 10:44:54.921417] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.204 [2024-07-14 10:44:54.921423] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.204 [2024-07-14 10:44:54.921438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.204 qpair failed and we were unable to recover it. 
00:36:10.204 [2024-07-14 10:44:54.931339] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.204 [2024-07-14 10:44:54.931397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.204 [2024-07-14 10:44:54.931412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.204 [2024-07-14 10:44:54.931418] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.204 [2024-07-14 10:44:54.931424] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.204 [2024-07-14 10:44:54.931438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.204 qpair failed and we were unable to recover it. 00:36:10.204 [2024-07-14 10:44:54.941354] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.204 [2024-07-14 10:44:54.941414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.204 [2024-07-14 10:44:54.941428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.204 [2024-07-14 10:44:54.941435] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.204 [2024-07-14 10:44:54.941440] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.204 [2024-07-14 10:44:54.941454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.204 qpair failed and we were unable to recover it. 00:36:10.204 [2024-07-14 10:44:54.951427] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.204 [2024-07-14 10:44:54.951487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.204 [2024-07-14 10:44:54.951501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.204 [2024-07-14 10:44:54.951507] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.204 [2024-07-14 10:44:54.951512] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.204 [2024-07-14 10:44:54.951526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.204 qpair failed and we were unable to recover it. 
00:36:10.204 [2024-07-14 10:44:54.961417] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.204 [2024-07-14 10:44:54.961480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.204 [2024-07-14 10:44:54.961493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.204 [2024-07-14 10:44:54.961500] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.204 [2024-07-14 10:44:54.961505] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.204 [2024-07-14 10:44:54.961519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.204 qpair failed and we were unable to recover it. 00:36:10.204 [2024-07-14 10:44:54.971442] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.204 [2024-07-14 10:44:54.971499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.204 [2024-07-14 10:44:54.971515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.204 [2024-07-14 10:44:54.971521] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.204 [2024-07-14 10:44:54.971527] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.204 [2024-07-14 10:44:54.971541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.204 qpair failed and we were unable to recover it. 00:36:10.204 [2024-07-14 10:44:54.981468] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.204 [2024-07-14 10:44:54.981526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.204 [2024-07-14 10:44:54.981543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.204 [2024-07-14 10:44:54.981550] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.204 [2024-07-14 10:44:54.981556] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.204 [2024-07-14 10:44:54.981569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.204 qpair failed and we were unable to recover it. 
00:36:10.204 [2024-07-14 10:44:54.991492] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.204 [2024-07-14 10:44:54.991545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.204 [2024-07-14 10:44:54.991559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.205 [2024-07-14 10:44:54.991566] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.205 [2024-07-14 10:44:54.991572] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.205 [2024-07-14 10:44:54.991585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.205 qpair failed and we were unable to recover it. 00:36:10.205 [2024-07-14 10:44:55.001560] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.205 [2024-07-14 10:44:55.001622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.205 [2024-07-14 10:44:55.001637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.205 [2024-07-14 10:44:55.001643] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.205 [2024-07-14 10:44:55.001648] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.205 [2024-07-14 10:44:55.001662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.205 qpair failed and we were unable to recover it. 00:36:10.205 [2024-07-14 10:44:55.011587] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.205 [2024-07-14 10:44:55.011665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.205 [2024-07-14 10:44:55.011679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.205 [2024-07-14 10:44:55.011685] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.205 [2024-07-14 10:44:55.011691] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.205 [2024-07-14 10:44:55.011704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.205 qpair failed and we were unable to recover it. 
00:36:10.205 [2024-07-14 10:44:55.021564] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.205 [2024-07-14 10:44:55.021622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.205 [2024-07-14 10:44:55.021636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.205 [2024-07-14 10:44:55.021642] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.205 [2024-07-14 10:44:55.021648] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.205 [2024-07-14 10:44:55.021665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.205 qpair failed and we were unable to recover it. 00:36:10.205 [2024-07-14 10:44:55.031587] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.205 [2024-07-14 10:44:55.031640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.205 [2024-07-14 10:44:55.031654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.205 [2024-07-14 10:44:55.031660] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.205 [2024-07-14 10:44:55.031666] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.205 [2024-07-14 10:44:55.031679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.205 qpair failed and we were unable to recover it. 00:36:10.205 [2024-07-14 10:44:55.041686] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.205 [2024-07-14 10:44:55.041740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.205 [2024-07-14 10:44:55.041753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.205 [2024-07-14 10:44:55.041760] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.205 [2024-07-14 10:44:55.041765] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.205 [2024-07-14 10:44:55.041779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.205 qpair failed and we were unable to recover it. 
00:36:10.205 [2024-07-14 10:44:55.051721] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.205 [2024-07-14 10:44:55.051791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.205 [2024-07-14 10:44:55.051805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.205 [2024-07-14 10:44:55.051811] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.205 [2024-07-14 10:44:55.051817] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.205 [2024-07-14 10:44:55.051830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.205 qpair failed and we were unable to recover it. 00:36:10.205 [2024-07-14 10:44:55.061676] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.205 [2024-07-14 10:44:55.061734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.205 [2024-07-14 10:44:55.061748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.205 [2024-07-14 10:44:55.061754] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.205 [2024-07-14 10:44:55.061759] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.205 [2024-07-14 10:44:55.061773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.205 qpair failed and we were unable to recover it. 00:36:10.205 [2024-07-14 10:44:55.071732] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.205 [2024-07-14 10:44:55.071785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.205 [2024-07-14 10:44:55.071802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.205 [2024-07-14 10:44:55.071809] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.205 [2024-07-14 10:44:55.071814] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.205 [2024-07-14 10:44:55.071828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.205 qpair failed and we were unable to recover it. 
00:36:10.205 [2024-07-14 10:44:55.081751] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.205 [2024-07-14 10:44:55.081814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.205 [2024-07-14 10:44:55.081828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.205 [2024-07-14 10:44:55.081834] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.205 [2024-07-14 10:44:55.081840] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.205 [2024-07-14 10:44:55.081853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.205 qpair failed and we were unable to recover it. 00:36:10.205 [2024-07-14 10:44:55.091724] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.205 [2024-07-14 10:44:55.091787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.205 [2024-07-14 10:44:55.091801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.205 [2024-07-14 10:44:55.091807] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.205 [2024-07-14 10:44:55.091813] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.205 [2024-07-14 10:44:55.091826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.205 qpair failed and we were unable to recover it. 00:36:10.205 [2024-07-14 10:44:55.101858] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.205 [2024-07-14 10:44:55.101915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.205 [2024-07-14 10:44:55.101929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.205 [2024-07-14 10:44:55.101935] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.205 [2024-07-14 10:44:55.101941] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.205 [2024-07-14 10:44:55.101954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.205 qpair failed and we were unable to recover it. 
00:36:10.205 [2024-07-14 10:44:55.111787] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.205 [2024-07-14 10:44:55.111846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.205 [2024-07-14 10:44:55.111860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.205 [2024-07-14 10:44:55.111866] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.205 [2024-07-14 10:44:55.111876] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.205 [2024-07-14 10:44:55.111890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.205 qpair failed and we were unable to recover it. 00:36:10.205 [2024-07-14 10:44:55.121818] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.205 [2024-07-14 10:44:55.121876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.205 [2024-07-14 10:44:55.121889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.205 [2024-07-14 10:44:55.121895] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.205 [2024-07-14 10:44:55.121901] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.205 [2024-07-14 10:44:55.121915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.205 qpair failed and we were unable to recover it. 00:36:10.205 [2024-07-14 10:44:55.131974] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.205 [2024-07-14 10:44:55.132037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.205 [2024-07-14 10:44:55.132051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.205 [2024-07-14 10:44:55.132057] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.206 [2024-07-14 10:44:55.132063] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.206 [2024-07-14 10:44:55.132076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.206 qpair failed and we were unable to recover it. 
00:36:10.206 [2024-07-14 10:44:55.141927] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.206 [2024-07-14 10:44:55.142000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.206 [2024-07-14 10:44:55.142014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.206 [2024-07-14 10:44:55.142020] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.206 [2024-07-14 10:44:55.142026] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.206 [2024-07-14 10:44:55.142039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.206 qpair failed and we were unable to recover it. 00:36:10.206 [2024-07-14 10:44:55.151954] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.206 [2024-07-14 10:44:55.152011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.206 [2024-07-14 10:44:55.152025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.206 [2024-07-14 10:44:55.152031] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.206 [2024-07-14 10:44:55.152037] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.206 [2024-07-14 10:44:55.152050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.206 qpair failed and we were unable to recover it. 00:36:10.206 [2024-07-14 10:44:55.162022] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.206 [2024-07-14 10:44:55.162086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.206 [2024-07-14 10:44:55.162100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.206 [2024-07-14 10:44:55.162106] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.206 [2024-07-14 10:44:55.162111] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.206 [2024-07-14 10:44:55.162125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.206 qpair failed and we were unable to recover it. 
00:36:10.206 [2024-07-14 10:44:55.172016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.206 [2024-07-14 10:44:55.172093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.206 [2024-07-14 10:44:55.172107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.206 [2024-07-14 10:44:55.172113] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.206 [2024-07-14 10:44:55.172119] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.206 [2024-07-14 10:44:55.172132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.206 qpair failed and we were unable to recover it. 00:36:10.206 [2024-07-14 10:44:55.182055] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.206 [2024-07-14 10:44:55.182114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.206 [2024-07-14 10:44:55.182128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.206 [2024-07-14 10:44:55.182134] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.206 [2024-07-14 10:44:55.182140] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.206 [2024-07-14 10:44:55.182154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.206 qpair failed and we were unable to recover it. 00:36:10.466 [2024-07-14 10:44:55.192121] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.466 [2024-07-14 10:44:55.192234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.466 [2024-07-14 10:44:55.192249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.466 [2024-07-14 10:44:55.192255] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.466 [2024-07-14 10:44:55.192261] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.466 [2024-07-14 10:44:55.192275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.466 qpair failed and we were unable to recover it. 
00:36:10.466 [2024-07-14 10:44:55.202117] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.466 [2024-07-14 10:44:55.202172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.466 [2024-07-14 10:44:55.202186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.466 [2024-07-14 10:44:55.202193] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.466 [2024-07-14 10:44:55.202201] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.466 [2024-07-14 10:44:55.202215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.466 qpair failed and we were unable to recover it. 00:36:10.466 [2024-07-14 10:44:55.212164] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.466 [2024-07-14 10:44:55.212218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.466 [2024-07-14 10:44:55.212238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.466 [2024-07-14 10:44:55.212244] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.466 [2024-07-14 10:44:55.212250] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.466 [2024-07-14 10:44:55.212263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.466 qpair failed and we were unable to recover it. 00:36:10.466 [2024-07-14 10:44:55.222166] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.466 [2024-07-14 10:44:55.222221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.466 [2024-07-14 10:44:55.222240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.466 [2024-07-14 10:44:55.222246] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.466 [2024-07-14 10:44:55.222252] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.466 [2024-07-14 10:44:55.222266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.466 qpair failed and we were unable to recover it. 
00:36:10.466 [2024-07-14 10:44:55.232246] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.466 [2024-07-14 10:44:55.232299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.466 [2024-07-14 10:44:55.232313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.466 [2024-07-14 10:44:55.232320] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.466 [2024-07-14 10:44:55.232326] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.466 [2024-07-14 10:44:55.232340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.466 qpair failed and we were unable to recover it. 00:36:10.466 [2024-07-14 10:44:55.242213] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.466 [2024-07-14 10:44:55.242275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.466 [2024-07-14 10:44:55.242289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.466 [2024-07-14 10:44:55.242295] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.466 [2024-07-14 10:44:55.242301] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.466 [2024-07-14 10:44:55.242315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.466 qpair failed and we were unable to recover it. 00:36:10.466 [2024-07-14 10:44:55.252271] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.466 [2024-07-14 10:44:55.252325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.466 [2024-07-14 10:44:55.252340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.466 [2024-07-14 10:44:55.252346] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.466 [2024-07-14 10:44:55.252352] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.466 [2024-07-14 10:44:55.252366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.466 qpair failed and we were unable to recover it. 
00:36:10.466 [2024-07-14 10:44:55.262272] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.466 [2024-07-14 10:44:55.262332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.466 [2024-07-14 10:44:55.262346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.466 [2024-07-14 10:44:55.262352] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.466 [2024-07-14 10:44:55.262358] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.466 [2024-07-14 10:44:55.262372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.466 qpair failed and we were unable to recover it. 00:36:10.466 [2024-07-14 10:44:55.272342] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.466 [2024-07-14 10:44:55.272403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.466 [2024-07-14 10:44:55.272416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.466 [2024-07-14 10:44:55.272423] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.466 [2024-07-14 10:44:55.272429] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.466 [2024-07-14 10:44:55.272443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.466 qpair failed and we were unable to recover it. 00:36:10.466 [2024-07-14 10:44:55.282330] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.466 [2024-07-14 10:44:55.282427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.466 [2024-07-14 10:44:55.282441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.466 [2024-07-14 10:44:55.282448] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.466 [2024-07-14 10:44:55.282454] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.466 [2024-07-14 10:44:55.282468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.466 qpair failed and we were unable to recover it. 
00:36:10.466 [2024-07-14 10:44:55.292430] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.466 [2024-07-14 10:44:55.292509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.466 [2024-07-14 10:44:55.292523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.466 [2024-07-14 10:44:55.292533] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.466 [2024-07-14 10:44:55.292538] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.466 [2024-07-14 10:44:55.292552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.466 qpair failed and we were unable to recover it. 00:36:10.466 [2024-07-14 10:44:55.302369] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.467 [2024-07-14 10:44:55.302445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.467 [2024-07-14 10:44:55.302459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.467 [2024-07-14 10:44:55.302465] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.467 [2024-07-14 10:44:55.302471] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.467 [2024-07-14 10:44:55.302485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.467 qpair failed and we were unable to recover it. 00:36:10.467 [2024-07-14 10:44:55.312366] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.467 [2024-07-14 10:44:55.312422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.467 [2024-07-14 10:44:55.312435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.467 [2024-07-14 10:44:55.312442] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.467 [2024-07-14 10:44:55.312448] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.467 [2024-07-14 10:44:55.312461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.467 qpair failed and we were unable to recover it. 
00:36:10.467 [2024-07-14 10:44:55.322473] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.467 [2024-07-14 10:44:55.322538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.467 [2024-07-14 10:44:55.322552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.467 [2024-07-14 10:44:55.322558] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.467 [2024-07-14 10:44:55.322564] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.467 [2024-07-14 10:44:55.322579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.467 qpair failed and we were unable to recover it. 00:36:10.467 [2024-07-14 10:44:55.332426] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.467 [2024-07-14 10:44:55.332516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.467 [2024-07-14 10:44:55.332533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.467 [2024-07-14 10:44:55.332539] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.467 [2024-07-14 10:44:55.332545] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.467 [2024-07-14 10:44:55.332559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.467 qpair failed and we were unable to recover it. 00:36:10.467 [2024-07-14 10:44:55.342495] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.467 [2024-07-14 10:44:55.342557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.467 [2024-07-14 10:44:55.342571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.467 [2024-07-14 10:44:55.342577] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.467 [2024-07-14 10:44:55.342583] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.467 [2024-07-14 10:44:55.342597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.467 qpair failed and we were unable to recover it. 
00:36:10.467 [2024-07-14 10:44:55.352533] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.467 [2024-07-14 10:44:55.352584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.467 [2024-07-14 10:44:55.352598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.467 [2024-07-14 10:44:55.352605] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.467 [2024-07-14 10:44:55.352611] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.467 [2024-07-14 10:44:55.352625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.467 qpair failed and we were unable to recover it. 00:36:10.467 [2024-07-14 10:44:55.362501] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.467 [2024-07-14 10:44:55.362552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.467 [2024-07-14 10:44:55.362566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.467 [2024-07-14 10:44:55.362573] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.467 [2024-07-14 10:44:55.362578] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.467 [2024-07-14 10:44:55.362593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.467 qpair failed and we were unable to recover it. 00:36:10.467 [2024-07-14 10:44:55.372555] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.467 [2024-07-14 10:44:55.372611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.467 [2024-07-14 10:44:55.372625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.467 [2024-07-14 10:44:55.372632] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.467 [2024-07-14 10:44:55.372638] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.467 [2024-07-14 10:44:55.372652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.467 qpair failed and we were unable to recover it. 
00:36:10.467 [2024-07-14 10:44:55.382608] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.467 [2024-07-14 10:44:55.382664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.467 [2024-07-14 10:44:55.382684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.467 [2024-07-14 10:44:55.382691] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.467 [2024-07-14 10:44:55.382698] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.467 [2024-07-14 10:44:55.382713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.467 qpair failed and we were unable to recover it. 00:36:10.467 [2024-07-14 10:44:55.392652] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.467 [2024-07-14 10:44:55.392702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.467 [2024-07-14 10:44:55.392716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.467 [2024-07-14 10:44:55.392723] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.467 [2024-07-14 10:44:55.392729] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.467 [2024-07-14 10:44:55.392743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.467 qpair failed and we were unable to recover it. 00:36:10.467 [2024-07-14 10:44:55.402694] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.467 [2024-07-14 10:44:55.402750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.467 [2024-07-14 10:44:55.402764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.467 [2024-07-14 10:44:55.402771] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.467 [2024-07-14 10:44:55.402777] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.467 [2024-07-14 10:44:55.402792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.467 qpair failed and we were unable to recover it. 
00:36:10.467 [2024-07-14 10:44:55.412713] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.467 [2024-07-14 10:44:55.412769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.467 [2024-07-14 10:44:55.412783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.467 [2024-07-14 10:44:55.412789] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.467 [2024-07-14 10:44:55.412796] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.467 [2024-07-14 10:44:55.412810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.467 qpair failed and we were unable to recover it. 00:36:10.467 [2024-07-14 10:44:55.422724] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.467 [2024-07-14 10:44:55.422776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.467 [2024-07-14 10:44:55.422790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.467 [2024-07-14 10:44:55.422797] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.467 [2024-07-14 10:44:55.422803] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.467 [2024-07-14 10:44:55.422820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.467 qpair failed and we were unable to recover it. 00:36:10.467 [2024-07-14 10:44:55.432771] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.467 [2024-07-14 10:44:55.432853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.467 [2024-07-14 10:44:55.432867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.467 [2024-07-14 10:44:55.432874] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.467 [2024-07-14 10:44:55.432880] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.467 [2024-07-14 10:44:55.432894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.467 qpair failed and we were unable to recover it. 
00:36:10.467 [2024-07-14 10:44:55.442834] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.468 [2024-07-14 10:44:55.442892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.468 [2024-07-14 10:44:55.442907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.468 [2024-07-14 10:44:55.442914] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.468 [2024-07-14 10:44:55.442921] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.468 [2024-07-14 10:44:55.442936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.468 qpair failed and we were unable to recover it. 00:36:10.728 [2024-07-14 10:44:55.452811] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.728 [2024-07-14 10:44:55.452867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.728 [2024-07-14 10:44:55.452884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.728 [2024-07-14 10:44:55.452891] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.728 [2024-07-14 10:44:55.452898] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.728 [2024-07-14 10:44:55.452913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.728 qpair failed and we were unable to recover it. 00:36:10.728 [2024-07-14 10:44:55.462847] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.728 [2024-07-14 10:44:55.462906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.728 [2024-07-14 10:44:55.462920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.728 [2024-07-14 10:44:55.462927] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.728 [2024-07-14 10:44:55.462934] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.728 [2024-07-14 10:44:55.462948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.728 qpair failed and we were unable to recover it. 
00:36:10.728 [2024-07-14 10:44:55.472830] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.728 [2024-07-14 10:44:55.472887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.728 [2024-07-14 10:44:55.472904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.728 [2024-07-14 10:44:55.472911] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.728 [2024-07-14 10:44:55.472917] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.728 [2024-07-14 10:44:55.472931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.728 qpair failed and we were unable to recover it. 00:36:10.728 [2024-07-14 10:44:55.482945] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.728 [2024-07-14 10:44:55.482995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.728 [2024-07-14 10:44:55.483009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.728 [2024-07-14 10:44:55.483016] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.728 [2024-07-14 10:44:55.483023] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.728 [2024-07-14 10:44:55.483037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.728 qpair failed and we were unable to recover it. 00:36:10.728 [2024-07-14 10:44:55.492960] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.728 [2024-07-14 10:44:55.493043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.728 [2024-07-14 10:44:55.493056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.728 [2024-07-14 10:44:55.493063] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.728 [2024-07-14 10:44:55.493070] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.728 [2024-07-14 10:44:55.493085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.728 qpair failed and we were unable to recover it. 
00:36:10.729 [2024-07-14 10:44:55.503021] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.729 [2024-07-14 10:44:55.503081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.729 [2024-07-14 10:44:55.503095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.729 [2024-07-14 10:44:55.503102] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.729 [2024-07-14 10:44:55.503108] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.729 [2024-07-14 10:44:55.503122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.729 qpair failed and we were unable to recover it. 00:36:10.729 [2024-07-14 10:44:55.512936] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.729 [2024-07-14 10:44:55.512993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.729 [2024-07-14 10:44:55.513007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.729 [2024-07-14 10:44:55.513014] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.729 [2024-07-14 10:44:55.513020] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.729 [2024-07-14 10:44:55.513037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.729 qpair failed and we were unable to recover it. 00:36:10.729 [2024-07-14 10:44:55.523038] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.729 [2024-07-14 10:44:55.523095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.729 [2024-07-14 10:44:55.523112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.729 [2024-07-14 10:44:55.523119] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.729 [2024-07-14 10:44:55.523125] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.729 [2024-07-14 10:44:55.523141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.729 qpair failed and we were unable to recover it. 
00:36:10.729 [2024-07-14 10:44:55.533007] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.729 [2024-07-14 10:44:55.533066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.729 [2024-07-14 10:44:55.533080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.729 [2024-07-14 10:44:55.533087] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.729 [2024-07-14 10:44:55.533094] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.729 [2024-07-14 10:44:55.533108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.729 qpair failed and we were unable to recover it. 00:36:10.729 [2024-07-14 10:44:55.543054] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.729 [2024-07-14 10:44:55.543110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.729 [2024-07-14 10:44:55.543124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.729 [2024-07-14 10:44:55.543131] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.729 [2024-07-14 10:44:55.543138] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.729 [2024-07-14 10:44:55.543152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.729 qpair failed and we were unable to recover it. 00:36:10.729 [2024-07-14 10:44:55.553047] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.729 [2024-07-14 10:44:55.553138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.729 [2024-07-14 10:44:55.553152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.729 [2024-07-14 10:44:55.553159] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.729 [2024-07-14 10:44:55.553164] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.729 [2024-07-14 10:44:55.553179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.729 qpair failed and we were unable to recover it. 
00:36:10.729 [2024-07-14 10:44:55.563119] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.729 [2024-07-14 10:44:55.563178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.729 [2024-07-14 10:44:55.563193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.729 [2024-07-14 10:44:55.563200] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.729 [2024-07-14 10:44:55.563206] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.729 [2024-07-14 10:44:55.563220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.729 qpair failed and we were unable to recover it. 00:36:10.729 [2024-07-14 10:44:55.573174] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.729 [2024-07-14 10:44:55.573235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.729 [2024-07-14 10:44:55.573249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.729 [2024-07-14 10:44:55.573256] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.729 [2024-07-14 10:44:55.573263] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.729 [2024-07-14 10:44:55.573277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.729 qpair failed and we were unable to recover it. 00:36:10.729 [2024-07-14 10:44:55.583192] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.729 [2024-07-14 10:44:55.583256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.729 [2024-07-14 10:44:55.583270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.729 [2024-07-14 10:44:55.583277] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.729 [2024-07-14 10:44:55.583283] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.729 [2024-07-14 10:44:55.583297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.729 qpair failed and we were unable to recover it. 
00:36:10.729 [2024-07-14 10:44:55.593230] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.729 [2024-07-14 10:44:55.593289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.729 [2024-07-14 10:44:55.593303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.729 [2024-07-14 10:44:55.593309] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.729 [2024-07-14 10:44:55.593315] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.729 [2024-07-14 10:44:55.593328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.729 qpair failed and we were unable to recover it. 00:36:10.729 [2024-07-14 10:44:55.603295] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.729 [2024-07-14 10:44:55.603351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.729 [2024-07-14 10:44:55.603365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.729 [2024-07-14 10:44:55.603372] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.729 [2024-07-14 10:44:55.603381] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.729 [2024-07-14 10:44:55.603395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.729 qpair failed and we were unable to recover it. 00:36:10.729 [2024-07-14 10:44:55.613291] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.729 [2024-07-14 10:44:55.613346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.729 [2024-07-14 10:44:55.613359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.729 [2024-07-14 10:44:55.613367] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.729 [2024-07-14 10:44:55.613373] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.729 [2024-07-14 10:44:55.613387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.729 qpair failed and we were unable to recover it. 
00:36:10.729 [2024-07-14 10:44:55.623308] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.729 [2024-07-14 10:44:55.623378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.729 [2024-07-14 10:44:55.623392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.729 [2024-07-14 10:44:55.623399] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.729 [2024-07-14 10:44:55.623405] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.729 [2024-07-14 10:44:55.623419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.729 qpair failed and we were unable to recover it. 00:36:10.729 [2024-07-14 10:44:55.633352] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.729 [2024-07-14 10:44:55.633407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.729 [2024-07-14 10:44:55.633421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.729 [2024-07-14 10:44:55.633428] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.729 [2024-07-14 10:44:55.633435] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.729 [2024-07-14 10:44:55.633448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.729 qpair failed and we were unable to recover it. 00:36:10.729 [2024-07-14 10:44:55.643381] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.730 [2024-07-14 10:44:55.643436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.730 [2024-07-14 10:44:55.643451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.730 [2024-07-14 10:44:55.643458] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.730 [2024-07-14 10:44:55.643464] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.730 [2024-07-14 10:44:55.643478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.730 qpair failed and we were unable to recover it. 
00:36:10.730 [2024-07-14 10:44:55.653402] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.730 [2024-07-14 10:44:55.653460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.730 [2024-07-14 10:44:55.653475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.730 [2024-07-14 10:44:55.653482] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.730 [2024-07-14 10:44:55.653488] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.730 [2024-07-14 10:44:55.653502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.730 qpair failed and we were unable to recover it. 00:36:10.730 [2024-07-14 10:44:55.663429] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.730 [2024-07-14 10:44:55.663482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.730 [2024-07-14 10:44:55.663496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.730 [2024-07-14 10:44:55.663503] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.730 [2024-07-14 10:44:55.663509] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.730 [2024-07-14 10:44:55.663523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.730 qpair failed and we were unable to recover it. 00:36:10.730 [2024-07-14 10:44:55.673470] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.730 [2024-07-14 10:44:55.673530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.730 [2024-07-14 10:44:55.673544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.730 [2024-07-14 10:44:55.673551] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.730 [2024-07-14 10:44:55.673558] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.730 [2024-07-14 10:44:55.673572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.730 qpair failed and we were unable to recover it. 
00:36:10.730 [2024-07-14 10:44:55.683489] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.730 [2024-07-14 10:44:55.683540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.730 [2024-07-14 10:44:55.683554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.730 [2024-07-14 10:44:55.683560] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.730 [2024-07-14 10:44:55.683566] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.730 [2024-07-14 10:44:55.683581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.730 qpair failed and we were unable to recover it. 00:36:10.730 [2024-07-14 10:44:55.693452] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.730 [2024-07-14 10:44:55.693521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.730 [2024-07-14 10:44:55.693536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.730 [2024-07-14 10:44:55.693546] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.730 [2024-07-14 10:44:55.693552] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.730 [2024-07-14 10:44:55.693566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.730 qpair failed and we were unable to recover it. 00:36:10.730 [2024-07-14 10:44:55.703549] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.730 [2024-07-14 10:44:55.703611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.730 [2024-07-14 10:44:55.703625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.730 [2024-07-14 10:44:55.703632] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.730 [2024-07-14 10:44:55.703638] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.730 [2024-07-14 10:44:55.703652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.730 qpair failed and we were unable to recover it. 
00:36:10.990 [2024-07-14 10:44:55.713616] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.990 [2024-07-14 10:44:55.713723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.990 [2024-07-14 10:44:55.713739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.990 [2024-07-14 10:44:55.713746] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.990 [2024-07-14 10:44:55.713753] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.990 [2024-07-14 10:44:55.713767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.990 qpair failed and we were unable to recover it. 00:36:10.990 [2024-07-14 10:44:55.723553] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.990 [2024-07-14 10:44:55.723638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.990 [2024-07-14 10:44:55.723652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.990 [2024-07-14 10:44:55.723659] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.990 [2024-07-14 10:44:55.723665] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.990 [2024-07-14 10:44:55.723679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.990 qpair failed and we were unable to recover it. 00:36:10.990 [2024-07-14 10:44:55.733632] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.990 [2024-07-14 10:44:55.733702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.990 [2024-07-14 10:44:55.733717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.990 [2024-07-14 10:44:55.733723] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.990 [2024-07-14 10:44:55.733729] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.990 [2024-07-14 10:44:55.733743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.990 qpair failed and we were unable to recover it. 
00:36:10.990 [2024-07-14 10:44:55.743640] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.990 [2024-07-14 10:44:55.743700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.990 [2024-07-14 10:44:55.743714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.990 [2024-07-14 10:44:55.743721] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.990 [2024-07-14 10:44:55.743727] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.990 [2024-07-14 10:44:55.743741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.990 qpair failed and we were unable to recover it. 00:36:10.990 [2024-07-14 10:44:55.753637] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.990 [2024-07-14 10:44:55.753695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.990 [2024-07-14 10:44:55.753710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.990 [2024-07-14 10:44:55.753717] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.990 [2024-07-14 10:44:55.753724] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.990 [2024-07-14 10:44:55.753738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.990 qpair failed and we were unable to recover it. 00:36:10.990 [2024-07-14 10:44:55.763722] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.990 [2024-07-14 10:44:55.763777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.990 [2024-07-14 10:44:55.763792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.990 [2024-07-14 10:44:55.763799] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.990 [2024-07-14 10:44:55.763806] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.990 [2024-07-14 10:44:55.763820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.990 qpair failed and we were unable to recover it. 
00:36:10.990 [2024-07-14 10:44:55.773750] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.990 [2024-07-14 10:44:55.773803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.990 [2024-07-14 10:44:55.773817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.990 [2024-07-14 10:44:55.773825] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.990 [2024-07-14 10:44:55.773831] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.990 [2024-07-14 10:44:55.773846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.990 qpair failed and we were unable to recover it. 00:36:10.990 [2024-07-14 10:44:55.783780] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.990 [2024-07-14 10:44:55.783835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.990 [2024-07-14 10:44:55.783852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.990 [2024-07-14 10:44:55.783859] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.990 [2024-07-14 10:44:55.783865] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.990 [2024-07-14 10:44:55.783879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.990 qpair failed and we were unable to recover it. 00:36:10.990 [2024-07-14 10:44:55.793793] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.990 [2024-07-14 10:44:55.793847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.990 [2024-07-14 10:44:55.793861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.990 [2024-07-14 10:44:55.793869] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.990 [2024-07-14 10:44:55.793875] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.990 [2024-07-14 10:44:55.793889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.990 qpair failed and we were unable to recover it. 
00:36:10.990 [2024-07-14 10:44:55.803828] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.990 [2024-07-14 10:44:55.803883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.990 [2024-07-14 10:44:55.803896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.990 [2024-07-14 10:44:55.803904] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.990 [2024-07-14 10:44:55.803910] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.990 [2024-07-14 10:44:55.803924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.990 qpair failed and we were unable to recover it. 00:36:10.990 [2024-07-14 10:44:55.813863] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.990 [2024-07-14 10:44:55.813915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.990 [2024-07-14 10:44:55.813930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.990 [2024-07-14 10:44:55.813936] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.990 [2024-07-14 10:44:55.813942] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.991 [2024-07-14 10:44:55.813956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.991 qpair failed and we were unable to recover it. 00:36:10.991 [2024-07-14 10:44:55.823888] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.991 [2024-07-14 10:44:55.823939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.991 [2024-07-14 10:44:55.823953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.991 [2024-07-14 10:44:55.823959] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.991 [2024-07-14 10:44:55.823966] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.991 [2024-07-14 10:44:55.823980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.991 qpair failed and we were unable to recover it. 
00:36:10.991 [2024-07-14 10:44:55.833884] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.991 [2024-07-14 10:44:55.833938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.991 [2024-07-14 10:44:55.833952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.991 [2024-07-14 10:44:55.833959] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.991 [2024-07-14 10:44:55.833966] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.991 [2024-07-14 10:44:55.833980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.991 qpair failed and we were unable to recover it. 00:36:10.991 [2024-07-14 10:44:55.844013] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.991 [2024-07-14 10:44:55.844071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.991 [2024-07-14 10:44:55.844085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.991 [2024-07-14 10:44:55.844092] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.991 [2024-07-14 10:44:55.844099] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.991 [2024-07-14 10:44:55.844113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.991 qpair failed and we were unable to recover it. 00:36:10.991 [2024-07-14 10:44:55.853997] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.991 [2024-07-14 10:44:55.854050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.991 [2024-07-14 10:44:55.854064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.991 [2024-07-14 10:44:55.854071] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.991 [2024-07-14 10:44:55.854077] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.991 [2024-07-14 10:44:55.854091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.991 qpair failed and we were unable to recover it. 
00:36:10.991 [2024-07-14 10:44:55.864005] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.991 [2024-07-14 10:44:55.864066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.991 [2024-07-14 10:44:55.864080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.991 [2024-07-14 10:44:55.864087] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.991 [2024-07-14 10:44:55.864093] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.991 [2024-07-14 10:44:55.864107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.991 qpair failed and we were unable to recover it. 00:36:10.991 [2024-07-14 10:44:55.874076] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.991 [2024-07-14 10:44:55.874139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.991 [2024-07-14 10:44:55.874156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.991 [2024-07-14 10:44:55.874164] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.991 [2024-07-14 10:44:55.874170] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.991 [2024-07-14 10:44:55.874184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.991 qpair failed and we were unable to recover it. 00:36:10.991 [2024-07-14 10:44:55.884100] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.991 [2024-07-14 10:44:55.884159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.991 [2024-07-14 10:44:55.884173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.991 [2024-07-14 10:44:55.884181] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.991 [2024-07-14 10:44:55.884187] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.991 [2024-07-14 10:44:55.884201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.991 qpair failed and we were unable to recover it. 
00:36:10.991 [2024-07-14 10:44:55.894090] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.991 [2024-07-14 10:44:55.894156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.991 [2024-07-14 10:44:55.894170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.991 [2024-07-14 10:44:55.894177] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.991 [2024-07-14 10:44:55.894183] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.991 [2024-07-14 10:44:55.894197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.991 qpair failed and we were unable to recover it. 00:36:10.991 [2024-07-14 10:44:55.904118] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.991 [2024-07-14 10:44:55.904177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.991 [2024-07-14 10:44:55.904191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.991 [2024-07-14 10:44:55.904198] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.991 [2024-07-14 10:44:55.904204] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.991 [2024-07-14 10:44:55.904218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.991 qpair failed and we were unable to recover it. 00:36:10.991 [2024-07-14 10:44:55.914146] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.991 [2024-07-14 10:44:55.914204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.991 [2024-07-14 10:44:55.914218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.991 [2024-07-14 10:44:55.914228] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.991 [2024-07-14 10:44:55.914235] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.991 [2024-07-14 10:44:55.914252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.991 qpair failed and we were unable to recover it. 
00:36:10.991 [2024-07-14 10:44:55.924171] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.991 [2024-07-14 10:44:55.924229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.991 [2024-07-14 10:44:55.924243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.991 [2024-07-14 10:44:55.924250] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.991 [2024-07-14 10:44:55.924256] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.991 [2024-07-14 10:44:55.924270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.991 qpair failed and we were unable to recover it. 00:36:10.991 [2024-07-14 10:44:55.934212] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.991 [2024-07-14 10:44:55.934279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.991 [2024-07-14 10:44:55.934294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.991 [2024-07-14 10:44:55.934301] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.991 [2024-07-14 10:44:55.934307] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.991 [2024-07-14 10:44:55.934321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.991 qpair failed and we were unable to recover it. 00:36:10.991 [2024-07-14 10:44:55.944236] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.991 [2024-07-14 10:44:55.944300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.991 [2024-07-14 10:44:55.944315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.991 [2024-07-14 10:44:55.944322] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.991 [2024-07-14 10:44:55.944328] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.991 [2024-07-14 10:44:55.944343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.991 qpair failed and we were unable to recover it. 
00:36:10.991 [2024-07-14 10:44:55.954270] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.991 [2024-07-14 10:44:55.954326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.991 [2024-07-14 10:44:55.954340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.991 [2024-07-14 10:44:55.954347] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.991 [2024-07-14 10:44:55.954353] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.991 [2024-07-14 10:44:55.954367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.991 qpair failed and we were unable to recover it. 00:36:10.991 [2024-07-14 10:44:55.964314] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.992 [2024-07-14 10:44:55.964364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.992 [2024-07-14 10:44:55.964382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.992 [2024-07-14 10:44:55.964389] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.992 [2024-07-14 10:44:55.964395] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:10.992 [2024-07-14 10:44:55.964409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.992 qpair failed and we were unable to recover it. 00:36:11.251 [2024-07-14 10:44:55.974323] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.251 [2024-07-14 10:44:55.974380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.251 [2024-07-14 10:44:55.974395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.251 [2024-07-14 10:44:55.974402] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.251 [2024-07-14 10:44:55.974408] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:11.251 [2024-07-14 10:44:55.974422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.251 qpair failed and we were unable to recover it. 
00:36:11.251 [2024-07-14 10:44:55.984400] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.251 [2024-07-14 10:44:55.984484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.251 [2024-07-14 10:44:55.984499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.251 [2024-07-14 10:44:55.984506] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.251 [2024-07-14 10:44:55.984512] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:11.251 [2024-07-14 10:44:55.984526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.251 qpair failed and we were unable to recover it. 00:36:11.251 [2024-07-14 10:44:55.994415] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.251 [2024-07-14 10:44:55.994469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.251 [2024-07-14 10:44:55.994483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.251 [2024-07-14 10:44:55.994490] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.251 [2024-07-14 10:44:55.994496] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:11.251 [2024-07-14 10:44:55.994510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.251 qpair failed and we were unable to recover it. 00:36:11.251 [2024-07-14 10:44:56.004410] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.251 [2024-07-14 10:44:56.004472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.251 [2024-07-14 10:44:56.004487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.251 [2024-07-14 10:44:56.004494] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.251 [2024-07-14 10:44:56.004503] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:11.251 [2024-07-14 10:44:56.004517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.251 qpair failed and we were unable to recover it. 
00:36:11.251 [2024-07-14 10:44:56.014440] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.251 [2024-07-14 10:44:56.014492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.251 [2024-07-14 10:44:56.014506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.251 [2024-07-14 10:44:56.014513] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.251 [2024-07-14 10:44:56.014519] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:11.251 [2024-07-14 10:44:56.014533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.251 qpair failed and we were unable to recover it. 00:36:11.251 [2024-07-14 10:44:56.024469] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.251 [2024-07-14 10:44:56.024528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.251 [2024-07-14 10:44:56.024542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.251 [2024-07-14 10:44:56.024549] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.251 [2024-07-14 10:44:56.024555] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:11.251 [2024-07-14 10:44:56.024569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.252 qpair failed and we were unable to recover it. 00:36:11.252 [2024-07-14 10:44:56.034499] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.252 [2024-07-14 10:44:56.034566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.252 [2024-07-14 10:44:56.034580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.252 [2024-07-14 10:44:56.034587] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.252 [2024-07-14 10:44:56.034593] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:11.252 [2024-07-14 10:44:56.034606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.252 qpair failed and we were unable to recover it. 
00:36:11.252 [2024-07-14 10:44:56.044536] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.252 [2024-07-14 10:44:56.044591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.252 [2024-07-14 10:44:56.044605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.252 [2024-07-14 10:44:56.044612] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.252 [2024-07-14 10:44:56.044619] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:11.252 [2024-07-14 10:44:56.044633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.252 qpair failed and we were unable to recover it. 00:36:11.252 [2024-07-14 10:44:56.054567] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.252 [2024-07-14 10:44:56.054630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.252 [2024-07-14 10:44:56.054644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.252 [2024-07-14 10:44:56.054651] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.252 [2024-07-14 10:44:56.054657] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:11.252 [2024-07-14 10:44:56.054671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.252 qpair failed and we were unable to recover it. 00:36:11.252 [2024-07-14 10:44:56.064525] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.252 [2024-07-14 10:44:56.064593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.252 [2024-07-14 10:44:56.064607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.252 [2024-07-14 10:44:56.064614] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.252 [2024-07-14 10:44:56.064620] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:11.252 [2024-07-14 10:44:56.064634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.252 qpair failed and we were unable to recover it. 
00:36:11.252 [2024-07-14 10:44:56.074612] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.252 [2024-07-14 10:44:56.074662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.252 [2024-07-14 10:44:56.074676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.252 [2024-07-14 10:44:56.074683] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.252 [2024-07-14 10:44:56.074689] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:11.252 [2024-07-14 10:44:56.074703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.252 qpair failed and we were unable to recover it. 00:36:11.252 [2024-07-14 10:44:56.084644] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.252 [2024-07-14 10:44:56.084716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.252 [2024-07-14 10:44:56.084732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.252 [2024-07-14 10:44:56.084739] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.252 [2024-07-14 10:44:56.084745] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:11.252 [2024-07-14 10:44:56.084759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.252 qpair failed and we were unable to recover it. 00:36:11.252 [2024-07-14 10:44:56.094675] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.252 [2024-07-14 10:44:56.094732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.252 [2024-07-14 10:44:56.094746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.252 [2024-07-14 10:44:56.094756] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.252 [2024-07-14 10:44:56.094762] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:11.252 [2024-07-14 10:44:56.094776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.252 qpair failed and we were unable to recover it. 
00:36:11.252 [2024-07-14 10:44:56.104701] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.252 [2024-07-14 10:44:56.104761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.252 [2024-07-14 10:44:56.104775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.252 [2024-07-14 10:44:56.104783] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.252 [2024-07-14 10:44:56.104789] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:11.252 [2024-07-14 10:44:56.104804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.252 qpair failed and we were unable to recover it. 00:36:11.252 [2024-07-14 10:44:56.114722] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.252 [2024-07-14 10:44:56.114780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.252 [2024-07-14 10:44:56.114794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.252 [2024-07-14 10:44:56.114802] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.252 [2024-07-14 10:44:56.114808] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:11.252 [2024-07-14 10:44:56.114822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.252 qpair failed and we were unable to recover it. 00:36:11.252 [2024-07-14 10:44:56.124815] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.252 [2024-07-14 10:44:56.124917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.252 [2024-07-14 10:44:56.124931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.252 [2024-07-14 10:44:56.124939] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.252 [2024-07-14 10:44:56.124945] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:11.252 [2024-07-14 10:44:56.124960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.252 qpair failed and we were unable to recover it. 
00:36:11.252 [2024-07-14 10:44:56.134779] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.252 [2024-07-14 10:44:56.134858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.252 [2024-07-14 10:44:56.134872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.252 [2024-07-14 10:44:56.134879] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.252 [2024-07-14 10:44:56.134886] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:11.252 [2024-07-14 10:44:56.134900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.252 qpair failed and we were unable to recover it. 00:36:11.252 [2024-07-14 10:44:56.144818] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.252 [2024-07-14 10:44:56.144874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.252 [2024-07-14 10:44:56.144888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.252 [2024-07-14 10:44:56.144895] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.252 [2024-07-14 10:44:56.144901] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:11.252 [2024-07-14 10:44:56.144915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.252 qpair failed and we were unable to recover it. 00:36:11.252 [2024-07-14 10:44:56.154773] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.252 [2024-07-14 10:44:56.154834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.252 [2024-07-14 10:44:56.154848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.252 [2024-07-14 10:44:56.154855] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.252 [2024-07-14 10:44:56.154860] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:11.252 [2024-07-14 10:44:56.154874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.252 qpair failed and we were unable to recover it. 
00:36:11.252 [2024-07-14 10:44:56.164863] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.252 [2024-07-14 10:44:56.164914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.252 [2024-07-14 10:44:56.164928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.252 [2024-07-14 10:44:56.164934] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.252 [2024-07-14 10:44:56.164941] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:11.252 [2024-07-14 10:44:56.164955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.252 qpair failed and we were unable to recover it. 00:36:11.252 [2024-07-14 10:44:56.174903] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.252 [2024-07-14 10:44:56.174972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.253 [2024-07-14 10:44:56.174986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.253 [2024-07-14 10:44:56.174993] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.253 [2024-07-14 10:44:56.175000] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:11.253 [2024-07-14 10:44:56.175013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.253 qpair failed and we were unable to recover it. 00:36:11.253 [2024-07-14 10:44:56.184919] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.253 [2024-07-14 10:44:56.184975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.253 [2024-07-14 10:44:56.184991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.253 [2024-07-14 10:44:56.185001] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.253 [2024-07-14 10:44:56.185008] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:11.253 [2024-07-14 10:44:56.185022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.253 qpair failed and we were unable to recover it. 
00:36:11.253 [2024-07-14 10:44:56.194959] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.253 [2024-07-14 10:44:56.195012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.253 [2024-07-14 10:44:56.195027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.253 [2024-07-14 10:44:56.195033] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.253 [2024-07-14 10:44:56.195040] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:11.253 [2024-07-14 10:44:56.195054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.253 qpair failed and we were unable to recover it. 00:36:11.253 [2024-07-14 10:44:56.204994] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.253 [2024-07-14 10:44:56.205044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.253 [2024-07-14 10:44:56.205058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.253 [2024-07-14 10:44:56.205065] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.253 [2024-07-14 10:44:56.205071] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:11.253 [2024-07-14 10:44:56.205085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.253 qpair failed and we were unable to recover it. 00:36:11.253 [2024-07-14 10:44:56.215029] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.253 [2024-07-14 10:44:56.215109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.253 [2024-07-14 10:44:56.215124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.253 [2024-07-14 10:44:56.215131] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.253 [2024-07-14 10:44:56.215137] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:11.253 [2024-07-14 10:44:56.215151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.253 qpair failed and we were unable to recover it. 
00:36:11.253 [2024-07-14 10:44:56.225075] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.253 [2024-07-14 10:44:56.225141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.253 [2024-07-14 10:44:56.225156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.253 [2024-07-14 10:44:56.225162] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.253 [2024-07-14 10:44:56.225169] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:11.253 [2024-07-14 10:44:56.225183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.253 qpair failed and we were unable to recover it. 00:36:11.513 [2024-07-14 10:44:56.235065] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.513 [2024-07-14 10:44:56.235125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.513 [2024-07-14 10:44:56.235139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.513 [2024-07-14 10:44:56.235146] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.513 [2024-07-14 10:44:56.235152] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:11.513 [2024-07-14 10:44:56.235166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.513 qpair failed and we were unable to recover it. 00:36:11.513 [2024-07-14 10:44:56.245095] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.513 [2024-07-14 10:44:56.245150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.513 [2024-07-14 10:44:56.245165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.513 [2024-07-14 10:44:56.245172] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.513 [2024-07-14 10:44:56.245178] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:11.513 [2024-07-14 10:44:56.245192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.513 qpair failed and we were unable to recover it. 
00:36:11.513 [2024-07-14 10:44:56.255128] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.513 [2024-07-14 10:44:56.255184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.513 [2024-07-14 10:44:56.255198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.513 [2024-07-14 10:44:56.255205] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.513 [2024-07-14 10:44:56.255212] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:11.513 [2024-07-14 10:44:56.255230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.513 qpair failed and we were unable to recover it. 00:36:11.513 [2024-07-14 10:44:56.265154] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.513 [2024-07-14 10:44:56.265211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.513 [2024-07-14 10:44:56.265229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.513 [2024-07-14 10:44:56.265236] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.513 [2024-07-14 10:44:56.265242] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:11.513 [2024-07-14 10:44:56.265257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.513 qpair failed and we were unable to recover it. 00:36:11.513 [2024-07-14 10:44:56.275187] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.513 [2024-07-14 10:44:56.275244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.513 [2024-07-14 10:44:56.275261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.513 [2024-07-14 10:44:56.275269] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.513 [2024-07-14 10:44:56.275275] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:11.513 [2024-07-14 10:44:56.275289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.513 qpair failed and we were unable to recover it. 
00:36:11.513 [2024-07-14 10:44:56.285220] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.513 [2024-07-14 10:44:56.285278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.513 [2024-07-14 10:44:56.285292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.513 [2024-07-14 10:44:56.285299] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.513 [2024-07-14 10:44:56.285305] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:11.513 [2024-07-14 10:44:56.285319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.513 qpair failed and we were unable to recover it. 00:36:11.513 [2024-07-14 10:44:56.295255] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.513 [2024-07-14 10:44:56.295313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.513 [2024-07-14 10:44:56.295327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.513 [2024-07-14 10:44:56.295334] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.513 [2024-07-14 10:44:56.295340] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:11.513 [2024-07-14 10:44:56.295354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.513 qpair failed and we were unable to recover it. 00:36:11.513 [2024-07-14 10:44:56.305286] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.513 [2024-07-14 10:44:56.305338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.513 [2024-07-14 10:44:56.305352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.513 [2024-07-14 10:44:56.305358] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.513 [2024-07-14 10:44:56.305365] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:11.513 [2024-07-14 10:44:56.305379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.513 qpair failed and we were unable to recover it. 
00:36:11.513 [2024-07-14 10:44:56.315276] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.513 [2024-07-14 10:44:56.315373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.513 [2024-07-14 10:44:56.315387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.513 [2024-07-14 10:44:56.315394] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.513 [2024-07-14 10:44:56.315400] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:11.513 [2024-07-14 10:44:56.315417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.513 qpair failed and we were unable to recover it. 00:36:11.513 [2024-07-14 10:44:56.325336] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.513 [2024-07-14 10:44:56.325387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.513 [2024-07-14 10:44:56.325401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.513 [2024-07-14 10:44:56.325408] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.513 [2024-07-14 10:44:56.325414] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:11.513 [2024-07-14 10:44:56.325428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.513 qpair failed and we were unable to recover it. 00:36:11.513 [2024-07-14 10:44:56.335364] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.513 [2024-07-14 10:44:56.335431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.513 [2024-07-14 10:44:56.335445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.513 [2024-07-14 10:44:56.335452] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.513 [2024-07-14 10:44:56.335459] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:11.513 [2024-07-14 10:44:56.335473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.513 qpair failed and we were unable to recover it. 
00:36:11.513 [2024-07-14 10:44:56.345359] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.513 [2024-07-14 10:44:56.345423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.513 [2024-07-14 10:44:56.345441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.513 [2024-07-14 10:44:56.345450] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.513 [2024-07-14 10:44:56.345459] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:11.513 [2024-07-14 10:44:56.345474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.513 qpair failed and we were unable to recover it. 00:36:11.513 [2024-07-14 10:44:56.355408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.513 [2024-07-14 10:44:56.355464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.514 [2024-07-14 10:44:56.355481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.514 [2024-07-14 10:44:56.355488] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.514 [2024-07-14 10:44:56.355494] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:11.514 [2024-07-14 10:44:56.355509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.514 qpair failed and we were unable to recover it. 00:36:11.514 [2024-07-14 10:44:56.365469] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.514 [2024-07-14 10:44:56.365520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.514 [2024-07-14 10:44:56.365539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.514 [2024-07-14 10:44:56.365546] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.514 [2024-07-14 10:44:56.365554] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:11.514 [2024-07-14 10:44:56.365568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.514 qpair failed and we were unable to recover it. 
00:36:11.514 [2024-07-14 10:44:56.375498] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.514 [2024-07-14 10:44:56.375552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.514 [2024-07-14 10:44:56.375566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.514 [2024-07-14 10:44:56.375574] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.514 [2024-07-14 10:44:56.375581] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:11.514 [2024-07-14 10:44:56.375594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.514 qpair failed and we were unable to recover it. 00:36:11.514 [2024-07-14 10:44:56.385505] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.514 [2024-07-14 10:44:56.385569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.514 [2024-07-14 10:44:56.385584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.514 [2024-07-14 10:44:56.385590] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.514 [2024-07-14 10:44:56.385596] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:11.514 [2024-07-14 10:44:56.385610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.514 qpair failed and we were unable to recover it. 00:36:11.514 [2024-07-14 10:44:56.395531] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.514 [2024-07-14 10:44:56.395586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.514 [2024-07-14 10:44:56.395600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.514 [2024-07-14 10:44:56.395607] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.514 [2024-07-14 10:44:56.395613] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:11.514 [2024-07-14 10:44:56.395627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.514 qpair failed and we were unable to recover it. 
00:36:11.514 [2024-07-14 10:44:56.405610] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.514 [2024-07-14 10:44:56.405667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.514 [2024-07-14 10:44:56.405680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.514 [2024-07-14 10:44:56.405688] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.514 [2024-07-14 10:44:56.405697] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:11.514 [2024-07-14 10:44:56.405711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.514 qpair failed and we were unable to recover it. 00:36:11.514 [2024-07-14 10:44:56.415591] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.514 [2024-07-14 10:44:56.415647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.514 [2024-07-14 10:44:56.415661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.514 [2024-07-14 10:44:56.415668] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.514 [2024-07-14 10:44:56.415674] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:11.514 [2024-07-14 10:44:56.415688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.514 qpair failed and we were unable to recover it. 00:36:11.514 [2024-07-14 10:44:56.425547] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.514 [2024-07-14 10:44:56.425614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.514 [2024-07-14 10:44:56.425629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.514 [2024-07-14 10:44:56.425636] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.514 [2024-07-14 10:44:56.425642] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:11.514 [2024-07-14 10:44:56.425656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.514 qpair failed and we were unable to recover it. 
00:36:11.514 [2024-07-14 10:44:56.435648] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.514 [2024-07-14 10:44:56.435698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.514 [2024-07-14 10:44:56.435713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.514 [2024-07-14 10:44:56.435719] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.514 [2024-07-14 10:44:56.435725] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:11.514 [2024-07-14 10:44:56.435739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.514 qpair failed and we were unable to recover it. 00:36:11.514 [2024-07-14 10:44:56.445669] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.514 [2024-07-14 10:44:56.445724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.514 [2024-07-14 10:44:56.445738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.514 [2024-07-14 10:44:56.445745] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.514 [2024-07-14 10:44:56.445751] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:11.514 [2024-07-14 10:44:56.445765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.514 qpair failed and we were unable to recover it. 00:36:11.514 [2024-07-14 10:44:56.455710] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.514 [2024-07-14 10:44:56.455769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.514 [2024-07-14 10:44:56.455783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.514 [2024-07-14 10:44:56.455790] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.514 [2024-07-14 10:44:56.455798] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:11.514 [2024-07-14 10:44:56.455812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.514 qpair failed and we were unable to recover it. 
00:36:11.514 [2024-07-14 10:44:56.465764] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.514 [2024-07-14 10:44:56.465831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.514 [2024-07-14 10:44:56.465845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.514 [2024-07-14 10:44:56.465852] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.514 [2024-07-14 10:44:56.465858] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:11.514 [2024-07-14 10:44:56.465872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.514 qpair failed and we were unable to recover it. 00:36:11.514 [2024-07-14 10:44:56.475797] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.514 [2024-07-14 10:44:56.475864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.514 [2024-07-14 10:44:56.475878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.514 [2024-07-14 10:44:56.475885] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.514 [2024-07-14 10:44:56.475891] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:11.514 [2024-07-14 10:44:56.475905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.514 qpair failed and we were unable to recover it. 00:36:11.514 [2024-07-14 10:44:56.485838] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.514 [2024-07-14 10:44:56.485939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.514 [2024-07-14 10:44:56.485953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.514 [2024-07-14 10:44:56.485960] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.514 [2024-07-14 10:44:56.485967] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:11.514 [2024-07-14 10:44:56.485982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.514 qpair failed and we were unable to recover it. 
00:36:11.775 [2024-07-14 10:44:56.495821] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.775 [2024-07-14 10:44:56.495877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.775 [2024-07-14 10:44:56.495891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.775 [2024-07-14 10:44:56.495902] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.775 [2024-07-14 10:44:56.495908] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:11.775 [2024-07-14 10:44:56.495923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.775 qpair failed and we were unable to recover it. 00:36:11.775 [2024-07-14 10:44:56.505846] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.775 [2024-07-14 10:44:56.505902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.775 [2024-07-14 10:44:56.505916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.775 [2024-07-14 10:44:56.505922] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.775 [2024-07-14 10:44:56.505929] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:11.775 [2024-07-14 10:44:56.505943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.775 qpair failed and we were unable to recover it. 00:36:11.775 [2024-07-14 10:44:56.515869] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.775 [2024-07-14 10:44:56.515926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.775 [2024-07-14 10:44:56.515940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.775 [2024-07-14 10:44:56.515947] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.775 [2024-07-14 10:44:56.515954] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:11.775 [2024-07-14 10:44:56.515968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.775 qpair failed and we were unable to recover it. 
00:36:11.775 [2024-07-14 10:44:56.525898] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.775 [2024-07-14 10:44:56.525951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.775 [2024-07-14 10:44:56.525965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.775 [2024-07-14 10:44:56.525973] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.775 [2024-07-14 10:44:56.525979] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:11.775 [2024-07-14 10:44:56.525993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.775 qpair failed and we were unable to recover it. 00:36:11.775 [2024-07-14 10:44:56.535859] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.775 [2024-07-14 10:44:56.535916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.775 [2024-07-14 10:44:56.535930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.775 [2024-07-14 10:44:56.535938] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.775 [2024-07-14 10:44:56.535944] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:11.775 [2024-07-14 10:44:56.535958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.775 qpair failed and we were unable to recover it. 00:36:11.775 [2024-07-14 10:44:56.545984] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.775 [2024-07-14 10:44:56.546038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.775 [2024-07-14 10:44:56.546052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.775 [2024-07-14 10:44:56.546059] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.775 [2024-07-14 10:44:56.546065] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:11.775 [2024-07-14 10:44:56.546079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.775 qpair failed and we were unable to recover it. 
00:36:11.775 [2024-07-14 10:44:56.555987] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.775 [2024-07-14 10:44:56.556049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.775 [2024-07-14 10:44:56.556063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.775 [2024-07-14 10:44:56.556070] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.775 [2024-07-14 10:44:56.556076] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:11.775 [2024-07-14 10:44:56.556091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.775 qpair failed and we were unable to recover it. 00:36:11.775 [2024-07-14 10:44:56.566016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.775 [2024-07-14 10:44:56.566073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.775 [2024-07-14 10:44:56.566087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.775 [2024-07-14 10:44:56.566094] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.775 [2024-07-14 10:44:56.566100] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:11.775 [2024-07-14 10:44:56.566114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.775 qpair failed and we were unable to recover it. 00:36:11.775 [2024-07-14 10:44:56.576021] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.775 [2024-07-14 10:44:56.576089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.775 [2024-07-14 10:44:56.576104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.775 [2024-07-14 10:44:56.576111] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.775 [2024-07-14 10:44:56.576117] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:11.775 [2024-07-14 10:44:56.576131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.775 qpair failed and we were unable to recover it. 
00:36:11.775 [2024-07-14 10:44:56.586061] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.775 [2024-07-14 10:44:56.586118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.775 [2024-07-14 10:44:56.586133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.775 [2024-07-14 10:44:56.586144] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.775 [2024-07-14 10:44:56.586150] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:11.775 [2024-07-14 10:44:56.586165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.775 qpair failed and we were unable to recover it. 00:36:11.775 [2024-07-14 10:44:56.596104] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.775 [2024-07-14 10:44:56.596160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.776 [2024-07-14 10:44:56.596175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.776 [2024-07-14 10:44:56.596182] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.776 [2024-07-14 10:44:56.596188] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:11.776 [2024-07-14 10:44:56.596202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.776 qpair failed and we were unable to recover it. 00:36:11.776 [2024-07-14 10:44:56.606165] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.776 [2024-07-14 10:44:56.606222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.776 [2024-07-14 10:44:56.606240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.776 [2024-07-14 10:44:56.606247] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.776 [2024-07-14 10:44:56.606253] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:11.776 [2024-07-14 10:44:56.606268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.776 qpair failed and we were unable to recover it. 
00:36:11.776 [2024-07-14 10:44:56.616135] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.776 [2024-07-14 10:44:56.616193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.776 [2024-07-14 10:44:56.616207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.776 [2024-07-14 10:44:56.616215] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.776 [2024-07-14 10:44:56.616222] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:11.776 [2024-07-14 10:44:56.616242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.776 qpair failed and we were unable to recover it. 00:36:11.776 [2024-07-14 10:44:56.626113] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.776 [2024-07-14 10:44:56.626186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.776 [2024-07-14 10:44:56.626200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.776 [2024-07-14 10:44:56.626207] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.776 [2024-07-14 10:44:56.626213] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:11.776 [2024-07-14 10:44:56.626233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.776 qpair failed and we were unable to recover it. 00:36:11.776 [2024-07-14 10:44:56.636155] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.776 [2024-07-14 10:44:56.636208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.776 [2024-07-14 10:44:56.636223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.776 [2024-07-14 10:44:56.636233] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.776 [2024-07-14 10:44:56.636240] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:11.776 [2024-07-14 10:44:56.636254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.776 qpair failed and we were unable to recover it. 
00:36:11.776 [2024-07-14 10:44:56.646246] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.776 [2024-07-14 10:44:56.646319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.776 [2024-07-14 10:44:56.646334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.776 [2024-07-14 10:44:56.646341] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.776 [2024-07-14 10:44:56.646347] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:11.776 [2024-07-14 10:44:56.646362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.776 qpair failed and we were unable to recover it. 00:36:11.776 [2024-07-14 10:44:56.656264] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.776 [2024-07-14 10:44:56.656322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.776 [2024-07-14 10:44:56.656337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.776 [2024-07-14 10:44:56.656345] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.776 [2024-07-14 10:44:56.656351] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:11.776 [2024-07-14 10:44:56.656368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.776 qpair failed and we were unable to recover it. 00:36:11.776 [2024-07-14 10:44:56.666305] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.776 [2024-07-14 10:44:56.666361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.776 [2024-07-14 10:44:56.666376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.776 [2024-07-14 10:44:56.666384] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.776 [2024-07-14 10:44:56.666390] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:11.776 [2024-07-14 10:44:56.666405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.776 qpair failed and we were unable to recover it. 
00:36:11.776 [2024-07-14 10:44:56.676366] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.776 [2024-07-14 10:44:56.676469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.776 [2024-07-14 10:44:56.676489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.776 [2024-07-14 10:44:56.676496] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.776 [2024-07-14 10:44:56.676502] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:11.776 [2024-07-14 10:44:56.676517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.776 qpair failed and we were unable to recover it. 00:36:11.776 [2024-07-14 10:44:56.686363] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.776 [2024-07-14 10:44:56.686425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.776 [2024-07-14 10:44:56.686440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.776 [2024-07-14 10:44:56.686446] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.776 [2024-07-14 10:44:56.686452] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:11.776 [2024-07-14 10:44:56.686466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.776 qpair failed and we were unable to recover it. 00:36:11.776 [2024-07-14 10:44:56.696429] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.776 [2024-07-14 10:44:56.696512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.776 [2024-07-14 10:44:56.696526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.776 [2024-07-14 10:44:56.696533] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.776 [2024-07-14 10:44:56.696539] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:11.776 [2024-07-14 10:44:56.696553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.776 qpair failed and we were unable to recover it. 
00:36:11.776 [2024-07-14 10:44:56.706406] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.776 [2024-07-14 10:44:56.706460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.776 [2024-07-14 10:44:56.706474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.776 [2024-07-14 10:44:56.706481] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.776 [2024-07-14 10:44:56.706488] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:11.776 [2024-07-14 10:44:56.706502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.776 qpair failed and we were unable to recover it. 00:36:11.776 [2024-07-14 10:44:56.716353] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.776 [2024-07-14 10:44:56.716408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.776 [2024-07-14 10:44:56.716423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.776 [2024-07-14 10:44:56.716430] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.776 [2024-07-14 10:44:56.716436] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:11.776 [2024-07-14 10:44:56.716453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.776 qpair failed and we were unable to recover it. 00:36:11.776 [2024-07-14 10:44:56.726435] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.776 [2024-07-14 10:44:56.726498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.776 [2024-07-14 10:44:56.726512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.776 [2024-07-14 10:44:56.726519] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.776 [2024-07-14 10:44:56.726526] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:11.776 [2024-07-14 10:44:56.726540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.776 qpair failed and we were unable to recover it. 
00:36:11.776 [2024-07-14 10:44:56.736485] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.777 [2024-07-14 10:44:56.736539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.777 [2024-07-14 10:44:56.736553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.777 [2024-07-14 10:44:56.736560] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.777 [2024-07-14 10:44:56.736567] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:11.777 [2024-07-14 10:44:56.736581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.777 qpair failed and we were unable to recover it. 00:36:11.777 [2024-07-14 10:44:56.746523] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.777 [2024-07-14 10:44:56.746583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.777 [2024-07-14 10:44:56.746598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.777 [2024-07-14 10:44:56.746605] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.777 [2024-07-14 10:44:56.746612] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:11.777 [2024-07-14 10:44:56.746626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.777 qpair failed and we were unable to recover it. 00:36:12.038 [2024-07-14 10:44:56.756518] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.038 [2024-07-14 10:44:56.756583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.038 [2024-07-14 10:44:56.756597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.038 [2024-07-14 10:44:56.756604] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.038 [2024-07-14 10:44:56.756611] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.038 [2024-07-14 10:44:56.756625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.038 qpair failed and we were unable to recover it. 
00:36:12.038 [2024-07-14 10:44:56.766533] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.038 [2024-07-14 10:44:56.766586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.038 [2024-07-14 10:44:56.766604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.038 [2024-07-14 10:44:56.766611] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.038 [2024-07-14 10:44:56.766617] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.038 [2024-07-14 10:44:56.766631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.038 qpair failed and we were unable to recover it. 00:36:12.038 [2024-07-14 10:44:56.776528] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.038 [2024-07-14 10:44:56.776591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.038 [2024-07-14 10:44:56.776606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.038 [2024-07-14 10:44:56.776614] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.038 [2024-07-14 10:44:56.776620] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.038 [2024-07-14 10:44:56.776634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.038 qpair failed and we were unable to recover it. 00:36:12.038 [2024-07-14 10:44:56.786681] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.038 [2024-07-14 10:44:56.786743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.038 [2024-07-14 10:44:56.786757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.038 [2024-07-14 10:44:56.786764] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.038 [2024-07-14 10:44:56.786771] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.038 [2024-07-14 10:44:56.786786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.038 qpair failed and we were unable to recover it. 
00:36:12.038 [2024-07-14 10:44:56.796621] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.038 [2024-07-14 10:44:56.796680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.038 [2024-07-14 10:44:56.796695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.038 [2024-07-14 10:44:56.796702] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.038 [2024-07-14 10:44:56.796709] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.038 [2024-07-14 10:44:56.796723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.038 qpair failed and we were unable to recover it. 00:36:12.038 [2024-07-14 10:44:56.806722] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.038 [2024-07-14 10:44:56.806777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.038 [2024-07-14 10:44:56.806792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.038 [2024-07-14 10:44:56.806799] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.038 [2024-07-14 10:44:56.806812] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.038 [2024-07-14 10:44:56.806826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.038 qpair failed and we were unable to recover it. 00:36:12.038 [2024-07-14 10:44:56.816721] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.038 [2024-07-14 10:44:56.816779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.038 [2024-07-14 10:44:56.816793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.038 [2024-07-14 10:44:56.816800] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.038 [2024-07-14 10:44:56.816807] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.038 [2024-07-14 10:44:56.816821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.038 qpair failed and we were unable to recover it. 
00:36:12.038 [2024-07-14 10:44:56.826758] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.038 [2024-07-14 10:44:56.826837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.038 [2024-07-14 10:44:56.826851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.038 [2024-07-14 10:44:56.826858] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.038 [2024-07-14 10:44:56.826865] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.038 [2024-07-14 10:44:56.826879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.038 qpair failed and we were unable to recover it. 00:36:12.038 [2024-07-14 10:44:56.836762] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.038 [2024-07-14 10:44:56.836818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.038 [2024-07-14 10:44:56.836832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.038 [2024-07-14 10:44:56.836840] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.038 [2024-07-14 10:44:56.836846] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.038 [2024-07-14 10:44:56.836861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.038 qpair failed and we were unable to recover it. 00:36:12.038 [2024-07-14 10:44:56.846739] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.038 [2024-07-14 10:44:56.846797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.038 [2024-07-14 10:44:56.846811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.038 [2024-07-14 10:44:56.846819] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.038 [2024-07-14 10:44:56.846826] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.038 [2024-07-14 10:44:56.846840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.038 qpair failed and we were unable to recover it. 
00:36:12.038 [2024-07-14 10:44:56.856782] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.038 [2024-07-14 10:44:56.856841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.038 [2024-07-14 10:44:56.856855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.038 [2024-07-14 10:44:56.856863] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.038 [2024-07-14 10:44:56.856869] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.038 [2024-07-14 10:44:56.856883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.038 qpair failed and we were unable to recover it. 00:36:12.038 [2024-07-14 10:44:56.866881] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.038 [2024-07-14 10:44:56.866940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.038 [2024-07-14 10:44:56.866954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.038 [2024-07-14 10:44:56.866962] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.038 [2024-07-14 10:44:56.866968] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.038 [2024-07-14 10:44:56.866982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.038 qpair failed and we were unable to recover it. 00:36:12.038 [2024-07-14 10:44:56.876851] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.038 [2024-07-14 10:44:56.876905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.038 [2024-07-14 10:44:56.876919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.038 [2024-07-14 10:44:56.876926] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.038 [2024-07-14 10:44:56.876932] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.038 [2024-07-14 10:44:56.876946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.038 qpair failed and we were unable to recover it. 
00:36:12.039 [2024-07-14 10:44:56.886903] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.039 [2024-07-14 10:44:56.886966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.039 [2024-07-14 10:44:56.886980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.039 [2024-07-14 10:44:56.886989] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.039 [2024-07-14 10:44:56.886995] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.039 [2024-07-14 10:44:56.887009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.039 qpair failed and we were unable to recover it. 00:36:12.039 [2024-07-14 10:44:56.896904] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.039 [2024-07-14 10:44:56.896964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.039 [2024-07-14 10:44:56.896979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.039 [2024-07-14 10:44:56.896985] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.039 [2024-07-14 10:44:56.896995] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.039 [2024-07-14 10:44:56.897009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.039 qpair failed and we were unable to recover it. 00:36:12.039 [2024-07-14 10:44:56.907031] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.039 [2024-07-14 10:44:56.907093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.039 [2024-07-14 10:44:56.907107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.039 [2024-07-14 10:44:56.907114] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.039 [2024-07-14 10:44:56.907120] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.039 [2024-07-14 10:44:56.907134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.039 qpair failed and we were unable to recover it. 
00:36:12.039 [2024-07-14 10:44:56.917076] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.039 [2024-07-14 10:44:56.917132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.039 [2024-07-14 10:44:56.917146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.039 [2024-07-14 10:44:56.917153] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.039 [2024-07-14 10:44:56.917159] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.039 [2024-07-14 10:44:56.917173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.039 qpair failed and we were unable to recover it. 00:36:12.039 [2024-07-14 10:44:56.927057] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.039 [2024-07-14 10:44:56.927112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.039 [2024-07-14 10:44:56.927126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.039 [2024-07-14 10:44:56.927134] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.039 [2024-07-14 10:44:56.927140] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.039 [2024-07-14 10:44:56.927155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.039 qpair failed and we were unable to recover it. 00:36:12.039 [2024-07-14 10:44:56.937132] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.039 [2024-07-14 10:44:56.937209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.039 [2024-07-14 10:44:56.937228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.039 [2024-07-14 10:44:56.937236] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.039 [2024-07-14 10:44:56.937242] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.039 [2024-07-14 10:44:56.937257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.039 qpair failed and we were unable to recover it. 
00:36:12.039 [2024-07-14 10:44:56.947116] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.039 [2024-07-14 10:44:56.947170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.039 [2024-07-14 10:44:56.947184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.039 [2024-07-14 10:44:56.947191] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.039 [2024-07-14 10:44:56.947197] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.039 [2024-07-14 10:44:56.947211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.039 qpair failed and we were unable to recover it. 00:36:12.039 [2024-07-14 10:44:56.957150] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.039 [2024-07-14 10:44:56.957206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.039 [2024-07-14 10:44:56.957220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.039 [2024-07-14 10:44:56.957231] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.039 [2024-07-14 10:44:56.957238] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.039 [2024-07-14 10:44:56.957253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.039 qpair failed and we were unable to recover it. 00:36:12.039 [2024-07-14 10:44:56.967125] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.039 [2024-07-14 10:44:56.967179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.039 [2024-07-14 10:44:56.967193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.039 [2024-07-14 10:44:56.967199] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.039 [2024-07-14 10:44:56.967206] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.039 [2024-07-14 10:44:56.967220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.039 qpair failed and we were unable to recover it. 
00:36:12.039 [2024-07-14 10:44:56.977209] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.039 [2024-07-14 10:44:56.977271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.039 [2024-07-14 10:44:56.977286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.039 [2024-07-14 10:44:56.977293] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.039 [2024-07-14 10:44:56.977299] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.039 [2024-07-14 10:44:56.977313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.039 qpair failed and we were unable to recover it. 00:36:12.039 [2024-07-14 10:44:56.987252] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.039 [2024-07-14 10:44:56.987310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.039 [2024-07-14 10:44:56.987324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.039 [2024-07-14 10:44:56.987333] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.039 [2024-07-14 10:44:56.987339] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.039 [2024-07-14 10:44:56.987354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.039 qpair failed and we were unable to recover it. 00:36:12.039 [2024-07-14 10:44:56.997272] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.039 [2024-07-14 10:44:56.997327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.039 [2024-07-14 10:44:56.997341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.039 [2024-07-14 10:44:56.997349] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.039 [2024-07-14 10:44:56.997355] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.039 [2024-07-14 10:44:56.997369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.039 qpair failed and we were unable to recover it. 
00:36:12.039 [2024-07-14 10:44:57.007291] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.039 [2024-07-14 10:44:57.007350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.039 [2024-07-14 10:44:57.007365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.039 [2024-07-14 10:44:57.007372] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.039 [2024-07-14 10:44:57.007378] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.039 [2024-07-14 10:44:57.007392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.039 qpair failed and we were unable to recover it. 00:36:12.300 [2024-07-14 10:44:57.017320] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.300 [2024-07-14 10:44:57.017387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.300 [2024-07-14 10:44:57.017402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.300 [2024-07-14 10:44:57.017409] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.300 [2024-07-14 10:44:57.017415] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.300 [2024-07-14 10:44:57.017430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.300 qpair failed and we were unable to recover it. 00:36:12.300 [2024-07-14 10:44:57.027362] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.300 [2024-07-14 10:44:57.027419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.300 [2024-07-14 10:44:57.027433] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.300 [2024-07-14 10:44:57.027440] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.300 [2024-07-14 10:44:57.027446] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.300 [2024-07-14 10:44:57.027461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.300 qpair failed and we were unable to recover it. 
00:36:12.300 [2024-07-14 10:44:57.037370] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.300 [2024-07-14 10:44:57.037426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.300 [2024-07-14 10:44:57.037441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.300 [2024-07-14 10:44:57.037448] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.301 [2024-07-14 10:44:57.037454] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.301 [2024-07-14 10:44:57.037468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.301 qpair failed and we were unable to recover it. 00:36:12.301 [2024-07-14 10:44:57.047409] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.301 [2024-07-14 10:44:57.047462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.301 [2024-07-14 10:44:57.047476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.301 [2024-07-14 10:44:57.047483] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.301 [2024-07-14 10:44:57.047489] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.301 [2024-07-14 10:44:57.047503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.301 qpair failed and we were unable to recover it. 00:36:12.301 [2024-07-14 10:44:57.057439] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.301 [2024-07-14 10:44:57.057540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.301 [2024-07-14 10:44:57.057555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.301 [2024-07-14 10:44:57.057563] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.301 [2024-07-14 10:44:57.057570] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.301 [2024-07-14 10:44:57.057585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.301 qpair failed and we were unable to recover it. 
00:36:12.301 [2024-07-14 10:44:57.067471] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.301 [2024-07-14 10:44:57.067531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.301 [2024-07-14 10:44:57.067546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.301 [2024-07-14 10:44:57.067553] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.301 [2024-07-14 10:44:57.067559] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.301 [2024-07-14 10:44:57.067574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.301 qpair failed and we were unable to recover it. 00:36:12.301 [2024-07-14 10:44:57.077502] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.301 [2024-07-14 10:44:57.077558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.301 [2024-07-14 10:44:57.077575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.301 [2024-07-14 10:44:57.077582] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.301 [2024-07-14 10:44:57.077588] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.301 [2024-07-14 10:44:57.077602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.301 qpair failed and we were unable to recover it. 00:36:12.301 [2024-07-14 10:44:57.087536] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.301 [2024-07-14 10:44:57.087592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.301 [2024-07-14 10:44:57.087606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.301 [2024-07-14 10:44:57.087613] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.301 [2024-07-14 10:44:57.087619] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.301 [2024-07-14 10:44:57.087634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.301 qpair failed and we were unable to recover it. 
00:36:12.301 [2024-07-14 10:44:57.097562] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.301 [2024-07-14 10:44:57.097619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.301 [2024-07-14 10:44:57.097633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.301 [2024-07-14 10:44:57.097640] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.301 [2024-07-14 10:44:57.097647] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.301 [2024-07-14 10:44:57.097661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.301 qpair failed and we were unable to recover it. 00:36:12.301 [2024-07-14 10:44:57.107587] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.301 [2024-07-14 10:44:57.107645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.301 [2024-07-14 10:44:57.107659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.301 [2024-07-14 10:44:57.107666] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.301 [2024-07-14 10:44:57.107672] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.301 [2024-07-14 10:44:57.107686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.301 qpair failed and we were unable to recover it. 00:36:12.301 [2024-07-14 10:44:57.117633] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.301 [2024-07-14 10:44:57.117688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.301 [2024-07-14 10:44:57.117702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.301 [2024-07-14 10:44:57.117710] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.301 [2024-07-14 10:44:57.117716] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.301 [2024-07-14 10:44:57.117733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.301 qpair failed and we were unable to recover it. 
00:36:12.301 [2024-07-14 10:44:57.127667] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.301 [2024-07-14 10:44:57.127719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.301 [2024-07-14 10:44:57.127733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.301 [2024-07-14 10:44:57.127740] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.301 [2024-07-14 10:44:57.127746] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.301 [2024-07-14 10:44:57.127760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.301 qpair failed and we were unable to recover it. 00:36:12.301 [2024-07-14 10:44:57.137684] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.301 [2024-07-14 10:44:57.137737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.301 [2024-07-14 10:44:57.137751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.301 [2024-07-14 10:44:57.137758] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.301 [2024-07-14 10:44:57.137765] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.301 [2024-07-14 10:44:57.137779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.301 qpair failed and we were unable to recover it. 00:36:12.301 [2024-07-14 10:44:57.147719] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.301 [2024-07-14 10:44:57.147779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.301 [2024-07-14 10:44:57.147802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.301 [2024-07-14 10:44:57.147808] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.301 [2024-07-14 10:44:57.147815] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.301 [2024-07-14 10:44:57.147830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.301 qpair failed and we were unable to recover it. 
00:36:12.301 [2024-07-14 10:44:57.157743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.301 [2024-07-14 10:44:57.157804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.301 [2024-07-14 10:44:57.157819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.301 [2024-07-14 10:44:57.157825] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.301 [2024-07-14 10:44:57.157831] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.301 [2024-07-14 10:44:57.157845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.301 qpair failed and we were unable to recover it. 00:36:12.301 [2024-07-14 10:44:57.167771] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.301 [2024-07-14 10:44:57.167826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.301 [2024-07-14 10:44:57.167843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.301 [2024-07-14 10:44:57.167850] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.301 [2024-07-14 10:44:57.167855] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.301 [2024-07-14 10:44:57.167869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.301 qpair failed and we were unable to recover it. 00:36:12.301 [2024-07-14 10:44:57.177808] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.301 [2024-07-14 10:44:57.177865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.301 [2024-07-14 10:44:57.177880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.301 [2024-07-14 10:44:57.177886] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.301 [2024-07-14 10:44:57.177893] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.301 [2024-07-14 10:44:57.177907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.301 qpair failed and we were unable to recover it. 
00:36:12.302 [2024-07-14 10:44:57.187840] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.302 [2024-07-14 10:44:57.187897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.302 [2024-07-14 10:44:57.187911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.302 [2024-07-14 10:44:57.187918] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.302 [2024-07-14 10:44:57.187924] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.302 [2024-07-14 10:44:57.187938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.302 qpair failed and we were unable to recover it. 00:36:12.302 [2024-07-14 10:44:57.197872] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.302 [2024-07-14 10:44:57.197933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.302 [2024-07-14 10:44:57.197948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.302 [2024-07-14 10:44:57.197955] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.302 [2024-07-14 10:44:57.197961] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.302 [2024-07-14 10:44:57.197974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.302 qpair failed and we were unable to recover it. 00:36:12.302 [2024-07-14 10:44:57.207882] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.302 [2024-07-14 10:44:57.207942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.302 [2024-07-14 10:44:57.207956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.302 [2024-07-14 10:44:57.207964] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.302 [2024-07-14 10:44:57.207970] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.302 [2024-07-14 10:44:57.207987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.302 qpair failed and we were unable to recover it. 
00:36:12.302 [2024-07-14 10:44:57.217951] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.302 [2024-07-14 10:44:57.218010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.302 [2024-07-14 10:44:57.218024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.302 [2024-07-14 10:44:57.218031] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.302 [2024-07-14 10:44:57.218037] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.302 [2024-07-14 10:44:57.218051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.302 qpair failed and we were unable to recover it. 00:36:12.302 [2024-07-14 10:44:57.227915] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.302 [2024-07-14 10:44:57.227974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.302 [2024-07-14 10:44:57.227988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.302 [2024-07-14 10:44:57.227995] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.302 [2024-07-14 10:44:57.228001] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.302 [2024-07-14 10:44:57.228015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.302 qpair failed and we were unable to recover it. 00:36:12.302 [2024-07-14 10:44:57.237975] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.302 [2024-07-14 10:44:57.238025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.302 [2024-07-14 10:44:57.238040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.302 [2024-07-14 10:44:57.238047] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.302 [2024-07-14 10:44:57.238054] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.302 [2024-07-14 10:44:57.238069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.302 qpair failed and we were unable to recover it. 
00:36:12.302 [2024-07-14 10:44:57.248041] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.302 [2024-07-14 10:44:57.248145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.302 [2024-07-14 10:44:57.248159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.302 [2024-07-14 10:44:57.248167] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.302 [2024-07-14 10:44:57.248174] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.302 [2024-07-14 10:44:57.248189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.302 qpair failed and we were unable to recover it. 00:36:12.302 [2024-07-14 10:44:57.258040] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.302 [2024-07-14 10:44:57.258100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.302 [2024-07-14 10:44:57.258115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.302 [2024-07-14 10:44:57.258123] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.302 [2024-07-14 10:44:57.258130] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.302 [2024-07-14 10:44:57.258144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.302 qpair failed and we were unable to recover it. 00:36:12.302 [2024-07-14 10:44:57.268076] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.302 [2024-07-14 10:44:57.268139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.302 [2024-07-14 10:44:57.268153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.302 [2024-07-14 10:44:57.268160] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.302 [2024-07-14 10:44:57.268166] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.302 [2024-07-14 10:44:57.268180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.302 qpair failed and we were unable to recover it. 
00:36:12.302 [2024-07-14 10:44:57.278091] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.302 [2024-07-14 10:44:57.278156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.302 [2024-07-14 10:44:57.278171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.302 [2024-07-14 10:44:57.278178] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.302 [2024-07-14 10:44:57.278184] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.302 [2024-07-14 10:44:57.278198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.302 qpair failed and we were unable to recover it. 00:36:12.563 [2024-07-14 10:44:57.288111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.563 [2024-07-14 10:44:57.288194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.563 [2024-07-14 10:44:57.288209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.563 [2024-07-14 10:44:57.288216] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.563 [2024-07-14 10:44:57.288228] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.563 [2024-07-14 10:44:57.288244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.563 qpair failed and we were unable to recover it. 00:36:12.563 [2024-07-14 10:44:57.298146] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.563 [2024-07-14 10:44:57.298203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.563 [2024-07-14 10:44:57.298217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.563 [2024-07-14 10:44:57.298227] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.563 [2024-07-14 10:44:57.298237] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.563 [2024-07-14 10:44:57.298251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.563 qpair failed and we were unable to recover it. 
00:36:12.563 [2024-07-14 10:44:57.308163] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.563 [2024-07-14 10:44:57.308215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.563 [2024-07-14 10:44:57.308233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.563 [2024-07-14 10:44:57.308241] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.563 [2024-07-14 10:44:57.308247] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.563 [2024-07-14 10:44:57.308262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.563 qpair failed and we were unable to recover it. 00:36:12.563 [2024-07-14 10:44:57.318192] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.563 [2024-07-14 10:44:57.318249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.563 [2024-07-14 10:44:57.318264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.563 [2024-07-14 10:44:57.318271] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.563 [2024-07-14 10:44:57.318277] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.563 [2024-07-14 10:44:57.318292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.563 qpair failed and we were unable to recover it. 00:36:12.563 [2024-07-14 10:44:57.328243] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.563 [2024-07-14 10:44:57.328300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.563 [2024-07-14 10:44:57.328314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.563 [2024-07-14 10:44:57.328321] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.563 [2024-07-14 10:44:57.328327] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.563 [2024-07-14 10:44:57.328341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.563 qpair failed and we were unable to recover it. 
00:36:12.563 [2024-07-14 10:44:57.338270] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.563 [2024-07-14 10:44:57.338325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.563 [2024-07-14 10:44:57.338339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.563 [2024-07-14 10:44:57.338347] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.563 [2024-07-14 10:44:57.338354] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.563 [2024-07-14 10:44:57.338368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.563 qpair failed and we were unable to recover it. 00:36:12.563 [2024-07-14 10:44:57.348282] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.563 [2024-07-14 10:44:57.348338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.563 [2024-07-14 10:44:57.348353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.563 [2024-07-14 10:44:57.348360] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.563 [2024-07-14 10:44:57.348366] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.563 [2024-07-14 10:44:57.348380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.563 qpair failed and we were unable to recover it. 00:36:12.563 [2024-07-14 10:44:57.358309] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.563 [2024-07-14 10:44:57.358364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.563 [2024-07-14 10:44:57.358379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.563 [2024-07-14 10:44:57.358386] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.563 [2024-07-14 10:44:57.358392] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.563 [2024-07-14 10:44:57.358407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.563 qpair failed and we were unable to recover it. 
00:36:12.563 [2024-07-14 10:44:57.368284] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.563 [2024-07-14 10:44:57.368340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.563 [2024-07-14 10:44:57.368355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.563 [2024-07-14 10:44:57.368362] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.563 [2024-07-14 10:44:57.368368] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.563 [2024-07-14 10:44:57.368383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.563 qpair failed and we were unable to recover it. 00:36:12.563 [2024-07-14 10:44:57.378380] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.563 [2024-07-14 10:44:57.378434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.563 [2024-07-14 10:44:57.378449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.563 [2024-07-14 10:44:57.378457] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.563 [2024-07-14 10:44:57.378464] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.563 [2024-07-14 10:44:57.378479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.563 qpair failed and we were unable to recover it. 00:36:12.563 [2024-07-14 10:44:57.388402] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.563 [2024-07-14 10:44:57.388459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.563 [2024-07-14 10:44:57.388473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.563 [2024-07-14 10:44:57.388484] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.563 [2024-07-14 10:44:57.388490] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.563 [2024-07-14 10:44:57.388504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.563 qpair failed and we were unable to recover it. 
00:36:12.563 [2024-07-14 10:44:57.398426] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.564 [2024-07-14 10:44:57.398512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.564 [2024-07-14 10:44:57.398526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.564 [2024-07-14 10:44:57.398533] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.564 [2024-07-14 10:44:57.398539] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.564 [2024-07-14 10:44:57.398554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.564 qpair failed and we were unable to recover it. 00:36:12.564 [2024-07-14 10:44:57.408463] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.564 [2024-07-14 10:44:57.408521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.564 [2024-07-14 10:44:57.408538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.564 [2024-07-14 10:44:57.408545] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.564 [2024-07-14 10:44:57.408551] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.564 [2024-07-14 10:44:57.408566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.564 qpair failed and we were unable to recover it. 00:36:12.564 [2024-07-14 10:44:57.418502] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.564 [2024-07-14 10:44:57.418559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.564 [2024-07-14 10:44:57.418573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.564 [2024-07-14 10:44:57.418580] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.564 [2024-07-14 10:44:57.418587] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.564 [2024-07-14 10:44:57.418601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.564 qpair failed and we were unable to recover it. 
00:36:12.564 [2024-07-14 10:44:57.428509] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.564 [2024-07-14 10:44:57.428567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.564 [2024-07-14 10:44:57.428581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.564 [2024-07-14 10:44:57.428588] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.564 [2024-07-14 10:44:57.428594] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.564 [2024-07-14 10:44:57.428608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.564 qpair failed and we were unable to recover it. 00:36:12.564 [2024-07-14 10:44:57.438551] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.564 [2024-07-14 10:44:57.438628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.564 [2024-07-14 10:44:57.438643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.564 [2024-07-14 10:44:57.438650] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.564 [2024-07-14 10:44:57.438656] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.564 [2024-07-14 10:44:57.438670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.564 qpair failed and we were unable to recover it. 00:36:12.564 [2024-07-14 10:44:57.448566] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.564 [2024-07-14 10:44:57.448618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.564 [2024-07-14 10:44:57.448633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.564 [2024-07-14 10:44:57.448639] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.564 [2024-07-14 10:44:57.448646] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.564 [2024-07-14 10:44:57.448659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.564 qpair failed and we were unable to recover it. 
00:36:12.564 [2024-07-14 10:44:57.458605] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.564 [2024-07-14 10:44:57.458665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.564 [2024-07-14 10:44:57.458679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.564 [2024-07-14 10:44:57.458686] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.564 [2024-07-14 10:44:57.458692] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.564 [2024-07-14 10:44:57.458706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.564 qpair failed and we were unable to recover it. 00:36:12.564 [2024-07-14 10:44:57.468628] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.564 [2024-07-14 10:44:57.468687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.564 [2024-07-14 10:44:57.468701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.564 [2024-07-14 10:44:57.468708] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.564 [2024-07-14 10:44:57.468714] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.564 [2024-07-14 10:44:57.468728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.564 qpair failed and we were unable to recover it. 00:36:12.564 [2024-07-14 10:44:57.478657] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.564 [2024-07-14 10:44:57.478712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.564 [2024-07-14 10:44:57.478729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.564 [2024-07-14 10:44:57.478736] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.564 [2024-07-14 10:44:57.478743] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.564 [2024-07-14 10:44:57.478757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.564 qpair failed and we were unable to recover it. 
00:36:12.564 [2024-07-14 10:44:57.488681] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.564 [2024-07-14 10:44:57.488736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.564 [2024-07-14 10:44:57.488750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.564 [2024-07-14 10:44:57.488757] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.564 [2024-07-14 10:44:57.488763] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.564 [2024-07-14 10:44:57.488777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.564 qpair failed and we were unable to recover it. 00:36:12.564 [2024-07-14 10:44:57.498698] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.564 [2024-07-14 10:44:57.498751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.564 [2024-07-14 10:44:57.498765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.564 [2024-07-14 10:44:57.498771] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.564 [2024-07-14 10:44:57.498778] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.564 [2024-07-14 10:44:57.498792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.564 qpair failed and we were unable to recover it. 00:36:12.564 [2024-07-14 10:44:57.508741] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.564 [2024-07-14 10:44:57.508795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.564 [2024-07-14 10:44:57.508809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.564 [2024-07-14 10:44:57.508816] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.564 [2024-07-14 10:44:57.508822] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.564 [2024-07-14 10:44:57.508836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.564 qpair failed and we were unable to recover it. 
00:36:12.564 [2024-07-14 10:44:57.518779] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.564 [2024-07-14 10:44:57.518835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.564 [2024-07-14 10:44:57.518849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.564 [2024-07-14 10:44:57.518857] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.564 [2024-07-14 10:44:57.518863] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.564 [2024-07-14 10:44:57.518877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.564 qpair failed and we were unable to recover it. 00:36:12.564 [2024-07-14 10:44:57.528857] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.564 [2024-07-14 10:44:57.528911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.564 [2024-07-14 10:44:57.528926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.564 [2024-07-14 10:44:57.528933] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.564 [2024-07-14 10:44:57.528939] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.564 [2024-07-14 10:44:57.528953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.564 qpair failed and we were unable to recover it. 00:36:12.564 [2024-07-14 10:44:57.538823] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.564 [2024-07-14 10:44:57.538882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.564 [2024-07-14 10:44:57.538896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.565 [2024-07-14 10:44:57.538903] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.565 [2024-07-14 10:44:57.538909] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.565 [2024-07-14 10:44:57.538923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.565 qpair failed and we were unable to recover it. 
00:36:12.825 [2024-07-14 10:44:57.548870] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.825 [2024-07-14 10:44:57.548930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.825 [2024-07-14 10:44:57.548945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.826 [2024-07-14 10:44:57.548952] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.826 [2024-07-14 10:44:57.548958] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.826 [2024-07-14 10:44:57.548973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.826 qpair failed and we were unable to recover it. 00:36:12.826 [2024-07-14 10:44:57.558871] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.826 [2024-07-14 10:44:57.558925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.826 [2024-07-14 10:44:57.558940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.826 [2024-07-14 10:44:57.558946] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.826 [2024-07-14 10:44:57.558953] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.826 [2024-07-14 10:44:57.558967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.826 qpair failed and we were unable to recover it. 00:36:12.826 [2024-07-14 10:44:57.568927] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.826 [2024-07-14 10:44:57.568975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.826 [2024-07-14 10:44:57.568996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.826 [2024-07-14 10:44:57.569004] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.826 [2024-07-14 10:44:57.569010] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.826 [2024-07-14 10:44:57.569024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.826 qpair failed and we were unable to recover it. 
00:36:12.826 [2024-07-14 10:44:57.578936] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.826 [2024-07-14 10:44:57.578996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.826 [2024-07-14 10:44:57.579010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.826 [2024-07-14 10:44:57.579018] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.826 [2024-07-14 10:44:57.579023] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.826 [2024-07-14 10:44:57.579038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.826 qpair failed and we were unable to recover it. 00:36:12.826 [2024-07-14 10:44:57.589006] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.826 [2024-07-14 10:44:57.589068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.826 [2024-07-14 10:44:57.589082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.826 [2024-07-14 10:44:57.589089] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.826 [2024-07-14 10:44:57.589095] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.826 [2024-07-14 10:44:57.589109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.826 qpair failed and we were unable to recover it. 00:36:12.826 [2024-07-14 10:44:57.598983] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.826 [2024-07-14 10:44:57.599034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.826 [2024-07-14 10:44:57.599048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.826 [2024-07-14 10:44:57.599054] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.826 [2024-07-14 10:44:57.599061] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.826 [2024-07-14 10:44:57.599076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.826 qpair failed and we were unable to recover it. 
00:36:12.826 [2024-07-14 10:44:57.609025] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.826 [2024-07-14 10:44:57.609078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.826 [2024-07-14 10:44:57.609095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.826 [2024-07-14 10:44:57.609102] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.826 [2024-07-14 10:44:57.609109] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.826 [2024-07-14 10:44:57.609127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.826 qpair failed and we were unable to recover it. 00:36:12.826 [2024-07-14 10:44:57.618983] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.826 [2024-07-14 10:44:57.619038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.826 [2024-07-14 10:44:57.619053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.826 [2024-07-14 10:44:57.619061] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.826 [2024-07-14 10:44:57.619067] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.826 [2024-07-14 10:44:57.619081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.826 qpair failed and we were unable to recover it. 00:36:12.826 [2024-07-14 10:44:57.629080] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.826 [2024-07-14 10:44:57.629138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.826 [2024-07-14 10:44:57.629153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.826 [2024-07-14 10:44:57.629159] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.826 [2024-07-14 10:44:57.629166] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.826 [2024-07-14 10:44:57.629179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.826 qpair failed and we were unable to recover it. 
00:36:12.826 [2024-07-14 10:44:57.639095] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.826 [2024-07-14 10:44:57.639151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.826 [2024-07-14 10:44:57.639165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.826 [2024-07-14 10:44:57.639172] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.826 [2024-07-14 10:44:57.639178] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.826 [2024-07-14 10:44:57.639192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.826 qpair failed and we were unable to recover it. 00:36:12.826 [2024-07-14 10:44:57.649136] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.826 [2024-07-14 10:44:57.649194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.826 [2024-07-14 10:44:57.649208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.826 [2024-07-14 10:44:57.649215] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.826 [2024-07-14 10:44:57.649221] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.826 [2024-07-14 10:44:57.649238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.826 qpair failed and we were unable to recover it. 00:36:12.826 [2024-07-14 10:44:57.659143] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.826 [2024-07-14 10:44:57.659197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.826 [2024-07-14 10:44:57.659214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.826 [2024-07-14 10:44:57.659222] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.826 [2024-07-14 10:44:57.659232] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.826 [2024-07-14 10:44:57.659246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.826 qpair failed and we were unable to recover it. 
00:36:12.826 [2024-07-14 10:44:57.669176] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.826 [2024-07-14 10:44:57.669236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.826 [2024-07-14 10:44:57.669251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.826 [2024-07-14 10:44:57.669258] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.826 [2024-07-14 10:44:57.669264] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.826 [2024-07-14 10:44:57.669278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.826 qpair failed and we were unable to recover it. 00:36:12.826 [2024-07-14 10:44:57.679206] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.826 [2024-07-14 10:44:57.679266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.826 [2024-07-14 10:44:57.679280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.826 [2024-07-14 10:44:57.679287] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.826 [2024-07-14 10:44:57.679293] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.826 [2024-07-14 10:44:57.679307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.826 qpair failed and we were unable to recover it. 00:36:12.826 [2024-07-14 10:44:57.689233] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.826 [2024-07-14 10:44:57.689286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.826 [2024-07-14 10:44:57.689300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.826 [2024-07-14 10:44:57.689306] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.826 [2024-07-14 10:44:57.689313] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.827 [2024-07-14 10:44:57.689327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.827 qpair failed and we were unable to recover it. 
00:36:12.827 [2024-07-14 10:44:57.699313] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.827 [2024-07-14 10:44:57.699382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.827 [2024-07-14 10:44:57.699397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.827 [2024-07-14 10:44:57.699405] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.827 [2024-07-14 10:44:57.699414] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.827 [2024-07-14 10:44:57.699429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.827 qpair failed and we were unable to recover it. 00:36:12.827 [2024-07-14 10:44:57.709294] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.827 [2024-07-14 10:44:57.709353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.827 [2024-07-14 10:44:57.709369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.827 [2024-07-14 10:44:57.709375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.827 [2024-07-14 10:44:57.709383] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.827 [2024-07-14 10:44:57.709397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.827 qpair failed and we were unable to recover it. 00:36:12.827 [2024-07-14 10:44:57.719321] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.827 [2024-07-14 10:44:57.719377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.827 [2024-07-14 10:44:57.719390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.827 [2024-07-14 10:44:57.719397] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.827 [2024-07-14 10:44:57.719404] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.827 [2024-07-14 10:44:57.719417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.827 qpair failed and we were unable to recover it. 
00:36:12.827 [2024-07-14 10:44:57.729347] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.827 [2024-07-14 10:44:57.729401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.827 [2024-07-14 10:44:57.729415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.827 [2024-07-14 10:44:57.729422] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.827 [2024-07-14 10:44:57.729428] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.827 [2024-07-14 10:44:57.729441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.827 qpair failed and we were unable to recover it. 00:36:12.827 [2024-07-14 10:44:57.739402] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.827 [2024-07-14 10:44:57.739486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.827 [2024-07-14 10:44:57.739501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.827 [2024-07-14 10:44:57.739508] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.827 [2024-07-14 10:44:57.739515] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.827 [2024-07-14 10:44:57.739529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.827 qpair failed and we were unable to recover it. 00:36:12.827 [2024-07-14 10:44:57.749416] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.827 [2024-07-14 10:44:57.749480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.827 [2024-07-14 10:44:57.749495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.827 [2024-07-14 10:44:57.749502] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.827 [2024-07-14 10:44:57.749508] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.827 [2024-07-14 10:44:57.749523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.827 qpair failed and we were unable to recover it. 
00:36:12.827 [2024-07-14 10:44:57.759471] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.827 [2024-07-14 10:44:57.759530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.827 [2024-07-14 10:44:57.759545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.827 [2024-07-14 10:44:57.759552] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.827 [2024-07-14 10:44:57.759558] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.827 [2024-07-14 10:44:57.759572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.827 qpair failed and we were unable to recover it. 00:36:12.827 [2024-07-14 10:44:57.769488] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.827 [2024-07-14 10:44:57.769543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.827 [2024-07-14 10:44:57.769557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.827 [2024-07-14 10:44:57.769565] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.827 [2024-07-14 10:44:57.769571] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.827 [2024-07-14 10:44:57.769585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.827 qpair failed and we were unable to recover it. 00:36:12.827 [2024-07-14 10:44:57.779512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.827 [2024-07-14 10:44:57.779565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.827 [2024-07-14 10:44:57.779579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.827 [2024-07-14 10:44:57.779586] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.827 [2024-07-14 10:44:57.779593] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.827 [2024-07-14 10:44:57.779607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.827 qpair failed and we were unable to recover it. 
00:36:12.827 [2024-07-14 10:44:57.789528] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.827 [2024-07-14 10:44:57.789586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.827 [2024-07-14 10:44:57.789600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.827 [2024-07-14 10:44:57.789611] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.827 [2024-07-14 10:44:57.789617] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.827 [2024-07-14 10:44:57.789632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.827 qpair failed and we were unable to recover it. 00:36:12.827 [2024-07-14 10:44:57.799555] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.827 [2024-07-14 10:44:57.799631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.827 [2024-07-14 10:44:57.799645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.827 [2024-07-14 10:44:57.799652] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.827 [2024-07-14 10:44:57.799658] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:12.827 [2024-07-14 10:44:57.799672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.827 qpair failed and we were unable to recover it. 00:36:13.088 [2024-07-14 10:44:57.809583] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.088 [2024-07-14 10:44:57.809634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.088 [2024-07-14 10:44:57.809648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.088 [2024-07-14 10:44:57.809656] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.088 [2024-07-14 10:44:57.809662] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.088 [2024-07-14 10:44:57.809676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.088 qpair failed and we were unable to recover it. 
00:36:13.088 [2024-07-14 10:44:57.819626] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.088 [2024-07-14 10:44:57.819685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.088 [2024-07-14 10:44:57.819700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.088 [2024-07-14 10:44:57.819707] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.088 [2024-07-14 10:44:57.819713] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.088 [2024-07-14 10:44:57.819727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.088 qpair failed and we were unable to recover it. 00:36:13.088 [2024-07-14 10:44:57.829640] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.088 [2024-07-14 10:44:57.829698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.088 [2024-07-14 10:44:57.829713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.088 [2024-07-14 10:44:57.829719] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.088 [2024-07-14 10:44:57.829726] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.088 [2024-07-14 10:44:57.829741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.088 qpair failed and we were unable to recover it. 00:36:13.088 [2024-07-14 10:44:57.839650] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.088 [2024-07-14 10:44:57.839703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.088 [2024-07-14 10:44:57.839717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.088 [2024-07-14 10:44:57.839725] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.088 [2024-07-14 10:44:57.839731] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.088 [2024-07-14 10:44:57.839745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.088 qpair failed and we were unable to recover it. 
00:36:13.088 [2024-07-14 10:44:57.849747] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.088 [2024-07-14 10:44:57.849805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.088 [2024-07-14 10:44:57.849819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.088 [2024-07-14 10:44:57.849826] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.088 [2024-07-14 10:44:57.849832] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.088 [2024-07-14 10:44:57.849847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.088 qpair failed and we were unable to recover it. 00:36:13.088 [2024-07-14 10:44:57.859776] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.088 [2024-07-14 10:44:57.859856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.088 [2024-07-14 10:44:57.859870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.088 [2024-07-14 10:44:57.859877] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.088 [2024-07-14 10:44:57.859884] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.088 [2024-07-14 10:44:57.859898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.088 qpair failed and we were unable to recover it. 00:36:13.088 [2024-07-14 10:44:57.869796] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.088 [2024-07-14 10:44:57.869857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.088 [2024-07-14 10:44:57.869871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.088 [2024-07-14 10:44:57.869878] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.088 [2024-07-14 10:44:57.869885] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.088 [2024-07-14 10:44:57.869898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.088 qpair failed and we were unable to recover it. 
00:36:13.088 [2024-07-14 10:44:57.879797] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.088 [2024-07-14 10:44:57.879879] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.088 [2024-07-14 10:44:57.879893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.088 [2024-07-14 10:44:57.879902] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.088 [2024-07-14 10:44:57.879908] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.088 [2024-07-14 10:44:57.879922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.088 qpair failed and we were unable to recover it. 00:36:13.088 [2024-07-14 10:44:57.889820] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.088 [2024-07-14 10:44:57.889872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.088 [2024-07-14 10:44:57.889886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.088 [2024-07-14 10:44:57.889892] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.088 [2024-07-14 10:44:57.889898] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.088 [2024-07-14 10:44:57.889912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.088 qpair failed and we were unable to recover it. 00:36:13.088 [2024-07-14 10:44:57.899861] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.088 [2024-07-14 10:44:57.899916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.088 [2024-07-14 10:44:57.899930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.088 [2024-07-14 10:44:57.899936] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.088 [2024-07-14 10:44:57.899943] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.088 [2024-07-14 10:44:57.899957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.088 qpair failed and we were unable to recover it. 
00:36:13.088 [2024-07-14 10:44:57.909823] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.088 [2024-07-14 10:44:57.909881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.088 [2024-07-14 10:44:57.909895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.088 [2024-07-14 10:44:57.909902] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.088 [2024-07-14 10:44:57.909908] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.089 [2024-07-14 10:44:57.909922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.089 qpair failed and we were unable to recover it. 00:36:13.089 [2024-07-14 10:44:57.919944] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.089 [2024-07-14 10:44:57.920002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.089 [2024-07-14 10:44:57.920016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.089 [2024-07-14 10:44:57.920023] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.089 [2024-07-14 10:44:57.920030] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.089 [2024-07-14 10:44:57.920044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.089 qpair failed and we were unable to recover it. 00:36:13.089 [2024-07-14 10:44:57.929864] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.089 [2024-07-14 10:44:57.929920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.089 [2024-07-14 10:44:57.929934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.089 [2024-07-14 10:44:57.929941] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.089 [2024-07-14 10:44:57.929947] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.089 [2024-07-14 10:44:57.929960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.089 qpair failed and we were unable to recover it. 
00:36:13.089 [2024-07-14 10:44:57.939944] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.089 [2024-07-14 10:44:57.940001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.089 [2024-07-14 10:44:57.940018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.089 [2024-07-14 10:44:57.940024] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.089 [2024-07-14 10:44:57.940031] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.089 [2024-07-14 10:44:57.940046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.089 qpair failed and we were unable to recover it. 00:36:13.089 [2024-07-14 10:44:57.949984] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.089 [2024-07-14 10:44:57.950042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.089 [2024-07-14 10:44:57.950056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.089 [2024-07-14 10:44:57.950063] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.089 [2024-07-14 10:44:57.950070] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.089 [2024-07-14 10:44:57.950085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.089 qpair failed and we were unable to recover it. 00:36:13.089 [2024-07-14 10:44:57.960024] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.089 [2024-07-14 10:44:57.960074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.089 [2024-07-14 10:44:57.960088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.089 [2024-07-14 10:44:57.960095] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.089 [2024-07-14 10:44:57.960101] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.089 [2024-07-14 10:44:57.960116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.089 qpair failed and we were unable to recover it. 
00:36:13.089 [2024-07-14 10:44:57.970024] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.089 [2024-07-14 10:44:57.970074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.089 [2024-07-14 10:44:57.970092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.089 [2024-07-14 10:44:57.970099] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.089 [2024-07-14 10:44:57.970105] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.089 [2024-07-14 10:44:57.970119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.089 qpair failed and we were unable to recover it. 00:36:13.089 [2024-07-14 10:44:57.980082] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.089 [2024-07-14 10:44:57.980137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.089 [2024-07-14 10:44:57.980152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.089 [2024-07-14 10:44:57.980158] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.089 [2024-07-14 10:44:57.980165] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.089 [2024-07-14 10:44:57.980179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.089 qpair failed and we were unable to recover it. 00:36:13.089 [2024-07-14 10:44:57.990108] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.089 [2024-07-14 10:44:57.990168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.089 [2024-07-14 10:44:57.990185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.089 [2024-07-14 10:44:57.990193] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.089 [2024-07-14 10:44:57.990199] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.089 [2024-07-14 10:44:57.990214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.089 qpair failed and we were unable to recover it. 
00:36:13.089 [2024-07-14 10:44:58.000170] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.089 [2024-07-14 10:44:58.000234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.089 [2024-07-14 10:44:58.000249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.089 [2024-07-14 10:44:58.000256] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.089 [2024-07-14 10:44:58.000262] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.089 [2024-07-14 10:44:58.000276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.089 qpair failed and we were unable to recover it. 00:36:13.089 [2024-07-14 10:44:58.010194] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.089 [2024-07-14 10:44:58.010301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.089 [2024-07-14 10:44:58.010318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.089 [2024-07-14 10:44:58.010325] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.089 [2024-07-14 10:44:58.010332] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.089 [2024-07-14 10:44:58.010350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.089 qpair failed and we were unable to recover it. 00:36:13.089 [2024-07-14 10:44:58.020131] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.089 [2024-07-14 10:44:58.020189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.089 [2024-07-14 10:44:58.020204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.089 [2024-07-14 10:44:58.020213] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.089 [2024-07-14 10:44:58.020220] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.089 [2024-07-14 10:44:58.020239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.089 qpair failed and we were unable to recover it. 
00:36:13.089 [2024-07-14 10:44:58.030146] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.089 [2024-07-14 10:44:58.030214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.089 [2024-07-14 10:44:58.030235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.089 [2024-07-14 10:44:58.030244] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.089 [2024-07-14 10:44:58.030251] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.089 [2024-07-14 10:44:58.030266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.089 qpair failed and we were unable to recover it. 00:36:13.089 [2024-07-14 10:44:58.040271] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.089 [2024-07-14 10:44:58.040342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.089 [2024-07-14 10:44:58.040358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.089 [2024-07-14 10:44:58.040365] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.089 [2024-07-14 10:44:58.040372] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.089 [2024-07-14 10:44:58.040386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.089 qpair failed and we were unable to recover it. 00:36:13.089 [2024-07-14 10:44:58.050261] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.089 [2024-07-14 10:44:58.050326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.089 [2024-07-14 10:44:58.050340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.089 [2024-07-14 10:44:58.050346] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.089 [2024-07-14 10:44:58.050352] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.089 [2024-07-14 10:44:58.050367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.089 qpair failed and we were unable to recover it. 
00:36:13.089 [2024-07-14 10:44:58.060321] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.090 [2024-07-14 10:44:58.060377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.090 [2024-07-14 10:44:58.060394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.090 [2024-07-14 10:44:58.060400] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.090 [2024-07-14 10:44:58.060406] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.090 [2024-07-14 10:44:58.060420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.090 qpair failed and we were unable to recover it. 00:36:13.350 [2024-07-14 10:44:58.070285] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.350 [2024-07-14 10:44:58.070343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.350 [2024-07-14 10:44:58.070357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.350 [2024-07-14 10:44:58.070364] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.350 [2024-07-14 10:44:58.070370] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.350 [2024-07-14 10:44:58.070384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.350 qpair failed and we were unable to recover it. 00:36:13.350 [2024-07-14 10:44:58.080368] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.350 [2024-07-14 10:44:58.080419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.350 [2024-07-14 10:44:58.080433] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.350 [2024-07-14 10:44:58.080440] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.350 [2024-07-14 10:44:58.080446] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.350 [2024-07-14 10:44:58.080459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.350 qpair failed and we were unable to recover it. 
00:36:13.350 [2024-07-14 10:44:58.090404] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.350 [2024-07-14 10:44:58.090460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.350 [2024-07-14 10:44:58.090475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.350 [2024-07-14 10:44:58.090481] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.350 [2024-07-14 10:44:58.090487] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.350 [2024-07-14 10:44:58.090503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.350 qpair failed and we were unable to recover it. 00:36:13.350 [2024-07-14 10:44:58.100455] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.350 [2024-07-14 10:44:58.100559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.350 [2024-07-14 10:44:58.100573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.350 [2024-07-14 10:44:58.100581] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.350 [2024-07-14 10:44:58.100592] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.350 [2024-07-14 10:44:58.100607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.350 qpair failed and we were unable to recover it. 00:36:13.350 [2024-07-14 10:44:58.110453] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.350 [2024-07-14 10:44:58.110510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.350 [2024-07-14 10:44:58.110525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.350 [2024-07-14 10:44:58.110533] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.350 [2024-07-14 10:44:58.110539] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.350 [2024-07-14 10:44:58.110553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.350 qpair failed and we were unable to recover it. 
00:36:13.350 [2024-07-14 10:44:58.120470] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.350 [2024-07-14 10:44:58.120581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.350 [2024-07-14 10:44:58.120596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.350 [2024-07-14 10:44:58.120603] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.350 [2024-07-14 10:44:58.120609] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.350 [2024-07-14 10:44:58.120625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.350 qpair failed and we were unable to recover it. 00:36:13.350 [2024-07-14 10:44:58.130508] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.350 [2024-07-14 10:44:58.130559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.350 [2024-07-14 10:44:58.130573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.350 [2024-07-14 10:44:58.130580] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.350 [2024-07-14 10:44:58.130587] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.350 [2024-07-14 10:44:58.130600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.350 qpair failed and we were unable to recover it. 00:36:13.350 [2024-07-14 10:44:58.140540] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.350 [2024-07-14 10:44:58.140629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.350 [2024-07-14 10:44:58.140643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.351 [2024-07-14 10:44:58.140650] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.351 [2024-07-14 10:44:58.140656] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.351 [2024-07-14 10:44:58.140669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.351 qpair failed and we were unable to recover it. 
00:36:13.351 [2024-07-14 10:44:58.150603] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.351 [2024-07-14 10:44:58.150667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.351 [2024-07-14 10:44:58.150681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.351 [2024-07-14 10:44:58.150688] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.351 [2024-07-14 10:44:58.150694] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.351 [2024-07-14 10:44:58.150708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.351 qpair failed and we were unable to recover it. 00:36:13.351 [2024-07-14 10:44:58.160598] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.351 [2024-07-14 10:44:58.160651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.351 [2024-07-14 10:44:58.160665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.351 [2024-07-14 10:44:58.160671] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.351 [2024-07-14 10:44:58.160678] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.351 [2024-07-14 10:44:58.160692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.351 qpair failed and we were unable to recover it. 00:36:13.351 [2024-07-14 10:44:58.170626] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.351 [2024-07-14 10:44:58.170710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.351 [2024-07-14 10:44:58.170725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.351 [2024-07-14 10:44:58.170732] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.351 [2024-07-14 10:44:58.170737] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.351 [2024-07-14 10:44:58.170752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.351 qpair failed and we were unable to recover it. 
00:36:13.351 [2024-07-14 10:44:58.180646] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.351 [2024-07-14 10:44:58.180715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.351 [2024-07-14 10:44:58.180728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.351 [2024-07-14 10:44:58.180736] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.351 [2024-07-14 10:44:58.180742] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.351 [2024-07-14 10:44:58.180757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.351 qpair failed and we were unable to recover it. 00:36:13.351 [2024-07-14 10:44:58.190698] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.351 [2024-07-14 10:44:58.190785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.351 [2024-07-14 10:44:58.190799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.351 [2024-07-14 10:44:58.190809] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.351 [2024-07-14 10:44:58.190815] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.351 [2024-07-14 10:44:58.190830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.351 qpair failed and we were unable to recover it. 00:36:13.351 [2024-07-14 10:44:58.200716] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.351 [2024-07-14 10:44:58.200773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.351 [2024-07-14 10:44:58.200788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.351 [2024-07-14 10:44:58.200795] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.351 [2024-07-14 10:44:58.200801] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.351 [2024-07-14 10:44:58.200816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.351 qpair failed and we were unable to recover it. 
00:36:13.351 [2024-07-14 10:44:58.210748] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.351 [2024-07-14 10:44:58.210802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.351 [2024-07-14 10:44:58.210816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.351 [2024-07-14 10:44:58.210823] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.351 [2024-07-14 10:44:58.210830] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.351 [2024-07-14 10:44:58.210844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.351 qpair failed and we were unable to recover it. 00:36:13.351 [2024-07-14 10:44:58.220730] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.351 [2024-07-14 10:44:58.220819] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.351 [2024-07-14 10:44:58.220833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.351 [2024-07-14 10:44:58.220840] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.351 [2024-07-14 10:44:58.220846] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.351 [2024-07-14 10:44:58.220861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.351 qpair failed and we were unable to recover it. 00:36:13.351 [2024-07-14 10:44:58.230824] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.351 [2024-07-14 10:44:58.230882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.351 [2024-07-14 10:44:58.230896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.351 [2024-07-14 10:44:58.230903] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.351 [2024-07-14 10:44:58.230910] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.351 [2024-07-14 10:44:58.230925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.351 qpair failed and we were unable to recover it. 
00:36:13.351 [2024-07-14 10:44:58.240835] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.351 [2024-07-14 10:44:58.240888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.351 [2024-07-14 10:44:58.240902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.351 [2024-07-14 10:44:58.240909] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.351 [2024-07-14 10:44:58.240915] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.351 [2024-07-14 10:44:58.240930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.351 qpair failed and we were unable to recover it. 00:36:13.351 [2024-07-14 10:44:58.250865] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.351 [2024-07-14 10:44:58.250917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.351 [2024-07-14 10:44:58.250932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.351 [2024-07-14 10:44:58.250939] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.351 [2024-07-14 10:44:58.250945] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.351 [2024-07-14 10:44:58.250959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.351 qpair failed and we were unable to recover it. 00:36:13.351 [2024-07-14 10:44:58.260839] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.351 [2024-07-14 10:44:58.260896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.351 [2024-07-14 10:44:58.260910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.351 [2024-07-14 10:44:58.260917] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.351 [2024-07-14 10:44:58.260924] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.351 [2024-07-14 10:44:58.260937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.351 qpair failed and we were unable to recover it. 
00:36:13.351 [2024-07-14 10:44:58.270920] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.351 [2024-07-14 10:44:58.270978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.351 [2024-07-14 10:44:58.270992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.351 [2024-07-14 10:44:58.270999] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.351 [2024-07-14 10:44:58.271005] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.351 [2024-07-14 10:44:58.271019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.351 qpair failed and we were unable to recover it. 00:36:13.351 [2024-07-14 10:44:58.281001] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.351 [2024-07-14 10:44:58.281055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.351 [2024-07-14 10:44:58.281069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.351 [2024-07-14 10:44:58.281079] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.351 [2024-07-14 10:44:58.281085] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.352 [2024-07-14 10:44:58.281099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.352 qpair failed and we were unable to recover it. 00:36:13.352 [2024-07-14 10:44:58.290911] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.352 [2024-07-14 10:44:58.290965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.352 [2024-07-14 10:44:58.290980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.352 [2024-07-14 10:44:58.290987] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.352 [2024-07-14 10:44:58.290994] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.352 [2024-07-14 10:44:58.291008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.352 qpair failed and we were unable to recover it. 
00:36:13.352 [2024-07-14 10:44:58.300999] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.352 [2024-07-14 10:44:58.301052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.352 [2024-07-14 10:44:58.301066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.352 [2024-07-14 10:44:58.301072] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.352 [2024-07-14 10:44:58.301079] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.352 [2024-07-14 10:44:58.301092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.352 qpair failed and we were unable to recover it. 00:36:13.352 [2024-07-14 10:44:58.311016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.352 [2024-07-14 10:44:58.311085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.352 [2024-07-14 10:44:58.311100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.352 [2024-07-14 10:44:58.311106] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.352 [2024-07-14 10:44:58.311112] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.352 [2024-07-14 10:44:58.311127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.352 qpair failed and we were unable to recover it. 00:36:13.352 [2024-07-14 10:44:58.321095] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.352 [2024-07-14 10:44:58.321162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.352 [2024-07-14 10:44:58.321176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.352 [2024-07-14 10:44:58.321183] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.352 [2024-07-14 10:44:58.321189] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.352 [2024-07-14 10:44:58.321204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.352 qpair failed and we were unable to recover it. 
00:36:13.612 [2024-07-14 10:44:58.331034] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.612 [2024-07-14 10:44:58.331089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.612 [2024-07-14 10:44:58.331104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.612 [2024-07-14 10:44:58.331111] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.612 [2024-07-14 10:44:58.331117] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.612 [2024-07-14 10:44:58.331131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.612 qpair failed and we were unable to recover it. 00:36:13.612 [2024-07-14 10:44:58.341120] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.612 [2024-07-14 10:44:58.341196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.612 [2024-07-14 10:44:58.341210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.612 [2024-07-14 10:44:58.341217] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.612 [2024-07-14 10:44:58.341223] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.612 [2024-07-14 10:44:58.341242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.612 qpair failed and we were unable to recover it. 00:36:13.612 [2024-07-14 10:44:58.351092] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.612 [2024-07-14 10:44:58.351151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.612 [2024-07-14 10:44:58.351166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.612 [2024-07-14 10:44:58.351173] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.612 [2024-07-14 10:44:58.351179] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.613 [2024-07-14 10:44:58.351192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.613 qpair failed and we were unable to recover it. 
00:36:13.613 [2024-07-14 10:44:58.361194] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.613 [2024-07-14 10:44:58.361261] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.613 [2024-07-14 10:44:58.361275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.613 [2024-07-14 10:44:58.361281] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.613 [2024-07-14 10:44:58.361287] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.613 [2024-07-14 10:44:58.361301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.613 qpair failed and we were unable to recover it. 00:36:13.613 [2024-07-14 10:44:58.371211] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.613 [2024-07-14 10:44:58.371282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.613 [2024-07-14 10:44:58.371300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.613 [2024-07-14 10:44:58.371307] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.613 [2024-07-14 10:44:58.371313] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.613 [2024-07-14 10:44:58.371327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.613 qpair failed and we were unable to recover it. 00:36:13.613 [2024-07-14 10:44:58.381259] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.613 [2024-07-14 10:44:58.381317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.613 [2024-07-14 10:44:58.381332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.613 [2024-07-14 10:44:58.381339] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.613 [2024-07-14 10:44:58.381346] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.613 [2024-07-14 10:44:58.381360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.613 qpair failed and we were unable to recover it. 
00:36:13.613 [2024-07-14 10:44:58.391227] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.613 [2024-07-14 10:44:58.391281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.613 [2024-07-14 10:44:58.391295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.613 [2024-07-14 10:44:58.391302] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.613 [2024-07-14 10:44:58.391308] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.613 [2024-07-14 10:44:58.391322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.613 qpair failed and we were unable to recover it. 00:36:13.613 [2024-07-14 10:44:58.401251] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.613 [2024-07-14 10:44:58.401307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.613 [2024-07-14 10:44:58.401321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.613 [2024-07-14 10:44:58.401328] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.613 [2024-07-14 10:44:58.401334] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.613 [2024-07-14 10:44:58.401348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.613 qpair failed and we were unable to recover it. 00:36:13.613 [2024-07-14 10:44:58.411399] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.613 [2024-07-14 10:44:58.411456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.613 [2024-07-14 10:44:58.411470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.613 [2024-07-14 10:44:58.411477] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.613 [2024-07-14 10:44:58.411483] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.613 [2024-07-14 10:44:58.411500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.613 qpair failed and we were unable to recover it. 
00:36:13.613 [2024-07-14 10:44:58.421300] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.613 [2024-07-14 10:44:58.421358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.613 [2024-07-14 10:44:58.421372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.613 [2024-07-14 10:44:58.421379] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.613 [2024-07-14 10:44:58.421386] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.613 [2024-07-14 10:44:58.421400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.613 qpair failed and we were unable to recover it. 00:36:13.613 [2024-07-14 10:44:58.431375] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.613 [2024-07-14 10:44:58.431439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.613 [2024-07-14 10:44:58.431453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.613 [2024-07-14 10:44:58.431460] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.613 [2024-07-14 10:44:58.431466] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.613 [2024-07-14 10:44:58.431480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.613 qpair failed and we were unable to recover it. 00:36:13.613 [2024-07-14 10:44:58.441434] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.613 [2024-07-14 10:44:58.441489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.613 [2024-07-14 10:44:58.441503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.613 [2024-07-14 10:44:58.441510] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.613 [2024-07-14 10:44:58.441516] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.613 [2024-07-14 10:44:58.441531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.613 qpair failed and we were unable to recover it. 
00:36:13.613 [2024-07-14 10:44:58.451465] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.613 [2024-07-14 10:44:58.451517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.613 [2024-07-14 10:44:58.451531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.613 [2024-07-14 10:44:58.451538] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.613 [2024-07-14 10:44:58.451544] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.613 [2024-07-14 10:44:58.451558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.613 qpair failed and we were unable to recover it. 00:36:13.613 [2024-07-14 10:44:58.461483] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.613 [2024-07-14 10:44:58.461551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.613 [2024-07-14 10:44:58.461568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.613 [2024-07-14 10:44:58.461575] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.613 [2024-07-14 10:44:58.461581] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.613 [2024-07-14 10:44:58.461595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.613 qpair failed and we were unable to recover it. 00:36:13.613 [2024-07-14 10:44:58.471511] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.613 [2024-07-14 10:44:58.471567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.613 [2024-07-14 10:44:58.471581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.613 [2024-07-14 10:44:58.471589] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.613 [2024-07-14 10:44:58.471595] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.613 [2024-07-14 10:44:58.471609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.613 qpair failed and we were unable to recover it. 
00:36:13.613 [2024-07-14 10:44:58.481542] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.613 [2024-07-14 10:44:58.481597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.613 [2024-07-14 10:44:58.481611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.613 [2024-07-14 10:44:58.481618] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.613 [2024-07-14 10:44:58.481624] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.613 [2024-07-14 10:44:58.481638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.613 qpair failed and we were unable to recover it. 00:36:13.613 [2024-07-14 10:44:58.491605] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.613 [2024-07-14 10:44:58.491660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.613 [2024-07-14 10:44:58.491674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.613 [2024-07-14 10:44:58.491681] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.613 [2024-07-14 10:44:58.491687] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.613 [2024-07-14 10:44:58.491701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.613 qpair failed and we were unable to recover it. 00:36:13.613 [2024-07-14 10:44:58.501610] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.614 [2024-07-14 10:44:58.501666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.614 [2024-07-14 10:44:58.501682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.614 [2024-07-14 10:44:58.501689] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.614 [2024-07-14 10:44:58.501699] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.614 [2024-07-14 10:44:58.501714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.614 qpair failed and we were unable to recover it. 
00:36:13.614 [2024-07-14 10:44:58.511657] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.614 [2024-07-14 10:44:58.511712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.614 [2024-07-14 10:44:58.511727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.614 [2024-07-14 10:44:58.511735] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.614 [2024-07-14 10:44:58.511741] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.614 [2024-07-14 10:44:58.511755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.614 qpair failed and we were unable to recover it. 00:36:13.614 [2024-07-14 10:44:58.521646] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.614 [2024-07-14 10:44:58.521701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.614 [2024-07-14 10:44:58.521715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.614 [2024-07-14 10:44:58.521723] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.614 [2024-07-14 10:44:58.521729] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.614 [2024-07-14 10:44:58.521743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.614 qpair failed and we were unable to recover it. 00:36:13.614 [2024-07-14 10:44:58.531615] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.614 [2024-07-14 10:44:58.531669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.614 [2024-07-14 10:44:58.531683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.614 [2024-07-14 10:44:58.531690] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.614 [2024-07-14 10:44:58.531696] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.614 [2024-07-14 10:44:58.531711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.614 qpair failed and we were unable to recover it. 
00:36:13.614 [2024-07-14 10:44:58.541714] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.614 [2024-07-14 10:44:58.541814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.614 [2024-07-14 10:44:58.541828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.614 [2024-07-14 10:44:58.541837] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.614 [2024-07-14 10:44:58.541843] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.614 [2024-07-14 10:44:58.541857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.614 qpair failed and we were unable to recover it. 00:36:13.614 [2024-07-14 10:44:58.551755] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.614 [2024-07-14 10:44:58.551820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.614 [2024-07-14 10:44:58.551834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.614 [2024-07-14 10:44:58.551842] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.614 [2024-07-14 10:44:58.551848] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.614 [2024-07-14 10:44:58.551862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.614 qpair failed and we were unable to recover it. 00:36:13.614 [2024-07-14 10:44:58.561815] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.614 [2024-07-14 10:44:58.561875] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.614 [2024-07-14 10:44:58.561888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.614 [2024-07-14 10:44:58.561895] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.614 [2024-07-14 10:44:58.561901] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.614 [2024-07-14 10:44:58.561916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.614 qpair failed and we were unable to recover it. 
00:36:13.614 [2024-07-14 10:44:58.571836] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.614 [2024-07-14 10:44:58.571904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.614 [2024-07-14 10:44:58.571918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.614 [2024-07-14 10:44:58.571925] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.614 [2024-07-14 10:44:58.571931] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.614 [2024-07-14 10:44:58.571945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.614 qpair failed and we were unable to recover it. 00:36:13.614 [2024-07-14 10:44:58.581857] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.614 [2024-07-14 10:44:58.581923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.614 [2024-07-14 10:44:58.581938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.614 [2024-07-14 10:44:58.581945] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.614 [2024-07-14 10:44:58.581951] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.614 [2024-07-14 10:44:58.581965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.614 qpair failed and we were unable to recover it. 00:36:13.875 [2024-07-14 10:44:58.591909] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.875 [2024-07-14 10:44:58.591967] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.875 [2024-07-14 10:44:58.591982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.875 [2024-07-14 10:44:58.591989] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.875 [2024-07-14 10:44:58.591998] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.875 [2024-07-14 10:44:58.592013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.875 qpair failed and we were unable to recover it. 
00:36:13.875 [2024-07-14 10:44:58.601888] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.875 [2024-07-14 10:44:58.601946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.875 [2024-07-14 10:44:58.601960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.875 [2024-07-14 10:44:58.601967] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.875 [2024-07-14 10:44:58.601973] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.875 [2024-07-14 10:44:58.601987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.875 qpair failed and we were unable to recover it. 00:36:13.875 [2024-07-14 10:44:58.611924] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.875 [2024-07-14 10:44:58.611985] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.875 [2024-07-14 10:44:58.612008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.875 [2024-07-14 10:44:58.612015] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.875 [2024-07-14 10:44:58.612021] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.875 [2024-07-14 10:44:58.612036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.875 qpair failed and we were unable to recover it. 00:36:13.875 [2024-07-14 10:44:58.621948] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.875 [2024-07-14 10:44:58.622003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.875 [2024-07-14 10:44:58.622017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.875 [2024-07-14 10:44:58.622024] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.875 [2024-07-14 10:44:58.622030] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.875 [2024-07-14 10:44:58.622045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.875 qpair failed and we were unable to recover it. 
00:36:13.875 [2024-07-14 10:44:58.631964] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.875 [2024-07-14 10:44:58.632024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.875 [2024-07-14 10:44:58.632038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.875 [2024-07-14 10:44:58.632045] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.875 [2024-07-14 10:44:58.632051] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.875 [2024-07-14 10:44:58.632065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.875 qpair failed and we were unable to recover it. 00:36:13.875 [2024-07-14 10:44:58.642005] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.875 [2024-07-14 10:44:58.642056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.875 [2024-07-14 10:44:58.642070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.876 [2024-07-14 10:44:58.642077] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.876 [2024-07-14 10:44:58.642083] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.876 [2024-07-14 10:44:58.642098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.876 qpair failed and we were unable to recover it. 00:36:13.876 [2024-07-14 10:44:58.652025] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.876 [2024-07-14 10:44:58.652092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.876 [2024-07-14 10:44:58.652106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.876 [2024-07-14 10:44:58.652113] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.876 [2024-07-14 10:44:58.652119] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.876 [2024-07-14 10:44:58.652133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.876 qpair failed and we were unable to recover it. 
00:36:13.876 [2024-07-14 10:44:58.662069] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.876 [2024-07-14 10:44:58.662132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.876 [2024-07-14 10:44:58.662146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.876 [2024-07-14 10:44:58.662154] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.876 [2024-07-14 10:44:58.662160] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.876 [2024-07-14 10:44:58.662174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.876 qpair failed and we were unable to recover it. 00:36:13.876 [2024-07-14 10:44:58.672073] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.876 [2024-07-14 10:44:58.672175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.876 [2024-07-14 10:44:58.672190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.876 [2024-07-14 10:44:58.672197] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.876 [2024-07-14 10:44:58.672204] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.876 [2024-07-14 10:44:58.672219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.876 qpair failed and we were unable to recover it. 00:36:13.876 [2024-07-14 10:44:58.682113] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.876 [2024-07-14 10:44:58.682167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.876 [2024-07-14 10:44:58.682181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.876 [2024-07-14 10:44:58.682191] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.876 [2024-07-14 10:44:58.682197] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.876 [2024-07-14 10:44:58.682211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.876 qpair failed and we were unable to recover it. 
00:36:13.876 [2024-07-14 10:44:58.692174] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.876 [2024-07-14 10:44:58.692228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.876 [2024-07-14 10:44:58.692243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.876 [2024-07-14 10:44:58.692250] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.876 [2024-07-14 10:44:58.692256] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.876 [2024-07-14 10:44:58.692270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.876 qpair failed and we were unable to recover it. 00:36:13.876 [2024-07-14 10:44:58.702164] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.876 [2024-07-14 10:44:58.702220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.876 [2024-07-14 10:44:58.702239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.876 [2024-07-14 10:44:58.702246] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.876 [2024-07-14 10:44:58.702252] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.876 [2024-07-14 10:44:58.702267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.876 qpair failed and we were unable to recover it. 00:36:13.876 [2024-07-14 10:44:58.712187] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.876 [2024-07-14 10:44:58.712249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.876 [2024-07-14 10:44:58.712264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.876 [2024-07-14 10:44:58.712271] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.876 [2024-07-14 10:44:58.712277] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.876 [2024-07-14 10:44:58.712291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.876 qpair failed and we were unable to recover it. 
00:36:13.876 [2024-07-14 10:44:58.722227] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.876 [2024-07-14 10:44:58.722281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.876 [2024-07-14 10:44:58.722295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.876 [2024-07-14 10:44:58.722302] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.876 [2024-07-14 10:44:58.722308] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.876 [2024-07-14 10:44:58.722322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.876 qpair failed and we were unable to recover it. 00:36:13.876 [2024-07-14 10:44:58.732251] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.876 [2024-07-14 10:44:58.732309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.876 [2024-07-14 10:44:58.732323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.876 [2024-07-14 10:44:58.732331] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.876 [2024-07-14 10:44:58.732337] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.876 [2024-07-14 10:44:58.732352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.876 qpair failed and we were unable to recover it. 00:36:13.876 [2024-07-14 10:44:58.742300] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.876 [2024-07-14 10:44:58.742374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.876 [2024-07-14 10:44:58.742389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.876 [2024-07-14 10:44:58.742396] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.876 [2024-07-14 10:44:58.742402] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.876 [2024-07-14 10:44:58.742416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.876 qpair failed and we were unable to recover it. 
00:36:13.876 [2024-07-14 10:44:58.752252] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.876 [2024-07-14 10:44:58.752307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.876 [2024-07-14 10:44:58.752321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.876 [2024-07-14 10:44:58.752328] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.876 [2024-07-14 10:44:58.752334] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.876 [2024-07-14 10:44:58.752348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.876 qpair failed and we were unable to recover it. 00:36:13.876 [2024-07-14 10:44:58.762375] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.876 [2024-07-14 10:44:58.762434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.876 [2024-07-14 10:44:58.762448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.876 [2024-07-14 10:44:58.762455] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.876 [2024-07-14 10:44:58.762461] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.876 [2024-07-14 10:44:58.762475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.876 qpair failed and we were unable to recover it. 00:36:13.876 [2024-07-14 10:44:58.772411] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.876 [2024-07-14 10:44:58.772469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.876 [2024-07-14 10:44:58.772486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.876 [2024-07-14 10:44:58.772493] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.876 [2024-07-14 10:44:58.772499] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.876 [2024-07-14 10:44:58.772513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.876 qpair failed and we were unable to recover it. 
00:36:13.876 [2024-07-14 10:44:58.782408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.876 [2024-07-14 10:44:58.782462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.876 [2024-07-14 10:44:58.782476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.876 [2024-07-14 10:44:58.782482] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.876 [2024-07-14 10:44:58.782489] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.877 [2024-07-14 10:44:58.782503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.877 qpair failed and we were unable to recover it. 00:36:13.877 [2024-07-14 10:44:58.792461] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.877 [2024-07-14 10:44:58.792534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.877 [2024-07-14 10:44:58.792550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.877 [2024-07-14 10:44:58.792556] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.877 [2024-07-14 10:44:58.792562] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.877 [2024-07-14 10:44:58.792576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.877 qpair failed and we were unable to recover it. 00:36:13.877 [2024-07-14 10:44:58.802510] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.877 [2024-07-14 10:44:58.802614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.877 [2024-07-14 10:44:58.802628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.877 [2024-07-14 10:44:58.802636] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.877 [2024-07-14 10:44:58.802642] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.877 [2024-07-14 10:44:58.802657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.877 qpair failed and we were unable to recover it. 
00:36:13.877 [2024-07-14 10:44:58.812487] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.877 [2024-07-14 10:44:58.812539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.877 [2024-07-14 10:44:58.812555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.877 [2024-07-14 10:44:58.812561] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.877 [2024-07-14 10:44:58.812568] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.877 [2024-07-14 10:44:58.812585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.877 qpair failed and we were unable to recover it. 00:36:13.877 [2024-07-14 10:44:58.822513] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.877 [2024-07-14 10:44:58.822571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.877 [2024-07-14 10:44:58.822585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.877 [2024-07-14 10:44:58.822592] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.877 [2024-07-14 10:44:58.822599] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.877 [2024-07-14 10:44:58.822614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.877 qpair failed and we were unable to recover it. 00:36:13.877 [2024-07-14 10:44:58.832574] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.877 [2024-07-14 10:44:58.832628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.877 [2024-07-14 10:44:58.832641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.877 [2024-07-14 10:44:58.832648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.877 [2024-07-14 10:44:58.832654] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.877 [2024-07-14 10:44:58.832668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.877 qpair failed and we were unable to recover it. 
00:36:13.877 [2024-07-14 10:44:58.842638] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.877 [2024-07-14 10:44:58.842697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.877 [2024-07-14 10:44:58.842711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.877 [2024-07-14 10:44:58.842718] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.877 [2024-07-14 10:44:58.842724] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.877 [2024-07-14 10:44:58.842738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.877 qpair failed and we were unable to recover it. 00:36:13.877 [2024-07-14 10:44:58.852602] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.877 [2024-07-14 10:44:58.852652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.877 [2024-07-14 10:44:58.852669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.877 [2024-07-14 10:44:58.852676] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.877 [2024-07-14 10:44:58.852682] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:13.877 [2024-07-14 10:44:58.852697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:13.877 qpair failed and we were unable to recover it. 00:36:14.138 [2024-07-14 10:44:58.862638] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.138 [2024-07-14 10:44:58.862706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.138 [2024-07-14 10:44:58.862723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.138 [2024-07-14 10:44:58.862730] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.138 [2024-07-14 10:44:58.862736] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:14.138 [2024-07-14 10:44:58.862750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.138 qpair failed and we were unable to recover it. 
00:36:14.138 [2024-07-14 10:44:58.872657] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.138 [2024-07-14 10:44:58.872716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.138 [2024-07-14 10:44:58.872730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.138 [2024-07-14 10:44:58.872737] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.138 [2024-07-14 10:44:58.872743] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:14.138 [2024-07-14 10:44:58.872756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.138 qpair failed and we were unable to recover it. 00:36:14.138 [2024-07-14 10:44:58.882692] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.138 [2024-07-14 10:44:58.882745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.138 [2024-07-14 10:44:58.882759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.138 [2024-07-14 10:44:58.882766] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.138 [2024-07-14 10:44:58.882772] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:14.138 [2024-07-14 10:44:58.882786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.138 qpair failed and we were unable to recover it. 00:36:14.138 [2024-07-14 10:44:58.892722] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.138 [2024-07-14 10:44:58.892775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.138 [2024-07-14 10:44:58.892789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.138 [2024-07-14 10:44:58.892797] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.138 [2024-07-14 10:44:58.892803] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:14.138 [2024-07-14 10:44:58.892817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.138 qpair failed and we were unable to recover it. 
00:36:14.138 [2024-07-14 10:44:58.902744] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.138 [2024-07-14 10:44:58.902849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.138 [2024-07-14 10:44:58.902863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.138 [2024-07-14 10:44:58.902870] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.138 [2024-07-14 10:44:58.902877] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:14.138 [2024-07-14 10:44:58.902894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.138 qpair failed and we were unable to recover it. 00:36:14.138 [2024-07-14 10:44:58.912757] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.138 [2024-07-14 10:44:58.912826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.138 [2024-07-14 10:44:58.912840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.138 [2024-07-14 10:44:58.912847] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.138 [2024-07-14 10:44:58.912854] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:14.138 [2024-07-14 10:44:58.912868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.138 qpair failed and we were unable to recover it. 00:36:14.138 [2024-07-14 10:44:58.922805] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.138 [2024-07-14 10:44:58.922857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.138 [2024-07-14 10:44:58.922871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.138 [2024-07-14 10:44:58.922877] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.138 [2024-07-14 10:44:58.922884] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:14.138 [2024-07-14 10:44:58.922899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.138 qpair failed and we were unable to recover it. 
00:36:14.138 [2024-07-14 10:44:58.932908] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.138 [2024-07-14 10:44:58.932995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.138 [2024-07-14 10:44:58.933010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.138 [2024-07-14 10:44:58.933017] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.138 [2024-07-14 10:44:58.933023] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:14.138 [2024-07-14 10:44:58.933037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.138 qpair failed and we were unable to recover it. 00:36:14.138 [2024-07-14 10:44:58.942863] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.138 [2024-07-14 10:44:58.942918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.138 [2024-07-14 10:44:58.942932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.138 [2024-07-14 10:44:58.942939] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.138 [2024-07-14 10:44:58.942945] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:14.138 [2024-07-14 10:44:58.942960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.138 qpair failed and we were unable to recover it. 00:36:14.138 [2024-07-14 10:44:58.952901] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.138 [2024-07-14 10:44:58.952961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.138 [2024-07-14 10:44:58.952975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.138 [2024-07-14 10:44:58.952982] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.138 [2024-07-14 10:44:58.952988] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:14.138 [2024-07-14 10:44:58.953002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.138 qpair failed and we were unable to recover it. 
00:36:14.138 [2024-07-14 10:44:58.962964] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.138 [2024-07-14 10:44:58.963021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.138 [2024-07-14 10:44:58.963035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.138 [2024-07-14 10:44:58.963042] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.138 [2024-07-14 10:44:58.963048] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:14.138 [2024-07-14 10:44:58.963062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.138 qpair failed and we were unable to recover it. 00:36:14.138 [2024-07-14 10:44:58.972965] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.138 [2024-07-14 10:44:58.973043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.138 [2024-07-14 10:44:58.973059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.138 [2024-07-14 10:44:58.973066] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.138 [2024-07-14 10:44:58.973072] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:14.138 [2024-07-14 10:44:58.973086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.138 qpair failed and we were unable to recover it. 00:36:14.138 [2024-07-14 10:44:58.982914] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.138 [2024-07-14 10:44:58.982984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.138 [2024-07-14 10:44:58.983000] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.138 [2024-07-14 10:44:58.983008] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.138 [2024-07-14 10:44:58.983015] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:14.138 [2024-07-14 10:44:58.983029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.138 qpair failed and we were unable to recover it. 
00:36:14.138 [2024-07-14 10:44:58.993035] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.138 [2024-07-14 10:44:58.993093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.138 [2024-07-14 10:44:58.993107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.138 [2024-07-14 10:44:58.993115] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.138 [2024-07-14 10:44:58.993125] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:14.139 [2024-07-14 10:44:58.993139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.139 qpair failed and we were unable to recover it. 00:36:14.139 [2024-07-14 10:44:59.003027] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.139 [2024-07-14 10:44:59.003082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.139 [2024-07-14 10:44:59.003096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.139 [2024-07-14 10:44:59.003103] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.139 [2024-07-14 10:44:59.003110] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:14.139 [2024-07-14 10:44:59.003124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.139 qpair failed and we were unable to recover it. 00:36:14.139 [2024-07-14 10:44:59.013074] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.139 [2024-07-14 10:44:59.013135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.139 [2024-07-14 10:44:59.013149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.139 [2024-07-14 10:44:59.013157] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.139 [2024-07-14 10:44:59.013163] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:14.139 [2024-07-14 10:44:59.013177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.139 qpair failed and we were unable to recover it. 
00:36:14.139 [2024-07-14 10:44:59.023114] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.139 [2024-07-14 10:44:59.023191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.139 [2024-07-14 10:44:59.023206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.139 [2024-07-14 10:44:59.023213] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.139 [2024-07-14 10:44:59.023219] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:14.139 [2024-07-14 10:44:59.023236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.139 qpair failed and we were unable to recover it. 00:36:14.139 [2024-07-14 10:44:59.033133] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.139 [2024-07-14 10:44:59.033200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.139 [2024-07-14 10:44:59.033214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.139 [2024-07-14 10:44:59.033220] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.139 [2024-07-14 10:44:59.033231] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:14.139 [2024-07-14 10:44:59.033245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.139 qpair failed and we were unable to recover it. 00:36:14.139 [2024-07-14 10:44:59.043188] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.139 [2024-07-14 10:44:59.043262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.139 [2024-07-14 10:44:59.043278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.139 [2024-07-14 10:44:59.043286] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.139 [2024-07-14 10:44:59.043293] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:14.139 [2024-07-14 10:44:59.043308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.139 qpair failed and we were unable to recover it. 
00:36:14.139 [2024-07-14 10:44:59.053193] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.139 [2024-07-14 10:44:59.053252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.139 [2024-07-14 10:44:59.053266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.139 [2024-07-14 10:44:59.053273] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.139 [2024-07-14 10:44:59.053279] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:14.139 [2024-07-14 10:44:59.053293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.139 qpair failed and we were unable to recover it. 00:36:14.139 [2024-07-14 10:44:59.063254] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.139 [2024-07-14 10:44:59.063329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.139 [2024-07-14 10:44:59.063344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.139 [2024-07-14 10:44:59.063351] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.139 [2024-07-14 10:44:59.063357] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:14.139 [2024-07-14 10:44:59.063371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.139 qpair failed and we were unable to recover it. 00:36:14.139 [2024-07-14 10:44:59.073269] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.139 [2024-07-14 10:44:59.073326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.139 [2024-07-14 10:44:59.073340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.139 [2024-07-14 10:44:59.073348] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.139 [2024-07-14 10:44:59.073354] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:14.139 [2024-07-14 10:44:59.073368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.139 qpair failed and we were unable to recover it. 
00:36:14.139 [2024-07-14 10:44:59.083256] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.139 [2024-07-14 10:44:59.083318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.139 [2024-07-14 10:44:59.083331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.139 [2024-07-14 10:44:59.083344] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.139 [2024-07-14 10:44:59.083350] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:14.139 [2024-07-14 10:44:59.083365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.139 qpair failed and we were unable to recover it. 00:36:14.139 [2024-07-14 10:44:59.093289] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.139 [2024-07-14 10:44:59.093341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.139 [2024-07-14 10:44:59.093355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.139 [2024-07-14 10:44:59.093361] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.139 [2024-07-14 10:44:59.093367] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:14.139 [2024-07-14 10:44:59.093381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.139 qpair failed and we were unable to recover it. 00:36:14.139 [2024-07-14 10:44:59.103324] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.139 [2024-07-14 10:44:59.103381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.139 [2024-07-14 10:44:59.103395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.139 [2024-07-14 10:44:59.103402] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.139 [2024-07-14 10:44:59.103408] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:14.139 [2024-07-14 10:44:59.103422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.139 qpair failed and we were unable to recover it. 
00:36:14.139 [2024-07-14 10:44:59.113368] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.139 [2024-07-14 10:44:59.113462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.139 [2024-07-14 10:44:59.113478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.139 [2024-07-14 10:44:59.113485] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.139 [2024-07-14 10:44:59.113492] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:14.139 [2024-07-14 10:44:59.113507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.139 qpair failed and we were unable to recover it. 00:36:14.400 [2024-07-14 10:44:59.123389] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.400 [2024-07-14 10:44:59.123448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.400 [2024-07-14 10:44:59.123462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.400 [2024-07-14 10:44:59.123469] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.400 [2024-07-14 10:44:59.123477] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:14.400 [2024-07-14 10:44:59.123491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.400 qpair failed and we were unable to recover it. 00:36:14.400 [2024-07-14 10:44:59.133448] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.400 [2024-07-14 10:44:59.133502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.400 [2024-07-14 10:44:59.133516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.400 [2024-07-14 10:44:59.133524] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.400 [2024-07-14 10:44:59.133530] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:14.400 [2024-07-14 10:44:59.133544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.400 qpair failed and we were unable to recover it. 
00:36:14.400 [2024-07-14 10:44:59.143459] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.400 [2024-07-14 10:44:59.143541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.400 [2024-07-14 10:44:59.143556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.400 [2024-07-14 10:44:59.143563] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.400 [2024-07-14 10:44:59.143569] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:14.400 [2024-07-14 10:44:59.143583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.400 qpair failed and we were unable to recover it. 00:36:14.400 [2024-07-14 10:44:59.153519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.400 [2024-07-14 10:44:59.153578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.400 [2024-07-14 10:44:59.153592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.400 [2024-07-14 10:44:59.153599] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.400 [2024-07-14 10:44:59.153604] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:14.400 [2024-07-14 10:44:59.153618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.400 qpair failed and we were unable to recover it. 00:36:14.400 [2024-07-14 10:44:59.163510] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.400 [2024-07-14 10:44:59.163569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.400 [2024-07-14 10:44:59.163583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.400 [2024-07-14 10:44:59.163589] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.400 [2024-07-14 10:44:59.163596] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:14.400 [2024-07-14 10:44:59.163610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.400 qpair failed and we were unable to recover it. 
00:36:14.400 [2024-07-14 10:44:59.173548] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.400 [2024-07-14 10:44:59.173605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.400 [2024-07-14 10:44:59.173622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.400 [2024-07-14 10:44:59.173629] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.400 [2024-07-14 10:44:59.173635] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:14.400 [2024-07-14 10:44:59.173649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.400 qpair failed and we were unable to recover it. 00:36:14.400 [2024-07-14 10:44:59.183565] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.400 [2024-07-14 10:44:59.183623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.400 [2024-07-14 10:44:59.183636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.400 [2024-07-14 10:44:59.183644] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.400 [2024-07-14 10:44:59.183651] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:14.400 [2024-07-14 10:44:59.183665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.400 qpair failed and we were unable to recover it. 00:36:14.400 [2024-07-14 10:44:59.193523] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.400 [2024-07-14 10:44:59.193577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.400 [2024-07-14 10:44:59.193592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.400 [2024-07-14 10:44:59.193598] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.400 [2024-07-14 10:44:59.193604] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:14.400 [2024-07-14 10:44:59.193618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.400 qpair failed and we were unable to recover it. 
00:36:14.400 [2024-07-14 10:44:59.203648] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.400 [2024-07-14 10:44:59.203706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.400 [2024-07-14 10:44:59.203720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.400 [2024-07-14 10:44:59.203727] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.400 [2024-07-14 10:44:59.203733] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:14.400 [2024-07-14 10:44:59.203747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.400 qpair failed and we were unable to recover it. 00:36:14.400 [2024-07-14 10:44:59.213667] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.400 [2024-07-14 10:44:59.213723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.400 [2024-07-14 10:44:59.213738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.400 [2024-07-14 10:44:59.213745] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.400 [2024-07-14 10:44:59.213751] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:14.400 [2024-07-14 10:44:59.213767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.400 qpair failed and we were unable to recover it. 00:36:14.400 [2024-07-14 10:44:59.223683] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.400 [2024-07-14 10:44:59.223751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.400 [2024-07-14 10:44:59.223766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.400 [2024-07-14 10:44:59.223773] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.401 [2024-07-14 10:44:59.223779] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe7c000b90 00:36:14.401 [2024-07-14 10:44:59.223794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:14.401 qpair failed and we were unable to recover it. 
00:36:14.401 [2024-07-14 10:44:59.233766] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.401 [2024-07-14 10:44:59.233874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.401 [2024-07-14 10:44:59.233932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.401 [2024-07-14 10:44:59.233958] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.401 [2024-07-14 10:44:59.233981] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe84000b90 00:36:14.401 [2024-07-14 10:44:59.234033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:14.401 qpair failed and we were unable to recover it. 00:36:14.401 [2024-07-14 10:44:59.243757] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.401 [2024-07-14 10:44:59.243879] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.401 [2024-07-14 10:44:59.243937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.401 [2024-07-14 10:44:59.243961] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.401 [2024-07-14 10:44:59.243984] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b1fb60 00:36:14.401 [2024-07-14 10:44:59.244031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.401 qpair failed and we were unable to recover it. 00:36:14.401 [2024-07-14 10:44:59.253795] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.401 [2024-07-14 10:44:59.253914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.401 [2024-07-14 10:44:59.253950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.401 [2024-07-14 10:44:59.253966] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.401 [2024-07-14 10:44:59.253979] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b1fb60 00:36:14.401 [2024-07-14 10:44:59.254010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.401 qpair failed and we were unable to recover it. 
00:36:14.401 [2024-07-14 10:44:59.263952] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.401 [2024-07-14 10:44:59.264063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.401 [2024-07-14 10:44:59.264127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.401 [2024-07-14 10:44:59.264153] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.401 [2024-07-14 10:44:59.264175] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe74000b90 00:36:14.401 [2024-07-14 10:44:59.264223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.401 qpair failed and we were unable to recover it. 00:36:14.401 [2024-07-14 10:44:59.273805] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.401 [2024-07-14 10:44:59.273875] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.401 [2024-07-14 10:44:59.273904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.401 [2024-07-14 10:44:59.273918] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.401 [2024-07-14 10:44:59.273932] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe74000b90 00:36:14.401 [2024-07-14 10:44:59.273961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.401 qpair failed and we were unable to recover it. 00:36:14.401 [2024-07-14 10:44:59.274061] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:36:14.401 A controller has encountered a failure and is being reset. 00:36:14.401 [2024-07-14 10:44:59.283903] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.401 [2024-07-14 10:44:59.284033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.401 [2024-07-14 10:44:59.284081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.401 [2024-07-14 10:44:59.284104] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.401 [2024-07-14 10:44:59.284125] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe84000b90 00:36:14.401 [2024-07-14 10:44:59.284172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:14.401 qpair failed and we were unable to recover it. 00:36:14.401 Controller properly reset. 
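
The recovery at the end of this sequence - "CQ transport error -6 (No such device or address)", then "A controller has encountered a failure and is being reset", then "Controller properly reset." - is the usual host-side pattern: completion polling on the dead qpair returns a negative errno, and the application responds by resetting the controller. A rough sketch of that pattern with public SPDK calls follows; it is illustrative only, the function name is invented for the example, and error handling is reduced to the bare minimum.

    #include "spdk/nvme.h"

    /* Illustrative only: poll an I/O queue and reset the controller when the
     * transport reports a fatal error, mirroring the
     * "CQ transport error -6" -> "Controller properly reset." sequence above. */
    static void
    poll_and_recover(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_qpair *qpair)
    {
            /* Second argument 0 means "no limit on completions per call". */
            int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0);

            if (rc < 0) {
                    /* A negative return (here -6, i.e. -ENXIO) means the qpair's
                     * transport connection is gone; queued I/O will not complete. */
                    if (spdk_nvme_ctrlr_reset(ctrlr) == 0) {
                            /* Controller reset succeeded; I/O queues must be
                             * re-created, e.g. with spdk_nvme_ctrlr_alloc_io_qpair(). */
                    }
            }
    }

Once the controller has been reset, the test re-attaches and relaunches its worker threads, which is what the "Initializing NVMe Controllers ... Launching workers" block immediately below shows.
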
00:36:14.401 Initializing NVMe Controllers 00:36:14.401 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:14.401 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:14.401 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:36:14.401 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:36:14.401 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:36:14.401 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:36:14.401 Initialization complete. Launching workers. 00:36:14.401 Starting thread on core 1 00:36:14.401 Starting thread on core 2 00:36:14.401 Starting thread on core 3 00:36:14.401 Starting thread on core 0 00:36:14.401 10:44:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:36:14.401 00:36:14.401 real 0m11.389s 00:36:14.401 user 0m21.600s 00:36:14.401 sys 0m4.645s 00:36:14.401 10:44:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:14.401 10:44:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:14.401 ************************************ 00:36:14.401 END TEST nvmf_target_disconnect_tc2 00:36:14.401 ************************************ 00:36:14.660 10:44:59 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:36:14.661 10:44:59 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:36:14.661 10:44:59 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:36:14.661 10:44:59 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:36:14.661 10:44:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:14.661 10:44:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:36:14.661 10:44:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:14.661 10:44:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:36:14.661 10:44:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:14.661 10:44:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:14.661 rmmod nvme_tcp 00:36:14.661 rmmod nvme_fabrics 00:36:14.661 rmmod nvme_keyring 00:36:14.661 10:44:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:14.661 10:44:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:36:14.661 10:44:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:36:14.661 10:44:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 2630074 ']' 00:36:14.661 10:44:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 2630074 00:36:14.661 10:44:59 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 2630074 ']' 00:36:14.661 10:44:59 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 2630074 00:36:14.661 10:44:59 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:36:14.661 10:44:59 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:14.661 10:44:59 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps 
--no-headers -o comm= 2630074 00:36:14.661 10:44:59 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:36:14.661 10:44:59 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:36:14.661 10:44:59 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2630074' 00:36:14.661 killing process with pid 2630074 00:36:14.661 10:44:59 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 2630074 00:36:14.661 10:44:59 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 2630074 00:36:14.920 10:44:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:36:14.920 10:44:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:14.920 10:44:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:14.920 10:44:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:14.920 10:44:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:14.920 10:44:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:14.920 10:44:59 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:14.920 10:44:59 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:16.827 10:45:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:16.827 00:36:16.827 real 0m19.881s 00:36:16.827 user 0m49.289s 00:36:16.827 sys 0m9.368s 00:36:16.827 10:45:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:16.827 10:45:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:16.827 ************************************ 00:36:16.827 END TEST nvmf_target_disconnect 00:36:16.827 ************************************ 00:36:17.086 10:45:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:36:17.086 10:45:01 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:36:17.086 10:45:01 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:17.086 10:45:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:17.086 10:45:01 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:36:17.086 00:36:17.086 real 28m58.951s 00:36:17.086 user 73m53.747s 00:36:17.086 sys 7m56.201s 00:36:17.086 10:45:01 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:17.086 10:45:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:17.086 ************************************ 00:36:17.086 END TEST nvmf_tcp 00:36:17.086 ************************************ 00:36:17.086 10:45:01 -- common/autotest_common.sh@1142 -- # return 0 00:36:17.086 10:45:01 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:36:17.086 10:45:01 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:36:17.086 10:45:01 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:36:17.086 10:45:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:17.086 10:45:01 -- common/autotest_common.sh@10 -- # set +x 00:36:17.086 ************************************ 00:36:17.086 START TEST spdkcli_nvmf_tcp 00:36:17.086 ************************************ 00:36:17.086 10:45:01 
spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:36:17.086 * Looking for test storage... 00:36:17.086 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:36:17.086 10:45:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:36:17.086 10:45:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:36:17.086 10:45:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:36:17.086 10:45:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:17.086 10:45:02 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:36:17.086 10:45:02 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:17.086 10:45:02 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:17.086 10:45:02 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:17.086 10:45:02 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:17.086 10:45:02 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:17.086 10:45:02 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:17.087 10:45:02 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:17.087 10:45:02 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:17.087 10:45:02 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:17.087 10:45:02 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:17.087 10:45:02 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:36:17.087 10:45:02 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:36:17.087 10:45:02 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:17.087 10:45:02 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:17.087 10:45:02 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:17.087 10:45:02 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:17.087 10:45:02 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:17.087 10:45:02 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:17.087 10:45:02 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:17.087 10:45:02 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:17.087 10:45:02 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:17.087 10:45:02 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:17.087 10:45:02 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:17.087 10:45:02 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:36:17.087 10:45:02 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:17.087 10:45:02 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:36:17.087 10:45:02 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:17.087 10:45:02 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:17.087 10:45:02 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:17.087 10:45:02 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:17.087 10:45:02 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:17.087 10:45:02 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:17.087 10:45:02 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:17.087 10:45:02 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:17.087 10:45:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:36:17.087 10:45:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:36:17.087 10:45:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:36:17.087 10:45:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:36:17.087 10:45:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:17.087 10:45:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:17.346 10:45:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:36:17.346 10:45:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2631729 00:36:17.346 10:45:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2631729 00:36:17.346 10:45:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:36:17.346 10:45:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 2631729 ']' 00:36:17.346 10:45:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:17.346 10:45:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:17.346 10:45:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:17.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:17.346 10:45:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:17.346 10:45:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:17.346 [2024-07-14 10:45:02.113259] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:36:17.346 [2024-07-14 10:45:02.113312] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2631729 ] 00:36:17.346 EAL: No free 2048 kB hugepages reported on node 1 00:36:17.346 [2024-07-14 10:45:02.179477] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:36:17.346 [2024-07-14 10:45:02.221559] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:17.346 [2024-07-14 10:45:02.221561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:17.346 10:45:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:17.346 10:45:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:36:17.346 10:45:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:36:17.346 10:45:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:17.346 10:45:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:17.606 10:45:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:36:17.606 10:45:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:36:17.606 10:45:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:36:17.606 10:45:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:17.606 10:45:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:17.606 10:45:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:36:17.606 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:36:17.606 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:36:17.606 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:36:17.606 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:36:17.606 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:36:17.606 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:36:17.606 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:36:17.606 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:36:17.606 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:36:17.606 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:36:17.606 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:17.606 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:36:17.606 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:36:17.606 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:17.606 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:36:17.606 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:36:17.606 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:36:17.606 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:36:17.606 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:17.606 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:36:17.606 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:36:17.606 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:36:17.606 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:36:17.606 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:17.606 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:36:17.606 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:36:17.606 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:36:17.606 ' 00:36:20.141 [2024-07-14 10:45:04.934872] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:21.518 [2024-07-14 10:45:06.215099] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:36:24.048 [2024-07-14 10:45:08.598482] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:36:25.949 [2024-07-14 10:45:10.656863] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:36:27.325 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:36:27.325 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:36:27.325 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:36:27.325 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:36:27.325 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:36:27.325 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:36:27.325 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:36:27.325 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:36:27.325 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:36:27.325 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:36:27.325 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:36:27.325 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:27.325 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:36:27.325 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:36:27.325 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:27.325 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:36:27.325 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:36:27.325 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:36:27.325 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:36:27.325 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:27.325 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:36:27.325 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:36:27.325 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:36:27.325 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:36:27.325 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:27.325 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:36:27.325 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:36:27.325 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:36:27.584 10:45:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:36:27.584 10:45:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:27.584 10:45:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:27.584 10:45:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:36:27.584 10:45:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:27.584 10:45:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:27.584 10:45:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:36:27.584 10:45:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:36:27.843 10:45:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:36:27.843 10:45:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:36:27.843 10:45:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:36:27.843 10:45:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:27.843 10:45:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:27.843 10:45:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:36:27.843 10:45:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:27.843 10:45:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:27.843 10:45:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:36:27.843 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:36:27.843 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:36:27.843 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:36:27.843 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:36:27.843 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:36:27.843 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:36:27.843 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:36:27.843 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:36:27.843 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:36:27.843 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:36:27.843 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:36:27.843 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:36:27.843 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:36:27.843 ' 00:36:33.126 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:36:33.126 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:36:33.126 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:36:33.126 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:36:33.126 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:36:33.126 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:36:33.126 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:36:33.126 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:36:33.126 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:36:33.126 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:36:33.126 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:36:33.126 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:36:33.126 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:36:33.126 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:36:33.386 10:45:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:36:33.386 10:45:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:33.386 10:45:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:33.386 10:45:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2631729 00:36:33.386 10:45:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 2631729 ']' 00:36:33.386 10:45:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 2631729 00:36:33.386 10:45:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:36:33.386 10:45:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:33.386 10:45:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2631729 00:36:33.386 10:45:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:33.386 10:45:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:33.386 10:45:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2631729' 00:36:33.386 killing process with pid 2631729 00:36:33.386 10:45:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 2631729 00:36:33.386 10:45:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 2631729 00:36:33.645 10:45:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:36:33.645 10:45:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:36:33.645 10:45:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2631729 ']' 00:36:33.645 10:45:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2631729 00:36:33.645 10:45:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 2631729 ']' 00:36:33.645 10:45:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 2631729 00:36:33.646 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2631729) - No such process 00:36:33.646 10:45:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 2631729 is not found' 00:36:33.646 Process with pid 2631729 is not found 00:36:33.646 10:45:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:36:33.646 10:45:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:36:33.646 10:45:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:36:33.646 00:36:33.646 real 0m16.514s 00:36:33.646 user 0m35.929s 00:36:33.646 sys 0m0.826s 00:36:33.646 10:45:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:33.646 10:45:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:33.646 ************************************ 00:36:33.646 END TEST spdkcli_nvmf_tcp 00:36:33.646 ************************************ 00:36:33.646 10:45:18 -- common/autotest_common.sh@1142 -- # return 0 00:36:33.646 10:45:18 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:36:33.646 10:45:18 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:36:33.646 10:45:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:33.646 10:45:18 -- common/autotest_common.sh@10 -- # set +x 00:36:33.646 ************************************ 00:36:33.646 START TEST nvmf_identify_passthru 00:36:33.646 ************************************ 00:36:33.646 10:45:18 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:36:33.646 * Looking for test storage... 00:36:33.646 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:33.646 10:45:18 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:33.646 10:45:18 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:36:33.646 10:45:18 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:33.646 10:45:18 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:33.646 10:45:18 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:33.646 10:45:18 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:33.646 10:45:18 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:33.646 10:45:18 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:33.646 10:45:18 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:33.646 10:45:18 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:33.646 10:45:18 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:33.646 10:45:18 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:33.905 10:45:18 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:36:33.905 10:45:18 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:36:33.905 10:45:18 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:33.905 10:45:18 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:33.905 10:45:18 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:33.905 10:45:18 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:33.905 10:45:18 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:33.905 10:45:18 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:33.905 10:45:18 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:33.905 10:45:18 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:33.905 10:45:18 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:33.905 10:45:18 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:33.906 10:45:18 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:33.906 10:45:18 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:36:33.906 10:45:18 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:33.906 10:45:18 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:36:33.906 10:45:18 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:33.906 10:45:18 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:33.906 10:45:18 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:33.906 10:45:18 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:33.906 10:45:18 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:33.906 10:45:18 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:33.906 10:45:18 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:33.906 10:45:18 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:33.906 10:45:18 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:33.906 10:45:18 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:33.906 10:45:18 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:33.906 10:45:18 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:33.906 10:45:18 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:33.906 10:45:18 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:33.906 10:45:18 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:33.906 10:45:18 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:36:33.906 10:45:18 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:33.906 10:45:18 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:36:33.906 10:45:18 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:33.906 10:45:18 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:33.906 10:45:18 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:33.906 10:45:18 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:33.906 10:45:18 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:33.906 10:45:18 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:33.906 10:45:18 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:33.906 10:45:18 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:33.906 10:45:18 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:33.906 10:45:18 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:33.906 10:45:18 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:36:33.906 10:45:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:39.182 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:39.182 10:45:24 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:36:39.182 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:39.182 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:39.182 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:39.182 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:39.182 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:39.182 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:36:39.182 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:39.182 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:36:39.182 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:36:39.182 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:36:39.182 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:36:39.182 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:36:39.182 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:36:39.182 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:39.182 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:39.182 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:39.182 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:39.182 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:39.182 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:39.182 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:39.182 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:39.182 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:39.182 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:39.182 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:39.182 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:39.182 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:39.182 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:39.182 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:39.182 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:39.182 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:39.182 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:39.183 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:36:39.183 Found 0000:86:00.0 (0x8086 - 0x159b) 00:36:39.183 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:39.183 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:39.183 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:36:39.183 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:39.183 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:39.183 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:39.183 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:36:39.183 Found 0000:86:00.1 (0x8086 - 0x159b) 00:36:39.183 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:39.183 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:39.183 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:39.183 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:39.183 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:39.183 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:39.183 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:39.183 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:39.183 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:39.183 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:39.183 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:39.183 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:39.183 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:39.183 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:39.183 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:39.183 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:36:39.183 Found net devices under 0000:86:00.0: cvl_0_0 00:36:39.183 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:39.183 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:39.183 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:39.183 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:39.183 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:39.183 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:39.183 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:39.183 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:39.183 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:36:39.183 Found net devices under 0000:86:00.1: cvl_0_1 00:36:39.183 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:39.183 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:39.183 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:36:39.183 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:39.183 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
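The NIC discovery traced above reduces to globbing sysfs under each candidate PCI function and keeping only the interface names. A minimal standalone sketch of that lookup, assuming the same 0000:86:00.0 E810 port seen in this run and using hypothetical variable names:

    # sketch: list net interfaces behind a PCI function, as the discovery loop in nvmf/common.sh does
    pci=0000:86:00.0                                     # assumed E810 port from the trace above
    pci_net_devs=( "/sys/bus/pci/devices/$pci/net/"* )   # e.g. .../net/cvl_0_0
    pci_net_devs=( "${pci_net_devs[@]##*/}" )            # strip the sysfs path, keep the ifname
    echo "Found net devices under $pci: ${pci_net_devs[*]}"

On this host the glob resolves to cvl_0_0 for port 0 and cvl_0_1 for port 1, which is exactly what the "Found net devices under ..." lines report before the interfaces are split across the target network namespace.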
00:36:39.183 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:39.183 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:39.183 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:39.183 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:39.183 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:39.183 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:39.183 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:39.183 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:39.183 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:39.183 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:39.183 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:39.183 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:39.183 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:39.183 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:39.442 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:39.442 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:39.442 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:39.442 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:39.442 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:39.442 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:39.442 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:39.442 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:39.442 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:36:39.442 00:36:39.442 --- 10.0.0.2 ping statistics --- 00:36:39.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:39.442 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:36:39.442 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:39.442 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:39.442 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:36:39.442 00:36:39.442 --- 10.0.0.1 ping statistics --- 00:36:39.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:39.442 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:36:39.442 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:39.442 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:36:39.442 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:36:39.442 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:39.442 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:39.442 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:39.442 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:39.442 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:39.442 10:45:24 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:39.442 10:45:24 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:36:39.442 10:45:24 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:39.442 10:45:24 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:39.442 10:45:24 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:36:39.442 10:45:24 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:36:39.442 10:45:24 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:36:39.442 10:45:24 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:36:39.442 10:45:24 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:36:39.442 10:45:24 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:36:39.442 10:45:24 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:36:39.442 10:45:24 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:36:39.442 10:45:24 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:36:39.442 10:45:24 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:36:39.701 10:45:24 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:36:39.701 10:45:24 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:36:39.702 10:45:24 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:5e:00.0 00:36:39.702 10:45:24 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:36:39.702 10:45:24 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:36:39.702 10:45:24 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:36:39.702 10:45:24 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:36:39.702 10:45:24 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:36:39.702 EAL: No free 2048 kB hugepages reported on node 1 00:36:43.891 
10:45:28 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F0E1P0FGN 00:36:43.891 10:45:28 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:36:43.891 10:45:28 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:36:43.891 10:45:28 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:36:43.891 EAL: No free 2048 kB hugepages reported on node 1 00:36:48.078 10:45:32 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:36:48.078 10:45:32 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:36:48.078 10:45:32 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:48.078 10:45:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:48.078 10:45:32 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:36:48.078 10:45:32 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:48.078 10:45:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:48.078 10:45:32 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2639137 00:36:48.078 10:45:32 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:36:48.078 10:45:32 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:48.078 10:45:32 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2639137 00:36:48.078 10:45:32 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 2639137 ']' 00:36:48.078 10:45:32 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:48.078 10:45:32 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:48.078 10:45:32 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:48.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:48.078 10:45:32 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:48.078 10:45:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:48.078 [2024-07-14 10:45:32.884601] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:36:48.078 [2024-07-14 10:45:32.884651] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:48.078 EAL: No free 2048 kB hugepages reported on node 1 00:36:48.078 [2024-07-14 10:45:32.942269] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:48.078 [2024-07-14 10:45:32.984439] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:48.078 [2024-07-14 10:45:32.984479] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
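The serial- and model-number capture above is a plain grep/awk pipeline over spdk_nvme_identify output against the local PCIe controller. A minimal sketch of the same idea, reusing the BDF 0000:5e:00.0 from the trace, with hypothetical variable names:

    # sketch: read the local controller's serial/model the way identify_passthru.sh does
    identify=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify
    bdf=0000:5e:00.0                                   # first NVMe BDF reported by gen_nvme.sh
    serial=$("$identify" -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Serial Number:' | awk '{print $3}')
    model=$("$identify" -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Model Number:' | awk '{print $3}')
    echo "local serial=$serial model=$model"

Later in the test the same two pipelines are repeated over the fabrics listener (trtype:tcp, 10.0.0.2:4420, nqn.2016-06.io.spdk:cnode1) and compared against these local values; that comparison is the passthru check itself.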
00:36:48.078 [2024-07-14 10:45:32.984487] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:48.078 [2024-07-14 10:45:32.984494] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:48.078 [2024-07-14 10:45:32.984498] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:48.078 [2024-07-14 10:45:32.986244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:48.078 [2024-07-14 10:45:32.986298] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:36:48.078 [2024-07-14 10:45:32.986420] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:48.078 [2024-07-14 10:45:32.986420] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:36:48.364 10:45:33 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:48.364 10:45:33 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:36:48.364 10:45:33 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:36:48.364 10:45:33 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:48.364 10:45:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:48.364 INFO: Log level set to 20 00:36:48.364 INFO: Requests: 00:36:48.364 { 00:36:48.364 "jsonrpc": "2.0", 00:36:48.364 "method": "nvmf_set_config", 00:36:48.364 "id": 1, 00:36:48.364 "params": { 00:36:48.364 "admin_cmd_passthru": { 00:36:48.364 "identify_ctrlr": true 00:36:48.364 } 00:36:48.364 } 00:36:48.364 } 00:36:48.364 00:36:48.364 INFO: response: 00:36:48.364 { 00:36:48.364 "jsonrpc": "2.0", 00:36:48.364 "id": 1, 00:36:48.364 "result": true 00:36:48.364 } 00:36:48.364 00:36:48.364 10:45:33 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:48.364 10:45:33 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:36:48.364 10:45:33 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:48.364 10:45:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:48.364 INFO: Setting log level to 20 00:36:48.364 INFO: Setting log level to 20 00:36:48.364 INFO: Log level set to 20 00:36:48.364 INFO: Log level set to 20 00:36:48.364 INFO: Requests: 00:36:48.364 { 00:36:48.364 "jsonrpc": "2.0", 00:36:48.364 "method": "framework_start_init", 00:36:48.364 "id": 1 00:36:48.364 } 00:36:48.364 00:36:48.364 INFO: Requests: 00:36:48.364 { 00:36:48.364 "jsonrpc": "2.0", 00:36:48.364 "method": "framework_start_init", 00:36:48.364 "id": 1 00:36:48.364 } 00:36:48.364 00:36:48.364 [2024-07-14 10:45:33.161185] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:36:48.364 INFO: response: 00:36:48.364 { 00:36:48.364 "jsonrpc": "2.0", 00:36:48.364 "id": 1, 00:36:48.364 "result": true 00:36:48.364 } 00:36:48.364 00:36:48.364 INFO: response: 00:36:48.364 { 00:36:48.364 "jsonrpc": "2.0", 00:36:48.364 "id": 1, 00:36:48.364 "result": true 00:36:48.364 } 00:36:48.364 00:36:48.364 10:45:33 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:48.364 10:45:33 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:48.364 10:45:33 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:48.364 10:45:33 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:36:48.364 INFO: Setting log level to 40 00:36:48.364 INFO: Setting log level to 40 00:36:48.364 INFO: Setting log level to 40 00:36:48.364 [2024-07-14 10:45:33.174938] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:48.364 10:45:33 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:48.364 10:45:33 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:36:48.364 10:45:33 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:48.364 10:45:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:48.364 10:45:33 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:36:48.364 10:45:33 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:48.364 10:45:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:51.650 Nvme0n1 00:36:51.650 10:45:36 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:51.650 10:45:36 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:36:51.650 10:45:36 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:51.650 10:45:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:51.650 10:45:36 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:51.650 10:45:36 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:36:51.650 10:45:36 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:51.650 10:45:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:51.650 10:45:36 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:51.650 10:45:36 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:51.650 10:45:36 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:51.650 10:45:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:51.650 [2024-07-14 10:45:36.064737] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:51.650 10:45:36 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:51.650 10:45:36 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:36:51.650 10:45:36 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:51.650 10:45:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:51.650 [ 00:36:51.650 { 00:36:51.650 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:36:51.650 "subtype": "Discovery", 00:36:51.650 "listen_addresses": [], 00:36:51.650 "allow_any_host": true, 00:36:51.650 "hosts": [] 00:36:51.650 }, 00:36:51.650 { 00:36:51.650 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:36:51.650 "subtype": "NVMe", 00:36:51.650 "listen_addresses": [ 00:36:51.650 { 00:36:51.650 "trtype": "TCP", 00:36:51.650 "adrfam": "IPv4", 00:36:51.650 "traddr": "10.0.0.2", 00:36:51.650 "trsvcid": "4420" 00:36:51.650 } 00:36:51.650 ], 00:36:51.650 "allow_any_host": true, 00:36:51.650 "hosts": [], 00:36:51.650 "serial_number": 
"SPDK00000000000001", 00:36:51.650 "model_number": "SPDK bdev Controller", 00:36:51.650 "max_namespaces": 1, 00:36:51.650 "min_cntlid": 1, 00:36:51.650 "max_cntlid": 65519, 00:36:51.650 "namespaces": [ 00:36:51.650 { 00:36:51.650 "nsid": 1, 00:36:51.650 "bdev_name": "Nvme0n1", 00:36:51.650 "name": "Nvme0n1", 00:36:51.650 "nguid": "B3AE4792B20B42589E610B0962C4C08F", 00:36:51.650 "uuid": "b3ae4792-b20b-4258-9e61-0b0962c4c08f" 00:36:51.650 } 00:36:51.650 ] 00:36:51.650 } 00:36:51.650 ] 00:36:51.650 10:45:36 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:51.650 10:45:36 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:51.650 10:45:36 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:36:51.650 10:45:36 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:36:51.650 EAL: No free 2048 kB hugepages reported on node 1 00:36:51.650 10:45:36 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F0E1P0FGN 00:36:51.650 10:45:36 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:51.650 10:45:36 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:36:51.650 10:45:36 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:36:51.650 EAL: No free 2048 kB hugepages reported on node 1 00:36:51.650 10:45:36 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:36:51.650 10:45:36 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F0E1P0FGN '!=' BTLJ72430F0E1P0FGN ']' 00:36:51.650 10:45:36 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:36:51.650 10:45:36 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:51.650 10:45:36 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:51.650 10:45:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:51.650 10:45:36 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:51.650 10:45:36 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:36:51.650 10:45:36 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:36:51.650 10:45:36 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:51.650 10:45:36 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:36:51.650 10:45:36 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:51.650 10:45:36 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:36:51.650 10:45:36 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:51.650 10:45:36 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:51.650 rmmod nvme_tcp 00:36:51.650 rmmod nvme_fabrics 00:36:51.650 rmmod nvme_keyring 00:36:51.650 10:45:36 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:51.650 10:45:36 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:36:51.650 10:45:36 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:36:51.650 10:45:36 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 2639137 ']' 00:36:51.650 10:45:36 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 2639137 00:36:51.650 10:45:36 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 2639137 ']' 00:36:51.650 10:45:36 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 2639137 00:36:51.650 10:45:36 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:36:51.650 10:45:36 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:51.650 10:45:36 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2639137 00:36:51.650 10:45:36 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:51.650 10:45:36 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:51.650 10:45:36 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2639137' 00:36:51.650 killing process with pid 2639137 00:36:51.650 10:45:36 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 2639137 00:36:51.650 10:45:36 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 2639137 00:36:53.557 10:45:38 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:36:53.557 10:45:38 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:53.557 10:45:38 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:53.557 10:45:38 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:53.557 10:45:38 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:53.557 10:45:38 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:53.557 10:45:38 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:53.557 10:45:38 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:55.463 10:45:40 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:55.463 00:36:55.463 real 0m21.599s 00:36:55.463 user 0m27.708s 00:36:55.463 sys 0m5.092s 00:36:55.463 10:45:40 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:55.463 10:45:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:55.463 ************************************ 00:36:55.463 END TEST nvmf_identify_passthru 00:36:55.463 ************************************ 00:36:55.463 10:45:40 -- common/autotest_common.sh@1142 -- # return 0 00:36:55.463 10:45:40 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:36:55.463 10:45:40 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:55.463 10:45:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:55.463 10:45:40 -- common/autotest_common.sh@10 -- # set +x 00:36:55.463 ************************************ 00:36:55.463 START TEST nvmf_dif 00:36:55.463 ************************************ 00:36:55.463 10:45:40 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:36:55.463 * Looking for test storage... 
00:36:55.463 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:55.463 10:45:40 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:55.463 10:45:40 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:36:55.463 10:45:40 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:55.463 10:45:40 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:55.463 10:45:40 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:55.463 10:45:40 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:55.463 10:45:40 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:55.463 10:45:40 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:55.463 10:45:40 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:55.463 10:45:40 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:55.463 10:45:40 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:55.463 10:45:40 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:55.463 10:45:40 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:36:55.463 10:45:40 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:36:55.463 10:45:40 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:55.463 10:45:40 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:55.463 10:45:40 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:55.463 10:45:40 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:55.463 10:45:40 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:55.463 10:45:40 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:55.463 10:45:40 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:55.463 10:45:40 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:55.463 10:45:40 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:55.463 10:45:40 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:55.463 10:45:40 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:55.463 10:45:40 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:36:55.463 10:45:40 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:55.463 10:45:40 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:36:55.463 10:45:40 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:55.463 10:45:40 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:55.463 10:45:40 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:55.463 10:45:40 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:55.463 10:45:40 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:55.463 10:45:40 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:55.463 10:45:40 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:55.463 10:45:40 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:55.463 10:45:40 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:36:55.463 10:45:40 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:36:55.463 10:45:40 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:36:55.463 10:45:40 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:36:55.463 10:45:40 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:36:55.463 10:45:40 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:55.463 10:45:40 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:55.463 10:45:40 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:55.464 10:45:40 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:55.464 10:45:40 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:55.464 10:45:40 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:55.464 10:45:40 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:55.464 10:45:40 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:55.464 10:45:40 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:55.464 10:45:40 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:55.464 10:45:40 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:36:55.464 10:45:40 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:02.033 10:45:45 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:02.033 10:45:45 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:37:02.033 10:45:45 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:37:02.033 10:45:45 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:37:02.033 10:45:45 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:37:02.033 10:45:45 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:37:02.033 10:45:45 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:37:02.033 10:45:45 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:37:02.033 10:45:45 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:37:02.033 10:45:45 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:37:02.033 10:45:45 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:37:02.033 10:45:45 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:37:02.033 10:45:45 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:37:02.033 10:45:45 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:37:02.033 10:45:45 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:37:02.033 10:45:45 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:02.033 10:45:45 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:02.033 10:45:45 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:02.033 10:45:45 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:02.033 10:45:45 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:02.033 10:45:45 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:02.033 10:45:45 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:02.033 10:45:45 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:02.033 10:45:45 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:02.033 10:45:45 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:02.033 10:45:45 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:02.033 10:45:45 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:37:02.033 10:45:45 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:37:02.033 10:45:45 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:37:02.033 10:45:45 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:37:02.033 10:45:45 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:37:02.033 10:45:45 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:37:02.033 10:45:45 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:02.033 10:45:45 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:37:02.033 Found 0000:86:00.0 (0x8086 - 0x159b) 00:37:02.033 10:45:45 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:02.033 10:45:45 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:02.033 10:45:45 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:02.033 10:45:45 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:02.033 10:45:45 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:02.033 10:45:45 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:02.033 10:45:45 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:37:02.033 Found 0000:86:00.1 (0x8086 - 0x159b) 00:37:02.033 10:45:45 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:02.033 10:45:45 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:02.033 10:45:45 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:02.033 10:45:45 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:02.033 10:45:45 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:02.033 10:45:45 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:37:02.033 10:45:45 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:37:02.033 10:45:45 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:37:02.033 10:45:45 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:02.034 10:45:45 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:02.034 10:45:45 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:02.034 10:45:45 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
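The same NIC-classification pass now repeats for the dif test: each port's vendor/device ID is read and bucketed into the e810/x722/mlx lists before the TCP namespaces are set up. A rough standalone sketch of that classification, with the 0x8086/0x15b3 vendor IDs and the 0x1592/0x159b/0x37d2 device IDs taken from the trace and the rest hypothetical:

    # sketch: classify a port by PCI vendor/device ID, as gather_supported_nvmf_pci_devs does
    pci=0000:86:00.0
    vendor=$(cat "/sys/bus/pci/devices/$pci/vendor")    # 0x8086 (Intel) on this host
    device=$(cat "/sys/bus/pci/devices/$pci/device")    # 0x159b (E810, ice driver) on this host
    case "$vendor:$device" in
        0x8086:0x1592|0x8086:0x159b) echo "Found $pci ($vendor - $device): e810" ;;
        0x8086:0x37d2)               echo "Found $pci ($vendor - $device): x722" ;;
        0x15b3:*)                    echo "Found $pci ($vendor - $device): mlx"  ;;
        *)                           echo "Skipping $pci ($vendor - $device)"    ;;
    esac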
00:37:02.034 10:45:45 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:02.034 10:45:45 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:02.034 10:45:45 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:02.034 10:45:45 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:37:02.034 Found net devices under 0000:86:00.0: cvl_0_0 00:37:02.034 10:45:45 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:02.034 10:45:45 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:02.034 10:45:45 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:02.034 10:45:45 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:02.034 10:45:45 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:02.034 10:45:45 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:02.034 10:45:45 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:02.034 10:45:45 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:02.034 10:45:45 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:37:02.034 Found net devices under 0000:86:00.1: cvl_0_1 00:37:02.034 10:45:45 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:02.034 10:45:45 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:37:02.034 10:45:45 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:37:02.034 10:45:45 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:37:02.034 10:45:45 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:37:02.034 10:45:45 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:37:02.034 10:45:45 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:02.034 10:45:45 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:02.034 10:45:45 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:02.034 10:45:45 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:37:02.034 10:45:45 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:02.034 10:45:45 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:02.034 10:45:45 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:37:02.034 10:45:45 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:02.034 10:45:45 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:02.034 10:45:45 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:37:02.034 10:45:45 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:37:02.034 10:45:45 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:37:02.034 10:45:45 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:02.034 10:45:45 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:02.034 10:45:45 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:02.034 10:45:45 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:37:02.034 10:45:45 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:02.034 10:45:45 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:02.034 10:45:45 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:02.034 10:45:46 
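Condensed from the nvmf_tcp_init trace above: the harness splits the two ports across a network namespace, moving the target-side port cvl_0_0 into cvl_0_0_ns_spdk as 10.0.0.2/24, leaving the initiator-side port cvl_0_1 in the root namespace as 10.0.0.1/24, and opening NVMe/TCP port 4420. The equivalent commands, as printed in the trace:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

The two ping checks that follow confirm reachability in both directions before the target is started.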
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:37:02.034 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:02.034 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.314 ms 00:37:02.034 00:37:02.034 --- 10.0.0.2 ping statistics --- 00:37:02.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:02.034 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:37:02.034 10:45:46 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:02.034 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:02.034 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:37:02.034 00:37:02.034 --- 10.0.0.1 ping statistics --- 00:37:02.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:02.034 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:37:02.034 10:45:46 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:02.034 10:45:46 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:37:02.034 10:45:46 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:37:02.034 10:45:46 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:03.939 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:37:03.939 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:37:03.939 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:37:03.939 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:37:03.939 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:37:03.939 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:37:03.939 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:37:03.939 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:37:03.939 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:37:03.939 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:37:03.939 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:37:03.939 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:37:03.939 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:37:03.939 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:37:03.939 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:37:03.939 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:37:03.939 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:37:03.939 10:45:48 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:03.939 10:45:48 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:37:03.939 10:45:48 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:37:03.939 10:45:48 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:03.939 10:45:48 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:37:03.939 10:45:48 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:37:03.939 10:45:48 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:37:03.939 10:45:48 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:37:03.939 10:45:48 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:37:03.939 10:45:48 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:03.939 10:45:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:03.939 10:45:48 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=2644593 00:37:03.939 10:45:48 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 2644593 00:37:03.939 10:45:48 nvmf_dif -- nvmf/common.sh@480 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:37:03.939 10:45:48 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 2644593 ']' 00:37:03.939 10:45:48 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:03.939 10:45:48 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:03.939 10:45:48 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:03.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:03.939 10:45:48 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:03.939 10:45:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:04.198 [2024-07-14 10:45:48.921056] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:37:04.198 [2024-07-14 10:45:48.921099] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:04.198 EAL: No free 2048 kB hugepages reported on node 1 00:37:04.198 [2024-07-14 10:45:48.992630] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:04.198 [2024-07-14 10:45:49.033260] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:04.198 [2024-07-14 10:45:49.033299] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:04.198 [2024-07-14 10:45:49.033306] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:04.198 [2024-07-14 10:45:49.033312] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:04.198 [2024-07-14 10:45:49.033318] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
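A minimal sketch of the target bring-up happening here: nvmf_tgt is started inside the target namespace, the script waits for the RPC socket, and the TCP transport is created with DIF insert/strip enabled. rpc.py stands in for SPDK's scripts/rpc.py, which the rpc_cmd wrapper used in this trace is assumed to forward to; the nvmf_tgt flags and transport options are the ones shown in the log.

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
# wait until the target answers on its default RPC socket
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done
# TCP transport with DIF insert/strip, matching NVMF_TRANSPORT_OPTS in this trace
./scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip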
00:37:04.198 [2024-07-14 10:45:49.033334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:04.767 10:45:49 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:04.767 10:45:49 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:37:04.767 10:45:49 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:04.767 10:45:49 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:04.767 10:45:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:05.027 10:45:49 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:05.027 10:45:49 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:37:05.027 10:45:49 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:37:05.027 10:45:49 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:05.027 10:45:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:05.027 [2024-07-14 10:45:49.759622] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:05.027 10:45:49 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:05.027 10:45:49 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:37:05.027 10:45:49 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:37:05.027 10:45:49 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:05.027 10:45:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:05.027 ************************************ 00:37:05.027 START TEST fio_dif_1_default 00:37:05.027 ************************************ 00:37:05.027 10:45:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:37:05.027 10:45:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:37:05.027 10:45:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:37:05.027 10:45:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:37:05.027 10:45:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:37:05.027 10:45:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:37:05.027 10:45:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:05.027 10:45:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:05.027 10:45:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:05.027 bdev_null0 00:37:05.027 10:45:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:05.027 10:45:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:05.027 10:45:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:05.027 10:45:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:05.027 10:45:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:05.027 10:45:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:05.027 10:45:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:05.027 10:45:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:05.027 10:45:49 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:05.027 10:45:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:05.027 10:45:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:05.027 10:45:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:05.027 [2024-07-14 10:45:49.831904] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:05.027 10:45:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:05.027 10:45:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:37:05.027 10:45:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:37:05.027 10:45:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:05.027 10:45:49 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:37:05.027 10:45:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:05.027 10:45:49 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:37:05.027 10:45:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:37:05.027 10:45:49 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:05.027 10:45:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:05.027 10:45:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:37:05.027 10:45:49 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:05.027 { 00:37:05.027 "params": { 00:37:05.027 "name": "Nvme$subsystem", 00:37:05.027 "trtype": "$TEST_TRANSPORT", 00:37:05.027 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:05.027 "adrfam": "ipv4", 00:37:05.027 "trsvcid": "$NVMF_PORT", 00:37:05.027 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:05.027 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:05.027 "hdgst": ${hdgst:-false}, 00:37:05.027 "ddgst": ${ddgst:-false} 00:37:05.027 }, 00:37:05.027 "method": "bdev_nvme_attach_controller" 00:37:05.027 } 00:37:05.027 EOF 00:37:05.027 )") 00:37:05.027 10:45:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:37:05.027 10:45:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:37:05.027 10:45:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:05.027 10:45:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:37:05.027 10:45:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:05.027 10:45:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:37:05.027 10:45:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:37:05.027 10:45:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:05.027 10:45:49 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:37:05.027 10:45:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:37:05.027 10:45:49 nvmf_dif.fio_dif_1_default 
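The subsystem provisioning traced above can be summarized as a small helper. The RPC names and flags are taken verbatim from the trace; the helper itself is hypothetical (dif.sh's create_subsystem does the same work), and the later tests in this log repeat the sequence for cnode1/cnode2 and for DIF types 2 and 3.

create_dif_subsystem() {   # hypothetical helper mirroring dif.sh create_subsystem
    local i=$1 dif_type=$2
    ./scripts/rpc.py bdev_null_create "bdev_null$i" 64 512 --md-size 16 --dif-type "$dif_type"
    ./scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
        --serial-number "53313233-$i" --allow-any-host
    ./scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "bdev_null$i"
    ./scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.2 -s 4420
}
create_dif_subsystem 0 1   # the fio_dif_1_default case: one namespace, DIF type 1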
-- target/dif.sh@72 -- # (( file <= files )) 00:37:05.027 10:45:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:05.027 10:45:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:37:05.027 10:45:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:05.027 10:45:49 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:37:05.027 10:45:49 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:37:05.027 10:45:49 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:37:05.027 "params": { 00:37:05.027 "name": "Nvme0", 00:37:05.027 "trtype": "tcp", 00:37:05.027 "traddr": "10.0.0.2", 00:37:05.027 "adrfam": "ipv4", 00:37:05.027 "trsvcid": "4420", 00:37:05.027 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:05.027 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:05.027 "hdgst": false, 00:37:05.027 "ddgst": false 00:37:05.027 }, 00:37:05.027 "method": "bdev_nvme_attach_controller" 00:37:05.027 }' 00:37:05.027 10:45:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:37:05.027 10:45:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:37:05.027 10:45:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:05.027 10:45:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:05.027 10:45:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:37:05.027 10:45:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:05.027 10:45:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:37:05.027 10:45:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:37:05.027 10:45:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:05.027 10:45:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:05.286 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:37:05.286 fio-3.35 00:37:05.286 Starting 1 thread 00:37:05.286 EAL: No free 2048 kB hugepages reported on node 1 00:37:17.496 00:37:17.496 filename0: (groupid=0, jobs=1): err= 0: pid=2644974: Sun Jul 14 10:46:00 2024 00:37:17.496 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10011msec) 00:37:17.496 slat (nsec): min=5566, max=30995, avg=6377.37, stdev=1741.70 00:37:17.496 clat (usec): min=40828, max=45533, avg=41009.02, stdev=308.45 00:37:17.496 lat (usec): min=40834, max=45558, avg=41015.40, stdev=308.89 00:37:17.496 clat percentiles (usec): 00:37:17.496 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:37:17.496 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:37:17.496 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:37:17.496 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45351], 99.95th=[45351], 00:37:17.496 | 99.99th=[45351] 00:37:17.496 bw ( KiB/s): min= 384, max= 416, per=99.49%, avg=388.80, stdev=11.72, samples=20 00:37:17.496 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:37:17.496 
lat (msec) : 50=100.00% 00:37:17.496 cpu : usr=94.16%, sys=5.58%, ctx=59, majf=0, minf=169 00:37:17.496 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:17.496 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.496 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.496 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:17.496 latency : target=0, window=0, percentile=100.00%, depth=4 00:37:17.496 00:37:17.496 Run status group 0 (all jobs): 00:37:17.496 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10011-10011msec 00:37:17.496 10:46:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:37:17.496 10:46:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:37:17.496 10:46:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:37:17.496 10:46:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:17.496 10:46:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:37:17.496 10:46:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:17.496 10:46:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:17.496 10:46:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:17.496 10:46:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:17.496 10:46:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:17.496 10:46:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:17.496 10:46:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:17.496 10:46:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:17.496 00:37:17.496 real 0m11.074s 00:37:17.496 user 0m15.733s 00:37:17.496 sys 0m0.848s 00:37:17.496 10:46:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:17.496 10:46:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:17.496 ************************************ 00:37:17.496 END TEST fio_dif_1_default 00:37:17.496 ************************************ 00:37:17.496 10:46:00 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:37:17.496 10:46:00 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:37:17.496 10:46:00 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:37:17.496 10:46:00 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:17.496 10:46:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:17.496 ************************************ 00:37:17.496 START TEST fio_dif_1_multi_subsystems 00:37:17.496 ************************************ 00:37:17.496 10:46:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:37:17.496 10:46:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:37:17.496 10:46:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:37:17.496 10:46:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:37:17.496 10:46:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:37:17.496 10:46:00 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:37:17.496 10:46:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:37:17.496 10:46:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:17.496 10:46:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:17.496 10:46:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:17.496 bdev_null0 00:37:17.496 10:46:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:17.496 10:46:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:17.496 10:46:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:17.496 10:46:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:17.496 10:46:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:17.496 10:46:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:17.496 10:46:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:17.496 10:46:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:17.496 10:46:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:17.497 10:46:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:17.497 10:46:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:17.497 10:46:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:17.497 [2024-07-14 10:46:00.974895] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:17.497 10:46:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:17.497 10:46:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:37:17.497 10:46:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:37:17.497 10:46:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:37:17.497 10:46:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:37:17.497 10:46:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:17.497 10:46:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:17.497 bdev_null1 00:37:17.497 10:46:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:17.497 10:46:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:17.497 10:46:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:17.497 10:46:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:17.497 10:46:00 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:17.497 10:46:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:17.497 10:46:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:17.497 10:46:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:17.497 10:46:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:17.497 10:46:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:17.497 10:46:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:17.497 10:46:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:17.497 10:46:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:17.497 10:46:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:37:17.497 10:46:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:37:17.497 10:46:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:37:17.497 10:46:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:37:17.497 10:46:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:17.497 10:46:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:37:17.497 10:46:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:17.497 10:46:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:17.497 10:46:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:37:17.497 10:46:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:17.497 { 00:37:17.497 "params": { 00:37:17.497 "name": "Nvme$subsystem", 00:37:17.497 "trtype": "$TEST_TRANSPORT", 00:37:17.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:17.497 "adrfam": "ipv4", 00:37:17.497 "trsvcid": "$NVMF_PORT", 00:37:17.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:17.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:17.497 "hdgst": ${hdgst:-false}, 00:37:17.497 "ddgst": ${ddgst:-false} 00:37:17.497 }, 00:37:17.497 "method": "bdev_nvme_attach_controller" 00:37:17.497 } 00:37:17.497 EOF 00:37:17.497 )") 00:37:17.497 10:46:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:37:17.497 10:46:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:37:17.497 10:46:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:17.497 10:46:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:37:17.497 10:46:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:37:17.497 10:46:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:17.497 
10:46:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:37:17.497 10:46:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:37:17.497 10:46:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:17.497 10:46:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:37:17.497 10:46:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:37:17.497 10:46:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:17.497 10:46:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:37:17.497 10:46:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:37:17.497 10:46:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:37:17.497 10:46:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:17.497 10:46:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:17.497 10:46:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:17.497 { 00:37:17.497 "params": { 00:37:17.497 "name": "Nvme$subsystem", 00:37:17.497 "trtype": "$TEST_TRANSPORT", 00:37:17.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:17.497 "adrfam": "ipv4", 00:37:17.497 "trsvcid": "$NVMF_PORT", 00:37:17.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:17.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:17.497 "hdgst": ${hdgst:-false}, 00:37:17.497 "ddgst": ${ddgst:-false} 00:37:17.497 }, 00:37:17.497 "method": "bdev_nvme_attach_controller" 00:37:17.497 } 00:37:17.497 EOF 00:37:17.497 )") 00:37:17.497 10:46:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:37:17.497 10:46:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:37:17.497 10:46:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:37:17.497 10:46:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
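What the fio_bdev invocation above amounts to: fio is run with SPDK's bdev fio plugin preloaded, given a JSON bdev configuration (the bdev_nvme_attach_controller entries printed just below) and the generated job file; in this trace both are handed over as /dev/fd/62 and /dev/fd/61 via process substitution. A simplified sketch using ordinary files, with the SPDK path taken from this job and the file names as placeholders:

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# bdev.json holds the printed NVMe-oF attach config, job.fio the gen_fio_conf output
LD_PRELOAD=$SPDK_DIR/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json job.fio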
00:37:17.497 10:46:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:37:17.497 10:46:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:37:17.497 "params": { 00:37:17.497 "name": "Nvme0", 00:37:17.497 "trtype": "tcp", 00:37:17.497 "traddr": "10.0.0.2", 00:37:17.497 "adrfam": "ipv4", 00:37:17.497 "trsvcid": "4420", 00:37:17.497 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:17.497 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:17.497 "hdgst": false, 00:37:17.497 "ddgst": false 00:37:17.497 }, 00:37:17.497 "method": "bdev_nvme_attach_controller" 00:37:17.497 },{ 00:37:17.497 "params": { 00:37:17.497 "name": "Nvme1", 00:37:17.497 "trtype": "tcp", 00:37:17.497 "traddr": "10.0.0.2", 00:37:17.497 "adrfam": "ipv4", 00:37:17.497 "trsvcid": "4420", 00:37:17.497 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:17.497 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:17.497 "hdgst": false, 00:37:17.497 "ddgst": false 00:37:17.497 }, 00:37:17.497 "method": "bdev_nvme_attach_controller" 00:37:17.497 }' 00:37:17.497 10:46:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:37:17.497 10:46:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:37:17.497 10:46:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:17.497 10:46:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:17.497 10:46:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:37:17.497 10:46:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:17.497 10:46:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:37:17.497 10:46:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:37:17.497 10:46:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:17.497 10:46:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:17.497 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:37:17.497 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:37:17.497 fio-3.35 00:37:17.497 Starting 2 threads 00:37:17.497 EAL: No free 2048 kB hugepages reported on node 1 00:37:27.511 00:37:27.511 filename0: (groupid=0, jobs=1): err= 0: pid=2646937: Sun Jul 14 10:46:11 2024 00:37:27.511 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10008msec) 00:37:27.511 slat (nsec): min=6003, max=30448, avg=7830.14, stdev=2712.81 00:37:27.511 clat (usec): min=40830, max=42036, avg=40990.57, stdev=123.43 00:37:27.511 lat (usec): min=40836, max=42058, avg=40998.40, stdev=123.77 00:37:27.511 clat percentiles (usec): 00:37:27.511 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:37:27.511 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:37:27.511 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:37:27.511 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:37:27.511 | 99.99th=[42206] 
00:37:27.511 bw ( KiB/s): min= 384, max= 416, per=49.74%, avg=388.80, stdev=11.72, samples=20 00:37:27.511 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:37:27.511 lat (msec) : 50=100.00% 00:37:27.511 cpu : usr=97.94%, sys=1.81%, ctx=10, majf=0, minf=176 00:37:27.511 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:27.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:27.511 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:27.511 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:27.511 latency : target=0, window=0, percentile=100.00%, depth=4 00:37:27.511 filename1: (groupid=0, jobs=1): err= 0: pid=2646938: Sun Jul 14 10:46:11 2024 00:37:27.511 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10010msec) 00:37:27.511 slat (nsec): min=6019, max=27572, avg=7844.69, stdev=2687.42 00:37:27.511 clat (usec): min=40819, max=42223, avg=40998.33, stdev=153.99 00:37:27.511 lat (usec): min=40826, max=42249, avg=41006.18, stdev=154.60 00:37:27.511 clat percentiles (usec): 00:37:27.511 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:37:27.511 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:37:27.511 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:37:27.511 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:37:27.511 | 99.99th=[42206] 00:37:27.511 bw ( KiB/s): min= 384, max= 416, per=49.74%, avg=388.80, stdev=11.72, samples=20 00:37:27.511 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:37:27.511 lat (msec) : 50=100.00% 00:37:27.511 cpu : usr=97.83%, sys=1.92%, ctx=6, majf=0, minf=68 00:37:27.511 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:27.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:27.511 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:27.511 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:27.511 latency : target=0, window=0, percentile=100.00%, depth=4 00:37:27.511 00:37:27.511 Run status group 0 (all jobs): 00:37:27.511 READ: bw=780KiB/s (799kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=7808KiB (7995kB), run=10008-10010msec 00:37:27.511 10:46:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:37:27.511 10:46:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:37:27.511 10:46:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:37:27.511 10:46:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:27.511 10:46:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:37:27.511 10:46:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:27.511 10:46:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:27.511 10:46:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:27.511 10:46:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:27.511 10:46:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:27.511 10:46:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:27.511 10:46:12 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:27.511 10:46:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:27.511 10:46:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:37:27.511 10:46:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:27.511 10:46:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:37:27.511 10:46:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:27.511 10:46:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:27.511 10:46:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:27.511 10:46:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:27.511 10:46:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:27.511 10:46:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:27.511 10:46:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:27.511 10:46:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:27.511 00:37:27.511 real 0m11.229s 00:37:27.511 user 0m26.439s 00:37:27.511 sys 0m0.705s 00:37:27.511 10:46:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:27.511 10:46:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:27.511 ************************************ 00:37:27.511 END TEST fio_dif_1_multi_subsystems 00:37:27.511 ************************************ 00:37:27.511 10:46:12 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:37:27.511 10:46:12 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:37:27.511 10:46:12 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:37:27.511 10:46:12 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:27.511 10:46:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:27.511 ************************************ 00:37:27.511 START TEST fio_dif_rand_params 00:37:27.511 ************************************ 00:37:27.511 10:46:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:37:27.511 10:46:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:37:27.511 10:46:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:37:27.511 10:46:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:37:27.511 10:46:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:37:27.511 10:46:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:37:27.511 10:46:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:37:27.512 10:46:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:37:27.512 10:46:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:37:27.512 10:46:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:27.512 10:46:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:27.512 10:46:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 
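The teardown traced just above (destroy_subsystems 0 1) is the mirror image of the provisioning: each test deletes its subsystems and null bdevs before the next parameter set is created. Condensed, using the same RPC names as the trace:

for i in 0 1; do
    ./scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
    ./scripts/rpc.py bdev_null_delete "bdev_null$i"
done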
00:37:27.512 10:46:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:27.512 10:46:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:37:27.512 10:46:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:27.512 10:46:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:27.512 bdev_null0 00:37:27.512 10:46:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:27.512 10:46:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:27.512 10:46:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:27.512 10:46:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:27.512 10:46:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:27.512 10:46:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:27.512 10:46:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:27.512 10:46:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:27.512 10:46:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:27.512 10:46:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:27.512 10:46:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:27.512 10:46:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:27.512 [2024-07-14 10:46:12.273155] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:27.512 10:46:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:27.512 10:46:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:37:27.512 10:46:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:37:27.512 10:46:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:27.512 10:46:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:37:27.512 10:46:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:27.512 10:46:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:37:27.512 10:46:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:27.512 10:46:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:27.512 10:46:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:27.512 10:46:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:27.512 { 00:37:27.512 "params": { 00:37:27.512 "name": "Nvme$subsystem", 00:37:27.512 "trtype": "$TEST_TRANSPORT", 00:37:27.512 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:27.512 "adrfam": "ipv4", 00:37:27.512 "trsvcid": "$NVMF_PORT", 00:37:27.512 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:27.512 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:27.512 "hdgst": ${hdgst:-false}, 00:37:27.512 "ddgst": ${ddgst:-false} 00:37:27.512 }, 00:37:27.512 "method": "bdev_nvme_attach_controller" 00:37:27.512 } 00:37:27.512 EOF 00:37:27.512 )") 00:37:27.512 10:46:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:37:27.512 10:46:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:27.512 10:46:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:27.512 10:46:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:27.512 10:46:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:37:27.512 10:46:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:27.512 10:46:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:37:27.512 10:46:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:37:27.512 10:46:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:27.512 10:46:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:37:27.512 10:46:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:27.512 10:46:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:27.512 10:46:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:27.512 10:46:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:37:27.512 10:46:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:27.512 10:46:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:37:27.512 10:46:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:37:27.512 10:46:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:37:27.512 "params": { 00:37:27.512 "name": "Nvme0", 00:37:27.512 "trtype": "tcp", 00:37:27.512 "traddr": "10.0.0.2", 00:37:27.512 "adrfam": "ipv4", 00:37:27.512 "trsvcid": "4420", 00:37:27.512 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:27.512 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:27.512 "hdgst": false, 00:37:27.512 "ddgst": false 00:37:27.512 }, 00:37:27.512 "method": "bdev_nvme_attach_controller" 00:37:27.512 }' 00:37:27.512 10:46:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:37:27.512 10:46:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:37:27.512 10:46:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:27.512 10:46:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:27.512 10:46:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:37:27.512 10:46:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:27.512 10:46:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:37:27.512 10:46:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:37:27.512 10:46:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:27.512 10:46:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:27.770 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:37:27.770 ... 
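An approximate reconstruction of the job file that gen_fio_conf feeds to fio for this run, based on the parameters set earlier in fio_dif_rand_params (NULL_DIF=3, bs=128k, numjobs=3, iodepth=3, runtime=5) and the job line printed above; the exact option spelling and the bdev name Nvme0n1 are assumptions, not copied from the trace.

cat > /tmp/fio_dif_rand_params.job <<'EOF'
[global]
thread=1
ioengine=spdk_bdev
rw=randread
bs=128k
iodepth=3
numjobs=3
time_based=1
runtime=5
[filename0]
filename=Nvme0n1
EOF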
00:37:27.770 fio-3.35 00:37:27.770 Starting 3 threads 00:37:27.770 EAL: No free 2048 kB hugepages reported on node 1 00:37:34.331 00:37:34.331 filename0: (groupid=0, jobs=1): err= 0: pid=2648899: Sun Jul 14 10:46:18 2024 00:37:34.331 read: IOPS=292, BW=36.5MiB/s (38.3MB/s)(183MiB/5005msec) 00:37:34.331 slat (nsec): min=6276, max=26245, avg=10621.69, stdev=2486.25 00:37:34.331 clat (usec): min=3436, max=50456, avg=10249.17, stdev=9240.22 00:37:34.331 lat (usec): min=3442, max=50468, avg=10259.79, stdev=9240.25 00:37:34.331 clat percentiles (usec): 00:37:34.331 | 1.00th=[ 3851], 5.00th=[ 4424], 10.00th=[ 5735], 20.00th=[ 6652], 00:37:34.331 | 30.00th=[ 7373], 40.00th=[ 7963], 50.00th=[ 8455], 60.00th=[ 8848], 00:37:34.331 | 70.00th=[ 9503], 80.00th=[10028], 90.00th=[10683], 95.00th=[46924], 00:37:34.331 | 99.00th=[49021], 99.50th=[49546], 99.90th=[50070], 99.95th=[50594], 00:37:34.331 | 99.99th=[50594] 00:37:34.331 bw ( KiB/s): min=25600, max=47872, per=32.41%, avg=37401.60, stdev=7012.35, samples=10 00:37:34.331 iops : min= 200, max= 374, avg=292.20, stdev=54.78, samples=10 00:37:34.331 lat (msec) : 4=1.85%, 10=79.15%, 20=13.67%, 50=5.13%, 100=0.21% 00:37:34.331 cpu : usr=95.32%, sys=4.38%, ctx=10, majf=0, minf=75 00:37:34.331 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:34.331 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.331 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.331 issued rwts: total=1463,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:34.331 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:34.331 filename0: (groupid=0, jobs=1): err= 0: pid=2648900: Sun Jul 14 10:46:18 2024 00:37:34.331 read: IOPS=310, BW=38.8MiB/s (40.7MB/s)(196MiB/5044msec) 00:37:34.331 slat (nsec): min=6417, max=25462, avg=10920.85, stdev=2460.05 00:37:34.331 clat (usec): min=3642, max=89392, avg=9623.41, stdev=7789.75 00:37:34.331 lat (usec): min=3650, max=89405, avg=9634.33, stdev=7789.99 00:37:34.331 clat percentiles (usec): 00:37:34.331 | 1.00th=[ 3785], 5.00th=[ 4080], 10.00th=[ 5080], 20.00th=[ 6325], 00:37:34.331 | 30.00th=[ 6980], 40.00th=[ 7898], 50.00th=[ 8717], 60.00th=[ 9372], 00:37:34.331 | 70.00th=[ 9765], 80.00th=[10421], 90.00th=[11338], 95.00th=[12125], 00:37:34.331 | 99.00th=[48497], 99.50th=[49546], 99.90th=[89654], 99.95th=[89654], 00:37:34.331 | 99.99th=[89654] 00:37:34.331 bw ( KiB/s): min=27648, max=49664, per=34.70%, avg=40038.40, stdev=7646.93, samples=10 00:37:34.331 iops : min= 216, max= 388, avg=312.80, stdev=59.74, samples=10 00:37:34.331 lat (msec) : 4=4.60%, 10=68.14%, 20=24.01%, 50=2.81%, 100=0.45% 00:37:34.331 cpu : usr=94.86%, sys=4.84%, ctx=9, majf=0, minf=75 00:37:34.331 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:34.331 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.331 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.331 issued rwts: total=1566,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:34.332 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:34.332 filename0: (groupid=0, jobs=1): err= 0: pid=2648901: Sun Jul 14 10:46:18 2024 00:37:34.332 read: IOPS=301, BW=37.6MiB/s (39.5MB/s)(190MiB/5045msec) 00:37:34.332 slat (nsec): min=6322, max=25784, avg=10739.35, stdev=2464.36 00:37:34.332 clat (usec): min=3395, max=51056, avg=9923.45, stdev=8081.02 00:37:34.332 lat (usec): min=3402, max=51068, avg=9934.18, stdev=8081.04 00:37:34.332 clat percentiles 
(usec): 00:37:34.332 | 1.00th=[ 3785], 5.00th=[ 4555], 10.00th=[ 5735], 20.00th=[ 6390], 00:37:34.332 | 30.00th=[ 6980], 40.00th=[ 8094], 50.00th=[ 8717], 60.00th=[ 9241], 00:37:34.332 | 70.00th=[ 9765], 80.00th=[10421], 90.00th=[11338], 95.00th=[12387], 00:37:34.332 | 99.00th=[49021], 99.50th=[50070], 99.90th=[51119], 99.95th=[51119], 00:37:34.332 | 99.99th=[51119] 00:37:34.332 bw ( KiB/s): min=28672, max=47872, per=33.66%, avg=38835.20, stdev=6193.64, samples=10 00:37:34.332 iops : min= 224, max= 374, avg=303.40, stdev=48.39, samples=10 00:37:34.332 lat (msec) : 4=2.83%, 10=71.30%, 20=21.79%, 50=3.55%, 100=0.53% 00:37:34.332 cpu : usr=95.18%, sys=4.54%, ctx=8, majf=0, minf=95 00:37:34.332 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:34.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.332 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.332 issued rwts: total=1519,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:34.332 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:34.332 00:37:34.332 Run status group 0 (all jobs): 00:37:34.332 READ: bw=113MiB/s (118MB/s), 36.5MiB/s-38.8MiB/s (38.3MB/s-40.7MB/s), io=569MiB (596MB), run=5005-5045msec 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:34.332 bdev_null0 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:34.332 [2024-07-14 10:46:18.459687] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:34.332 bdev_null1 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:34.332 bdev_null2 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:37:34.332 10:46:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:37:34.333 10:46:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:37:34.333 10:46:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:34.333 10:46:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:37:34.333 10:46:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:34.333 10:46:18 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:34.333 10:46:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:34.333 10:46:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:34.333 { 00:37:34.333 "params": { 00:37:34.333 "name": "Nvme$subsystem", 00:37:34.333 "trtype": "$TEST_TRANSPORT", 00:37:34.333 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:34.333 "adrfam": "ipv4", 00:37:34.333 "trsvcid": "$NVMF_PORT", 00:37:34.333 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:34.333 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:34.333 "hdgst": ${hdgst:-false}, 00:37:34.333 "ddgst": ${ddgst:-false} 00:37:34.333 }, 00:37:34.333 "method": "bdev_nvme_attach_controller" 00:37:34.333 } 00:37:34.333 EOF 00:37:34.333 )") 00:37:34.333 10:46:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:37:34.333 10:46:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:34.333 10:46:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:34.333 10:46:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:34.333 10:46:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:37:34.333 10:46:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:34.333 10:46:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:37:34.333 10:46:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:37:34.333 10:46:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:34.333 10:46:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:37:34.333 10:46:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:34.333 10:46:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:34.333 10:46:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:34.333 10:46:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:37:34.333 10:46:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:34.333 10:46:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:34.333 10:46:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:34.333 10:46:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:34.333 { 00:37:34.333 "params": { 00:37:34.333 "name": "Nvme$subsystem", 00:37:34.333 "trtype": "$TEST_TRANSPORT", 00:37:34.333 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:34.333 "adrfam": "ipv4", 00:37:34.333 "trsvcid": "$NVMF_PORT", 00:37:34.333 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:34.333 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:34.333 "hdgst": ${hdgst:-false}, 00:37:34.333 "ddgst": ${ddgst:-false} 00:37:34.333 }, 00:37:34.333 "method": "bdev_nvme_attach_controller" 00:37:34.333 } 00:37:34.333 EOF 00:37:34.333 )") 00:37:34.333 10:46:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:34.333 10:46:18 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:37:34.333 10:46:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:34.333 10:46:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:34.333 10:46:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:34.333 10:46:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:34.333 10:46:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:34.333 10:46:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:34.333 { 00:37:34.333 "params": { 00:37:34.333 "name": "Nvme$subsystem", 00:37:34.333 "trtype": "$TEST_TRANSPORT", 00:37:34.333 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:34.333 "adrfam": "ipv4", 00:37:34.333 "trsvcid": "$NVMF_PORT", 00:37:34.333 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:34.333 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:34.333 "hdgst": ${hdgst:-false}, 00:37:34.333 "ddgst": ${ddgst:-false} 00:37:34.333 }, 00:37:34.333 "method": "bdev_nvme_attach_controller" 00:37:34.333 } 00:37:34.333 EOF 00:37:34.333 )") 00:37:34.333 10:46:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:37:34.333 10:46:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:37:34.333 10:46:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:37:34.333 10:46:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:37:34.333 "params": { 00:37:34.333 "name": "Nvme0", 00:37:34.333 "trtype": "tcp", 00:37:34.333 "traddr": "10.0.0.2", 00:37:34.333 "adrfam": "ipv4", 00:37:34.333 "trsvcid": "4420", 00:37:34.333 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:34.333 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:34.333 "hdgst": false, 00:37:34.333 "ddgst": false 00:37:34.333 }, 00:37:34.333 "method": "bdev_nvme_attach_controller" 00:37:34.333 },{ 00:37:34.333 "params": { 00:37:34.333 "name": "Nvme1", 00:37:34.333 "trtype": "tcp", 00:37:34.333 "traddr": "10.0.0.2", 00:37:34.333 "adrfam": "ipv4", 00:37:34.333 "trsvcid": "4420", 00:37:34.333 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:34.333 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:34.333 "hdgst": false, 00:37:34.333 "ddgst": false 00:37:34.333 }, 00:37:34.333 "method": "bdev_nvme_attach_controller" 00:37:34.333 },{ 00:37:34.333 "params": { 00:37:34.333 "name": "Nvme2", 00:37:34.333 "trtype": "tcp", 00:37:34.333 "traddr": "10.0.0.2", 00:37:34.333 "adrfam": "ipv4", 00:37:34.333 "trsvcid": "4420", 00:37:34.333 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:37:34.333 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:37:34.333 "hdgst": false, 00:37:34.333 "ddgst": false 00:37:34.333 }, 00:37:34.333 "method": "bdev_nvme_attach_controller" 00:37:34.333 }' 00:37:34.333 10:46:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:37:34.333 10:46:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:37:34.333 10:46:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:34.333 10:46:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:34.333 10:46:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:37:34.333 10:46:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:34.333 
10:46:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:37:34.333 10:46:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:37:34.333 10:46:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:34.333 10:46:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:34.333 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:34.333 ... 00:37:34.333 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:34.333 ... 00:37:34.334 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:34.334 ... 00:37:34.334 fio-3.35 00:37:34.334 Starting 24 threads 00:37:34.334 EAL: No free 2048 kB hugepages reported on node 1 00:37:46.541 00:37:46.541 filename0: (groupid=0, jobs=1): err= 0: pid=2649947: Sun Jul 14 10:46:29 2024 00:37:46.541 read: IOPS=70, BW=282KiB/s (289kB/s)(2864KiB/10150msec) 00:37:46.541 slat (nsec): min=6821, max=25006, avg=9136.14, stdev=2873.01 00:37:46.541 clat (msec): min=51, max=360, avg=225.63, stdev=50.20 00:37:46.541 lat (msec): min=51, max=360, avg=225.64, stdev=50.20 00:37:46.541 clat percentiles (msec): 00:37:46.541 | 1.00th=[ 52], 5.00th=[ 153], 10.00th=[ 176], 20.00th=[ 213], 00:37:46.541 | 30.00th=[ 228], 40.00th=[ 232], 50.00th=[ 234], 60.00th=[ 234], 00:37:46.541 | 70.00th=[ 236], 80.00th=[ 236], 90.00th=[ 279], 95.00th=[ 317], 00:37:46.541 | 99.00th=[ 359], 99.50th=[ 359], 99.90th=[ 359], 99.95th=[ 359], 00:37:46.541 | 99.99th=[ 359] 00:37:46.541 bw ( KiB/s): min= 224, max= 496, per=4.44%, avg=280.00, stdev=63.89, samples=20 00:37:46.541 iops : min= 56, max= 124, avg=70.00, stdev=15.97, samples=20 00:37:46.541 lat (msec) : 100=4.19%, 250=83.24%, 500=12.57% 00:37:46.541 cpu : usr=98.99%, sys=0.63%, ctx=8, majf=0, minf=37 00:37:46.541 IO depths : 1=0.1%, 2=1.0%, 4=8.2%, 8=77.9%, 16=12.7%, 32=0.0%, >=64=0.0% 00:37:46.541 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.541 complete : 0=0.0%, 4=89.2%, 8=5.7%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.541 issued rwts: total=716,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:46.541 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:46.541 filename0: (groupid=0, jobs=1): err= 0: pid=2649948: Sun Jul 14 10:46:29 2024 00:37:46.541 read: IOPS=67, BW=271KiB/s (277kB/s)(2744KiB/10130msec) 00:37:46.541 slat (nsec): min=6830, max=63549, avg=11071.27, stdev=7005.27 00:37:46.541 clat (msec): min=161, max=361, avg=235.50, stdev=32.08 00:37:46.541 lat (msec): min=161, max=361, avg=235.51, stdev=32.08 00:37:46.541 clat percentiles (msec): 00:37:46.541 | 1.00th=[ 182], 5.00th=[ 194], 10.00th=[ 203], 20.00th=[ 226], 00:37:46.541 | 30.00th=[ 230], 40.00th=[ 232], 50.00th=[ 234], 60.00th=[ 234], 00:37:46.541 | 70.00th=[ 236], 80.00th=[ 236], 90.00th=[ 275], 95.00th=[ 317], 00:37:46.541 | 99.00th=[ 359], 99.50th=[ 363], 99.90th=[ 363], 99.95th=[ 363], 00:37:46.541 | 99.99th=[ 363] 00:37:46.541 bw ( KiB/s): min= 128, max= 368, per=4.26%, avg=268.00, stdev=50.30, samples=20 00:37:46.541 iops : min= 32, max= 92, avg=67.00, stdev=12.57, samples=20 00:37:46.541 lat (msec) : 250=88.92%, 500=11.08% 00:37:46.541 cpu : usr=99.09%, sys=0.52%, 
ctx=54, majf=0, minf=46 00:37:46.541 IO depths : 1=0.4%, 2=1.6%, 4=9.5%, 8=76.2%, 16=12.2%, 32=0.0%, >=64=0.0% 00:37:46.541 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.541 complete : 0=0.0%, 4=89.6%, 8=5.1%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.541 issued rwts: total=686,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:46.541 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:46.541 filename0: (groupid=0, jobs=1): err= 0: pid=2649949: Sun Jul 14 10:46:29 2024 00:37:46.541 read: IOPS=70, BW=282KiB/s (289kB/s)(2864KiB/10147msec) 00:37:46.541 slat (nsec): min=6808, max=63906, avg=11617.17, stdev=8093.47 00:37:46.541 clat (msec): min=66, max=390, avg=226.16, stdev=43.57 00:37:46.541 lat (msec): min=66, max=390, avg=226.17, stdev=43.57 00:37:46.541 clat percentiles (msec): 00:37:46.541 | 1.00th=[ 67], 5.00th=[ 118], 10.00th=[ 203], 20.00th=[ 226], 00:37:46.541 | 30.00th=[ 230], 40.00th=[ 232], 50.00th=[ 234], 60.00th=[ 234], 00:37:46.541 | 70.00th=[ 236], 80.00th=[ 236], 90.00th=[ 239], 95.00th=[ 279], 00:37:46.541 | 99.00th=[ 363], 99.50th=[ 363], 99.90th=[ 393], 99.95th=[ 393], 00:37:46.541 | 99.99th=[ 393] 00:37:46.541 bw ( KiB/s): min= 192, max= 384, per=4.46%, avg=280.00, stdev=49.66, samples=20 00:37:46.541 iops : min= 48, max= 96, avg=70.00, stdev=12.41, samples=20 00:37:46.541 lat (msec) : 100=4.19%, 250=89.66%, 500=6.15% 00:37:46.541 cpu : usr=99.03%, sys=0.60%, ctx=11, majf=0, minf=59 00:37:46.541 IO depths : 1=0.4%, 2=2.4%, 4=12.0%, 8=73.0%, 16=12.2%, 32=0.0%, >=64=0.0% 00:37:46.541 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.541 complete : 0=0.0%, 4=90.4%, 8=4.1%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.541 issued rwts: total=716,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:46.541 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:46.541 filename0: (groupid=0, jobs=1): err= 0: pid=2649950: Sun Jul 14 10:46:29 2024 00:37:46.541 read: IOPS=67, BW=271KiB/s (277kB/s)(2736KiB/10109msec) 00:37:46.541 slat (nsec): min=6354, max=24863, avg=8583.28, stdev=2050.18 00:37:46.541 clat (msec): min=191, max=445, avg=235.56, stdev=35.94 00:37:46.541 lat (msec): min=191, max=445, avg=235.57, stdev=35.94 00:37:46.541 clat percentiles (msec): 00:37:46.541 | 1.00th=[ 194], 5.00th=[ 203], 10.00th=[ 213], 20.00th=[ 224], 00:37:46.541 | 30.00th=[ 230], 40.00th=[ 232], 50.00th=[ 234], 60.00th=[ 234], 00:37:46.541 | 70.00th=[ 236], 80.00th=[ 236], 90.00th=[ 236], 95.00th=[ 262], 00:37:46.541 | 99.00th=[ 447], 99.50th=[ 447], 99.90th=[ 447], 99.95th=[ 447], 00:37:46.541 | 99.99th=[ 447] 00:37:46.541 bw ( KiB/s): min= 128, max= 336, per=4.25%, avg=267.20, stdev=46.46, samples=20 00:37:46.541 iops : min= 32, max= 84, avg=66.80, stdev=11.61, samples=20 00:37:46.541 lat (msec) : 250=94.74%, 500=5.26% 00:37:46.541 cpu : usr=98.88%, sys=0.75%, ctx=14, majf=0, minf=34 00:37:46.541 IO depths : 1=0.6%, 2=1.6%, 4=9.2%, 8=76.6%, 16=12.0%, 32=0.0%, >=64=0.0% 00:37:46.541 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.541 complete : 0=0.0%, 4=89.6%, 8=5.0%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.541 issued rwts: total=684,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:46.541 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:46.541 filename0: (groupid=0, jobs=1): err= 0: pid=2649951: Sun Jul 14 10:46:29 2024 00:37:46.541 read: IOPS=67, BW=269KiB/s (275kB/s)(2720KiB/10129msec) 00:37:46.541 slat (nsec): min=6818, max=33526, avg=9274.88, stdev=3097.78 00:37:46.541 clat 
(msec): min=164, max=362, avg=237.62, stdev=39.33 00:37:46.541 lat (msec): min=164, max=362, avg=237.63, stdev=39.33 00:37:46.541 clat percentiles (msec): 00:37:46.541 | 1.00th=[ 182], 5.00th=[ 186], 10.00th=[ 199], 20.00th=[ 205], 00:37:46.541 | 30.00th=[ 226], 40.00th=[ 230], 50.00th=[ 232], 60.00th=[ 234], 00:37:46.541 | 70.00th=[ 236], 80.00th=[ 264], 90.00th=[ 284], 95.00th=[ 342], 00:37:46.541 | 99.00th=[ 359], 99.50th=[ 363], 99.90th=[ 363], 99.95th=[ 363], 00:37:46.541 | 99.99th=[ 363] 00:37:46.541 bw ( KiB/s): min= 224, max= 336, per=4.22%, avg=265.60, stdev=36.85, samples=20 00:37:46.541 iops : min= 56, max= 84, avg=66.40, stdev= 9.21, samples=20 00:37:46.541 lat (msec) : 250=78.82%, 500=21.18% 00:37:46.541 cpu : usr=99.09%, sys=0.54%, ctx=7, majf=0, minf=35 00:37:46.541 IO depths : 1=0.3%, 2=1.3%, 4=8.7%, 8=77.1%, 16=12.6%, 32=0.0%, >=64=0.0% 00:37:46.541 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.541 complete : 0=0.0%, 4=89.3%, 8=5.7%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.541 issued rwts: total=680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:46.541 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:46.541 filename0: (groupid=0, jobs=1): err= 0: pid=2649952: Sun Jul 14 10:46:29 2024 00:37:46.541 read: IOPS=67, BW=269KiB/s (275kB/s)(2720KiB/10116msec) 00:37:46.541 slat (nsec): min=6806, max=41662, avg=9071.84, stdev=2982.20 00:37:46.541 clat (msec): min=179, max=498, avg=237.35, stdev=41.84 00:37:46.541 lat (msec): min=179, max=498, avg=237.36, stdev=41.84 00:37:46.541 clat percentiles (msec): 00:37:46.541 | 1.00th=[ 186], 5.00th=[ 194], 10.00th=[ 205], 20.00th=[ 226], 00:37:46.541 | 30.00th=[ 230], 40.00th=[ 232], 50.00th=[ 234], 60.00th=[ 236], 00:37:46.541 | 70.00th=[ 236], 80.00th=[ 236], 90.00th=[ 239], 95.00th=[ 268], 00:37:46.541 | 99.00th=[ 451], 99.50th=[ 451], 99.90th=[ 498], 99.95th=[ 498], 00:37:46.541 | 99.99th=[ 498] 00:37:46.542 bw ( KiB/s): min= 128, max= 336, per=4.22%, avg=265.60, stdev=47.41, samples=20 00:37:46.542 iops : min= 32, max= 84, avg=66.40, stdev=11.85, samples=20 00:37:46.542 lat (msec) : 250=90.59%, 500=9.41% 00:37:46.542 cpu : usr=99.01%, sys=0.62%, ctx=8, majf=0, minf=72 00:37:46.542 IO depths : 1=0.1%, 2=0.4%, 4=6.9%, 8=80.0%, 16=12.5%, 32=0.0%, >=64=0.0% 00:37:46.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.542 complete : 0=0.0%, 4=88.9%, 8=5.7%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.542 issued rwts: total=680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:46.542 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:46.542 filename0: (groupid=0, jobs=1): err= 0: pid=2649953: Sun Jul 14 10:46:29 2024 00:37:46.542 read: IOPS=69, BW=276KiB/s (283kB/s)(2800KiB/10128msec) 00:37:46.542 slat (nsec): min=6425, max=55899, avg=12393.33, stdev=7920.00 00:37:46.542 clat (msec): min=128, max=381, avg=230.86, stdev=28.91 00:37:46.542 lat (msec): min=128, max=381, avg=230.87, stdev=28.91 00:37:46.542 clat percentiles (msec): 00:37:46.542 | 1.00th=[ 129], 5.00th=[ 188], 10.00th=[ 207], 20.00th=[ 226], 00:37:46.542 | 30.00th=[ 230], 40.00th=[ 232], 50.00th=[ 234], 60.00th=[ 234], 00:37:46.542 | 70.00th=[ 236], 80.00th=[ 236], 90.00th=[ 236], 95.00th=[ 239], 00:37:46.542 | 99.00th=[ 363], 99.50th=[ 363], 99.90th=[ 380], 99.95th=[ 380], 00:37:46.542 | 99.99th=[ 380] 00:37:46.542 bw ( KiB/s): min= 224, max= 368, per=4.34%, avg=273.60, stdev=36.67, samples=20 00:37:46.542 iops : min= 56, max= 92, avg=68.40, stdev= 9.17, samples=20 00:37:46.542 lat (msec) 
: 250=95.71%, 500=4.29% 00:37:46.542 cpu : usr=99.18%, sys=0.44%, ctx=18, majf=0, minf=43 00:37:46.542 IO depths : 1=0.6%, 2=2.4%, 4=11.7%, 8=73.3%, 16=12.0%, 32=0.0%, >=64=0.0% 00:37:46.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.542 complete : 0=0.0%, 4=90.3%, 8=4.2%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.542 issued rwts: total=700,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:46.542 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:46.542 filename0: (groupid=0, jobs=1): err= 0: pid=2649954: Sun Jul 14 10:46:29 2024 00:37:46.542 read: IOPS=63, BW=255KiB/s (261kB/s)(2576KiB/10101msec) 00:37:46.542 slat (nsec): min=4480, max=21574, avg=8651.89, stdev=2371.51 00:37:46.542 clat (msec): min=178, max=505, avg=250.17, stdev=58.97 00:37:46.542 lat (msec): min=178, max=505, avg=250.18, stdev=58.97 00:37:46.542 clat percentiles (msec): 00:37:46.542 | 1.00th=[ 194], 5.00th=[ 203], 10.00th=[ 213], 20.00th=[ 224], 00:37:46.542 | 30.00th=[ 230], 40.00th=[ 234], 50.00th=[ 234], 60.00th=[ 236], 00:37:46.542 | 70.00th=[ 236], 80.00th=[ 239], 90.00th=[ 334], 95.00th=[ 372], 00:37:46.542 | 99.00th=[ 506], 99.50th=[ 506], 99.90th=[ 506], 99.95th=[ 506], 00:37:46.542 | 99.99th=[ 506] 00:37:46.542 bw ( KiB/s): min= 128, max= 384, per=4.20%, avg=264.42, stdev=53.70, samples=19 00:37:46.542 iops : min= 32, max= 96, avg=66.11, stdev=13.42, samples=19 00:37:46.542 lat (msec) : 250=81.99%, 500=15.53%, 750=2.48% 00:37:46.542 cpu : usr=99.03%, sys=0.60%, ctx=7, majf=0, minf=36 00:37:46.542 IO depths : 1=1.1%, 2=2.6%, 4=10.6%, 8=74.1%, 16=11.6%, 32=0.0%, >=64=0.0% 00:37:46.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.542 complete : 0=0.0%, 4=89.9%, 8=4.9%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.542 issued rwts: total=644,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:46.542 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:46.542 filename1: (groupid=0, jobs=1): err= 0: pid=2649955: Sun Jul 14 10:46:29 2024 00:37:46.542 read: IOPS=67, BW=269KiB/s (275kB/s)(2720KiB/10112msec) 00:37:46.542 slat (nsec): min=6827, max=43031, avg=8916.34, stdev=2773.83 00:37:46.542 clat (msec): min=160, max=447, avg=237.83, stdev=42.03 00:37:46.542 lat (msec): min=160, max=447, avg=237.84, stdev=42.03 00:37:46.542 clat percentiles (msec): 00:37:46.542 | 1.00th=[ 161], 5.00th=[ 190], 10.00th=[ 205], 20.00th=[ 224], 00:37:46.542 | 30.00th=[ 230], 40.00th=[ 232], 50.00th=[ 234], 60.00th=[ 236], 00:37:46.542 | 70.00th=[ 236], 80.00th=[ 236], 90.00th=[ 245], 95.00th=[ 309], 00:37:46.542 | 99.00th=[ 447], 99.50th=[ 447], 99.90th=[ 447], 99.95th=[ 447], 00:37:46.542 | 99.99th=[ 447] 00:37:46.542 bw ( KiB/s): min= 128, max= 384, per=4.22%, avg=265.60, stdev=59.28, samples=20 00:37:46.542 iops : min= 32, max= 96, avg=66.40, stdev=14.82, samples=20 00:37:46.542 lat (msec) : 250=90.29%, 500=9.71% 00:37:46.542 cpu : usr=99.13%, sys=0.47%, ctx=14, majf=0, minf=37 00:37:46.542 IO depths : 1=4.1%, 2=8.8%, 4=20.1%, 8=58.4%, 16=8.5%, 32=0.0%, >=64=0.0% 00:37:46.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.542 complete : 0=0.0%, 4=92.7%, 8=1.8%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.542 issued rwts: total=680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:46.542 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:46.542 filename1: (groupid=0, jobs=1): err= 0: pid=2649956: Sun Jul 14 10:46:29 2024 00:37:46.542 read: IOPS=69, BW=277KiB/s (284kB/s)(2808KiB/10129msec) 00:37:46.542 slat 
(nsec): min=6945, max=72500, avg=16673.83, stdev=9796.13 00:37:46.542 clat (msec): min=128, max=332, avg=230.45, stdev=28.29 00:37:46.542 lat (msec): min=128, max=332, avg=230.46, stdev=28.29 00:37:46.542 clat percentiles (msec): 00:37:46.542 | 1.00th=[ 129], 5.00th=[ 188], 10.00th=[ 207], 20.00th=[ 226], 00:37:46.542 | 30.00th=[ 230], 40.00th=[ 232], 50.00th=[ 234], 60.00th=[ 236], 00:37:46.542 | 70.00th=[ 236], 80.00th=[ 236], 90.00th=[ 239], 95.00th=[ 245], 00:37:46.542 | 99.00th=[ 334], 99.50th=[ 334], 99.90th=[ 334], 99.95th=[ 334], 00:37:46.542 | 99.99th=[ 334] 00:37:46.542 bw ( KiB/s): min= 256, max= 368, per=4.36%, avg=274.40, stdev=40.63, samples=20 00:37:46.542 iops : min= 64, max= 92, avg=68.60, stdev=10.16, samples=20 00:37:46.542 lat (msec) : 250=95.16%, 500=4.84% 00:37:46.542 cpu : usr=98.87%, sys=0.75%, ctx=14, majf=0, minf=38 00:37:46.542 IO depths : 1=0.6%, 2=6.8%, 4=25.1%, 8=55.7%, 16=11.8%, 32=0.0%, >=64=0.0% 00:37:46.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.542 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.542 issued rwts: total=702,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:46.542 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:46.542 filename1: (groupid=0, jobs=1): err= 0: pid=2649957: Sun Jul 14 10:46:29 2024 00:37:46.542 read: IOPS=67, BW=271KiB/s (277kB/s)(2744KiB/10129msec) 00:37:46.542 slat (nsec): min=6773, max=70831, avg=13100.18, stdev=6922.24 00:37:46.542 clat (msec): min=160, max=353, avg=235.99, stdev=27.85 00:37:46.542 lat (msec): min=161, max=353, avg=236.01, stdev=27.86 00:37:46.542 clat percentiles (msec): 00:37:46.542 | 1.00th=[ 161], 5.00th=[ 207], 10.00th=[ 218], 20.00th=[ 228], 00:37:46.542 | 30.00th=[ 230], 40.00th=[ 232], 50.00th=[ 234], 60.00th=[ 236], 00:37:46.542 | 70.00th=[ 236], 80.00th=[ 236], 90.00th=[ 239], 95.00th=[ 313], 00:37:46.542 | 99.00th=[ 338], 99.50th=[ 338], 99.90th=[ 355], 99.95th=[ 355], 00:37:46.542 | 99.99th=[ 355] 00:37:46.542 bw ( KiB/s): min= 128, max= 384, per=4.25%, avg=268.00, stdev=53.92, samples=20 00:37:46.542 iops : min= 32, max= 96, avg=67.00, stdev=13.48, samples=20 00:37:46.542 lat (msec) : 250=92.71%, 500=7.29% 00:37:46.542 cpu : usr=98.97%, sys=0.65%, ctx=14, majf=0, minf=32 00:37:46.542 IO depths : 1=2.3%, 2=8.6%, 4=25.1%, 8=53.9%, 16=10.1%, 32=0.0%, >=64=0.0% 00:37:46.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.542 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.542 issued rwts: total=686,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:46.542 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:46.542 filename1: (groupid=0, jobs=1): err= 0: pid=2649958: Sun Jul 14 10:46:29 2024 00:37:46.542 read: IOPS=71, BW=285KiB/s (291kB/s)(2888KiB/10146msec) 00:37:46.542 slat (nsec): min=6540, max=35832, avg=9233.02, stdev=3501.65 00:37:46.542 clat (msec): min=55, max=414, avg=224.13, stdev=42.18 00:37:46.542 lat (msec): min=55, max=414, avg=224.13, stdev=42.18 00:37:46.542 clat percentiles (msec): 00:37:46.542 | 1.00th=[ 67], 5.00th=[ 118], 10.00th=[ 201], 20.00th=[ 222], 00:37:46.542 | 30.00th=[ 230], 40.00th=[ 232], 50.00th=[ 234], 60.00th=[ 234], 00:37:46.542 | 70.00th=[ 236], 80.00th=[ 236], 90.00th=[ 236], 95.00th=[ 259], 00:37:46.542 | 99.00th=[ 338], 99.50th=[ 338], 99.90th=[ 414], 99.95th=[ 414], 00:37:46.542 | 99.99th=[ 414] 00:37:46.542 bw ( KiB/s): min= 256, max= 384, per=4.49%, avg=282.40, stdev=37.53, samples=20 00:37:46.542 iops : 
min= 64, max= 96, avg=70.60, stdev= 9.38, samples=20 00:37:46.542 lat (msec) : 100=4.43%, 250=90.30%, 500=5.26% 00:37:46.542 cpu : usr=98.87%, sys=0.77%, ctx=13, majf=0, minf=44 00:37:46.542 IO depths : 1=0.4%, 2=1.1%, 4=8.2%, 8=78.1%, 16=12.2%, 32=0.0%, >=64=0.0% 00:37:46.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.542 complete : 0=0.0%, 4=89.3%, 8=5.3%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.542 issued rwts: total=722,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:46.542 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:46.542 filename1: (groupid=0, jobs=1): err= 0: pid=2649959: Sun Jul 14 10:46:29 2024 00:37:46.542 read: IOPS=69, BW=277KiB/s (284kB/s)(2808KiB/10129msec) 00:37:46.542 slat (nsec): min=6905, max=71909, avg=13428.64, stdev=8540.45 00:37:46.542 clat (msec): min=128, max=394, avg=230.47, stdev=28.52 00:37:46.542 lat (msec): min=128, max=394, avg=230.49, stdev=28.52 00:37:46.542 clat percentiles (msec): 00:37:46.542 | 1.00th=[ 129], 5.00th=[ 188], 10.00th=[ 207], 20.00th=[ 228], 00:37:46.542 | 30.00th=[ 230], 40.00th=[ 230], 50.00th=[ 234], 60.00th=[ 236], 00:37:46.542 | 70.00th=[ 236], 80.00th=[ 236], 90.00th=[ 239], 95.00th=[ 239], 00:37:46.542 | 99.00th=[ 317], 99.50th=[ 317], 99.90th=[ 397], 99.95th=[ 397], 00:37:46.542 | 99.99th=[ 397] 00:37:46.542 bw ( KiB/s): min= 128, max= 384, per=4.36%, avg=274.40, stdev=59.70, samples=20 00:37:46.542 iops : min= 32, max= 96, avg=68.60, stdev=14.93, samples=20 00:37:46.542 lat (msec) : 250=95.44%, 500=4.56% 00:37:46.542 cpu : usr=98.53%, sys=1.10%, ctx=15, majf=0, minf=39 00:37:46.542 IO depths : 1=3.0%, 2=9.3%, 4=25.1%, 8=53.3%, 16=9.4%, 32=0.0%, >=64=0.0% 00:37:46.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.542 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.542 issued rwts: total=702,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:46.542 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:46.542 filename1: (groupid=0, jobs=1): err= 0: pid=2649960: Sun Jul 14 10:46:29 2024 00:37:46.542 read: IOPS=65, BW=263KiB/s (269kB/s)(2656KiB/10100msec) 00:37:46.542 slat (nsec): min=4534, max=19882, avg=8678.78, stdev=2254.17 00:37:46.542 clat (msec): min=182, max=504, avg=242.85, stdev=50.46 00:37:46.542 lat (msec): min=182, max=504, avg=242.86, stdev=50.46 00:37:46.542 clat percentiles (msec): 00:37:46.542 | 1.00th=[ 184], 5.00th=[ 203], 10.00th=[ 211], 20.00th=[ 226], 00:37:46.542 | 30.00th=[ 230], 40.00th=[ 234], 50.00th=[ 234], 60.00th=[ 236], 00:37:46.542 | 70.00th=[ 236], 80.00th=[ 236], 90.00th=[ 253], 95.00th=[ 342], 00:37:46.542 | 99.00th=[ 506], 99.50th=[ 506], 99.90th=[ 506], 99.95th=[ 506], 00:37:46.542 | 99.99th=[ 506] 00:37:46.542 bw ( KiB/s): min= 176, max= 368, per=4.33%, avg=272.84, stdev=49.88, samples=19 00:37:46.542 iops : min= 44, max= 92, avg=68.21, stdev=12.47, samples=19 00:37:46.542 lat (msec) : 250=89.46%, 500=8.13%, 750=2.41% 00:37:46.543 cpu : usr=98.83%, sys=0.79%, ctx=7, majf=0, minf=41 00:37:46.543 IO depths : 1=0.9%, 2=2.1%, 4=9.6%, 8=75.6%, 16=11.7%, 32=0.0%, >=64=0.0% 00:37:46.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.543 complete : 0=0.0%, 4=89.6%, 8=5.0%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.543 issued rwts: total=664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:46.543 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:46.543 filename1: (groupid=0, jobs=1): err= 0: pid=2649961: Sun Jul 14 10:46:29 2024 
00:37:46.543 read: IOPS=67, BW=269KiB/s (276kB/s)(2728KiB/10129msec) 00:37:46.543 slat (nsec): min=6818, max=64270, avg=11611.71, stdev=8512.95 00:37:46.543 clat (msec): min=178, max=362, avg=237.34, stdev=33.41 00:37:46.543 lat (msec): min=178, max=362, avg=237.35, stdev=33.41 00:37:46.543 clat percentiles (msec): 00:37:46.543 | 1.00th=[ 180], 5.00th=[ 188], 10.00th=[ 205], 20.00th=[ 226], 00:37:46.543 | 30.00th=[ 230], 40.00th=[ 232], 50.00th=[ 234], 60.00th=[ 234], 00:37:46.543 | 70.00th=[ 236], 80.00th=[ 236], 90.00th=[ 288], 95.00th=[ 317], 00:37:46.543 | 99.00th=[ 359], 99.50th=[ 363], 99.90th=[ 363], 99.95th=[ 363], 00:37:46.543 | 99.99th=[ 363] 00:37:46.543 bw ( KiB/s): min= 128, max= 368, per=4.23%, avg=266.40, stdev=52.50, samples=20 00:37:46.543 iops : min= 32, max= 92, avg=66.60, stdev=13.12, samples=20 00:37:46.543 lat (msec) : 250=86.80%, 500=13.20% 00:37:46.543 cpu : usr=98.96%, sys=0.67%, ctx=9, majf=0, minf=36 00:37:46.543 IO depths : 1=0.6%, 2=1.9%, 4=9.8%, 8=75.5%, 16=12.2%, 32=0.0%, >=64=0.0% 00:37:46.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.543 complete : 0=0.0%, 4=89.7%, 8=5.1%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.543 issued rwts: total=682,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:46.543 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:46.543 filename1: (groupid=0, jobs=1): err= 0: pid=2649962: Sun Jul 14 10:46:29 2024 00:37:46.543 read: IOPS=71, BW=285KiB/s (291kB/s)(2888KiB/10150msec) 00:37:46.543 slat (nsec): min=6795, max=25826, avg=9080.70, stdev=2726.48 00:37:46.543 clat (msec): min=51, max=395, avg=224.13, stdev=42.54 00:37:46.543 lat (msec): min=51, max=395, avg=224.14, stdev=42.54 00:37:46.543 clat percentiles (msec): 00:37:46.543 | 1.00th=[ 52], 5.00th=[ 153], 10.00th=[ 201], 20.00th=[ 224], 00:37:46.543 | 30.00th=[ 230], 40.00th=[ 232], 50.00th=[ 234], 60.00th=[ 234], 00:37:46.543 | 70.00th=[ 236], 80.00th=[ 236], 90.00th=[ 236], 95.00th=[ 264], 00:37:46.543 | 99.00th=[ 326], 99.50th=[ 330], 99.90th=[ 397], 99.95th=[ 397], 00:37:46.543 | 99.99th=[ 397] 00:37:46.543 bw ( KiB/s): min= 256, max= 384, per=4.49%, avg=282.40, stdev=37.53, samples=20 00:37:46.543 iops : min= 64, max= 96, avg=70.60, stdev= 9.38, samples=20 00:37:46.543 lat (msec) : 100=4.43%, 250=90.30%, 500=5.26% 00:37:46.543 cpu : usr=98.89%, sys=0.74%, ctx=10, majf=0, minf=25 00:37:46.543 IO depths : 1=0.6%, 2=1.2%, 4=8.2%, 8=78.0%, 16=12.0%, 32=0.0%, >=64=0.0% 00:37:46.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.543 complete : 0=0.0%, 4=89.3%, 8=5.3%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.543 issued rwts: total=722,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:46.543 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:46.543 filename2: (groupid=0, jobs=1): err= 0: pid=2649963: Sun Jul 14 10:46:29 2024 00:37:46.543 read: IOPS=66, BW=267KiB/s (273kB/s)(2696KiB/10105msec) 00:37:46.543 slat (nsec): min=6261, max=19318, avg=8684.14, stdev=2162.15 00:37:46.543 clat (msec): min=176, max=506, avg=239.01, stdev=39.08 00:37:46.543 lat (msec): min=176, max=506, avg=239.02, stdev=39.08 00:37:46.543 clat percentiles (msec): 00:37:46.543 | 1.00th=[ 190], 5.00th=[ 205], 10.00th=[ 220], 20.00th=[ 228], 00:37:46.543 | 30.00th=[ 232], 40.00th=[ 234], 50.00th=[ 234], 60.00th=[ 234], 00:37:46.543 | 70.00th=[ 236], 80.00th=[ 236], 90.00th=[ 239], 95.00th=[ 313], 00:37:46.543 | 99.00th=[ 443], 99.50th=[ 443], 99.90th=[ 506], 99.95th=[ 506], 00:37:46.543 | 99.99th=[ 506] 00:37:46.543 bw ( 
KiB/s): min= 112, max= 336, per=4.19%, avg=263.20, stdev=51.77, samples=20 00:37:46.543 iops : min= 28, max= 84, avg=65.80, stdev=12.94, samples=20 00:37:46.543 lat (msec) : 250=91.99%, 500=7.72%, 750=0.30% 00:37:46.543 cpu : usr=98.94%, sys=0.69%, ctx=13, majf=0, minf=34 00:37:46.543 IO depths : 1=0.1%, 2=0.7%, 4=7.9%, 8=78.8%, 16=12.5%, 32=0.0%, >=64=0.0% 00:37:46.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.543 complete : 0=0.0%, 4=89.2%, 8=5.4%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.543 issued rwts: total=674,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:46.543 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:46.543 filename2: (groupid=0, jobs=1): err= 0: pid=2649964: Sun Jul 14 10:46:29 2024 00:37:46.543 read: IOPS=46, BW=184KiB/s (189kB/s)(1856KiB/10078msec) 00:37:46.543 slat (nsec): min=6851, max=32966, avg=8956.77, stdev=2761.11 00:37:46.543 clat (msec): min=194, max=504, avg=347.43, stdev=56.97 00:37:46.543 lat (msec): min=194, max=504, avg=347.44, stdev=56.97 00:37:46.543 clat percentiles (msec): 00:37:46.543 | 1.00th=[ 232], 5.00th=[ 236], 10.00th=[ 296], 20.00th=[ 317], 00:37:46.543 | 30.00th=[ 330], 40.00th=[ 334], 50.00th=[ 338], 60.00th=[ 368], 00:37:46.543 | 70.00th=[ 372], 80.00th=[ 372], 90.00th=[ 380], 95.00th=[ 472], 00:37:46.543 | 99.00th=[ 506], 99.50th=[ 506], 99.90th=[ 506], 99.95th=[ 506], 00:37:46.543 | 99.99th=[ 506] 00:37:46.543 bw ( KiB/s): min= 128, max= 256, per=2.99%, avg=188.63, stdev=57.58, samples=19 00:37:46.543 iops : min= 32, max= 64, avg=47.16, stdev=14.40, samples=19 00:37:46.543 lat (msec) : 250=9.05%, 500=87.50%, 750=3.45% 00:37:46.543 cpu : usr=99.10%, sys=0.53%, ctx=6, majf=0, minf=46 00:37:46.543 IO depths : 1=3.2%, 2=9.5%, 4=25.0%, 8=53.0%, 16=9.3%, 32=0.0%, >=64=0.0% 00:37:46.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.543 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.543 issued rwts: total=464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:46.543 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:46.543 filename2: (groupid=0, jobs=1): err= 0: pid=2649965: Sun Jul 14 10:46:29 2024 00:37:46.543 read: IOPS=67, BW=268KiB/s (275kB/s)(2712KiB/10111msec) 00:37:46.543 slat (nsec): min=6817, max=27552, avg=9404.53, stdev=2977.94 00:37:46.543 clat (msec): min=160, max=502, avg=238.35, stdev=43.75 00:37:46.543 lat (msec): min=160, max=502, avg=238.36, stdev=43.75 00:37:46.543 clat percentiles (msec): 00:37:46.543 | 1.00th=[ 161], 5.00th=[ 199], 10.00th=[ 211], 20.00th=[ 224], 00:37:46.543 | 30.00th=[ 230], 40.00th=[ 234], 50.00th=[ 234], 60.00th=[ 236], 00:37:46.543 | 70.00th=[ 236], 80.00th=[ 236], 90.00th=[ 251], 95.00th=[ 334], 00:37:46.543 | 99.00th=[ 447], 99.50th=[ 447], 99.90th=[ 502], 99.95th=[ 502], 00:37:46.543 | 99.99th=[ 502] 00:37:46.543 bw ( KiB/s): min= 112, max= 368, per=4.20%, avg=264.80, stdev=52.29, samples=20 00:37:46.543 iops : min= 28, max= 92, avg=66.20, stdev=13.07, samples=20 00:37:46.543 lat (msec) : 250=89.38%, 500=10.32%, 750=0.29% 00:37:46.543 cpu : usr=98.95%, sys=0.67%, ctx=7, majf=0, minf=54 00:37:46.543 IO depths : 1=0.6%, 2=4.6%, 4=18.0%, 8=64.9%, 16=11.9%, 32=0.0%, >=64=0.0% 00:37:46.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.543 complete : 0=0.0%, 4=92.2%, 8=2.4%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.543 issued rwts: total=678,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:46.543 latency : target=0, window=0, percentile=100.00%, 
depth=16 00:37:46.543 filename2: (groupid=0, jobs=1): err= 0: pid=2649966: Sun Jul 14 10:46:29 2024 00:37:46.543 read: IOPS=47, BW=190KiB/s (195kB/s)(1920KiB/10101msec) 00:37:46.543 slat (nsec): min=6811, max=56089, avg=9336.17, stdev=3779.88 00:37:46.543 clat (msec): min=160, max=504, avg=336.59, stdev=69.06 00:37:46.543 lat (msec): min=160, max=504, avg=336.60, stdev=69.06 00:37:46.543 clat percentiles (msec): 00:37:46.543 | 1.00th=[ 161], 5.00th=[ 207], 10.00th=[ 222], 20.00th=[ 317], 00:37:46.543 | 30.00th=[ 326], 40.00th=[ 330], 50.00th=[ 334], 60.00th=[ 355], 00:37:46.543 | 70.00th=[ 372], 80.00th=[ 372], 90.00th=[ 380], 95.00th=[ 472], 00:37:46.543 | 99.00th=[ 506], 99.50th=[ 506], 99.90th=[ 506], 99.95th=[ 506], 00:37:46.543 | 99.99th=[ 506] 00:37:46.543 bw ( KiB/s): min= 128, max= 256, per=3.10%, avg=195.37, stdev=60.94, samples=19 00:37:46.543 iops : min= 32, max= 64, avg=48.84, stdev=15.24, samples=19 00:37:46.543 lat (msec) : 250=15.00%, 500=81.67%, 750=3.33% 00:37:46.543 cpu : usr=98.99%, sys=0.63%, ctx=13, majf=0, minf=44 00:37:46.543 IO depths : 1=3.3%, 2=9.6%, 4=25.0%, 8=52.9%, 16=9.2%, 32=0.0%, >=64=0.0% 00:37:46.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.543 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.543 issued rwts: total=480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:46.543 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:46.543 filename2: (groupid=0, jobs=1): err= 0: pid=2649967: Sun Jul 14 10:46:29 2024 00:37:46.543 read: IOPS=66, BW=268KiB/s (274kB/s)(2712KiB/10129msec) 00:37:46.543 slat (nsec): min=6826, max=72288, avg=13191.03, stdev=9990.18 00:37:46.543 clat (msec): min=167, max=413, avg=238.66, stdev=31.01 00:37:46.543 lat (msec): min=167, max=413, avg=238.68, stdev=31.01 00:37:46.543 clat percentiles (msec): 00:37:46.543 | 1.00th=[ 167], 5.00th=[ 207], 10.00th=[ 226], 20.00th=[ 228], 00:37:46.543 | 30.00th=[ 230], 40.00th=[ 232], 50.00th=[ 234], 60.00th=[ 236], 00:37:46.543 | 70.00th=[ 236], 80.00th=[ 236], 90.00th=[ 239], 95.00th=[ 317], 00:37:46.543 | 99.00th=[ 351], 99.50th=[ 363], 99.90th=[ 414], 99.95th=[ 414], 00:37:46.543 | 99.99th=[ 414] 00:37:46.543 bw ( KiB/s): min= 128, max= 368, per=4.20%, avg=264.80, stdev=45.10, samples=20 00:37:46.543 iops : min= 32, max= 92, avg=66.20, stdev=11.27, samples=20 00:37:46.543 lat (msec) : 250=91.15%, 500=8.85% 00:37:46.543 cpu : usr=99.04%, sys=0.58%, ctx=13, majf=0, minf=43 00:37:46.543 IO depths : 1=0.6%, 2=3.2%, 4=14.2%, 8=70.1%, 16=11.9%, 32=0.0%, >=64=0.0% 00:37:46.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.543 complete : 0=0.0%, 4=91.1%, 8=3.4%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.543 issued rwts: total=678,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:46.543 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:46.543 filename2: (groupid=0, jobs=1): err= 0: pid=2649968: Sun Jul 14 10:46:29 2024 00:37:46.543 read: IOPS=45, BW=184KiB/s (188kB/s)(1856KiB/10100msec) 00:37:46.543 slat (nsec): min=4586, max=22338, avg=8732.66, stdev=2803.80 00:37:46.543 clat (msec): min=160, max=505, avg=348.20, stdev=53.03 00:37:46.543 lat (msec): min=160, max=505, avg=348.20, stdev=53.03 00:37:46.543 clat percentiles (msec): 00:37:46.543 | 1.00th=[ 232], 5.00th=[ 259], 10.00th=[ 296], 20.00th=[ 321], 00:37:46.543 | 30.00th=[ 330], 40.00th=[ 334], 50.00th=[ 338], 60.00th=[ 368], 00:37:46.543 | 70.00th=[ 372], 80.00th=[ 372], 90.00th=[ 380], 95.00th=[ 468], 00:37:46.543 | 
99.00th=[ 506], 99.50th=[ 506], 99.90th=[ 506], 99.95th=[ 506], 00:37:46.543 | 99.99th=[ 506] 00:37:46.543 bw ( KiB/s): min= 128, max= 256, per=2.99%, avg=188.63, stdev=62.56, samples=19 00:37:46.543 iops : min= 32, max= 64, avg=47.16, stdev=15.64, samples=19 00:37:46.543 lat (msec) : 250=4.31%, 500=92.24%, 750=3.45% 00:37:46.543 cpu : usr=99.05%, sys=0.56%, ctx=9, majf=0, minf=34 00:37:46.543 IO depths : 1=3.9%, 2=10.1%, 4=25.0%, 8=52.4%, 16=8.6%, 32=0.0%, >=64=0.0% 00:37:46.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.543 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.544 issued rwts: total=464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:46.544 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:46.544 filename2: (groupid=0, jobs=1): err= 0: pid=2649969: Sun Jul 14 10:46:29 2024 00:37:46.544 read: IOPS=75, BW=302KiB/s (309kB/s)(3064KiB/10157msec) 00:37:46.544 slat (nsec): min=6752, max=49448, avg=17039.97, stdev=5178.15 00:37:46.544 clat (msec): min=2, max=318, avg=211.79, stdev=68.70 00:37:46.544 lat (msec): min=2, max=318, avg=211.80, stdev=68.70 00:37:46.544 clat percentiles (msec): 00:37:46.544 | 1.00th=[ 3], 5.00th=[ 10], 10.00th=[ 58], 20.00th=[ 226], 00:37:46.544 | 30.00th=[ 228], 40.00th=[ 232], 50.00th=[ 234], 60.00th=[ 234], 00:37:46.544 | 70.00th=[ 236], 80.00th=[ 236], 90.00th=[ 239], 95.00th=[ 279], 00:37:46.544 | 99.00th=[ 317], 99.50th=[ 317], 99.90th=[ 317], 99.95th=[ 317], 00:37:46.544 | 99.99th=[ 317] 00:37:46.544 bw ( KiB/s): min= 256, max= 768, per=4.77%, avg=300.00, stdev=117.33, samples=20 00:37:46.544 iops : min= 64, max= 192, avg=75.00, stdev=29.33, samples=20 00:37:46.544 lat (msec) : 4=3.92%, 10=2.35%, 100=4.18%, 250=82.77%, 500=6.79% 00:37:46.544 cpu : usr=98.75%, sys=0.85%, ctx=9, majf=0, minf=49 00:37:46.544 IO depths : 1=1.2%, 2=7.4%, 4=25.1%, 8=55.1%, 16=11.2%, 32=0.0%, >=64=0.0% 00:37:46.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.544 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.544 issued rwts: total=766,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:46.544 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:46.544 filename2: (groupid=0, jobs=1): err= 0: pid=2649970: Sun Jul 14 10:46:29 2024 00:37:46.544 read: IOPS=67, BW=271KiB/s (277kB/s)(2744KiB/10129msec) 00:37:46.544 slat (nsec): min=6714, max=66466, avg=17314.14, stdev=9072.31 00:37:46.544 clat (msec): min=160, max=345, avg=235.98, stdev=25.66 00:37:46.544 lat (msec): min=160, max=345, avg=236.00, stdev=25.66 00:37:46.544 clat percentiles (msec): 00:37:46.544 | 1.00th=[ 161], 5.00th=[ 207], 10.00th=[ 226], 20.00th=[ 228], 00:37:46.544 | 30.00th=[ 230], 40.00th=[ 234], 50.00th=[ 234], 60.00th=[ 236], 00:37:46.544 | 70.00th=[ 236], 80.00th=[ 236], 90.00th=[ 239], 95.00th=[ 313], 00:37:46.544 | 99.00th=[ 334], 99.50th=[ 334], 99.90th=[ 347], 99.95th=[ 347], 00:37:46.544 | 99.99th=[ 347] 00:37:46.544 bw ( KiB/s): min= 128, max= 384, per=4.25%, avg=268.00, stdev=53.92, samples=20 00:37:46.544 iops : min= 32, max= 96, avg=67.00, stdev=13.48, samples=20 00:37:46.544 lat (msec) : 250=90.67%, 500=9.33% 00:37:46.544 cpu : usr=98.98%, sys=0.63%, ctx=13, majf=0, minf=34 00:37:46.544 IO depths : 1=2.5%, 2=8.7%, 4=25.1%, 8=53.8%, 16=9.9%, 32=0.0%, >=64=0.0% 00:37:46.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.544 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.544 issued 
rwts: total=686,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:46.544 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:46.544 00:37:46.544 Run status group 0 (all jobs): 00:37:46.544 READ: bw=6284KiB/s (6435kB/s), 184KiB/s-302KiB/s (188kB/s-309kB/s), io=62.3MiB (65.4MB), run=10078-10157msec 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd 
bdev_null_delete bdev_null2 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:46.544 bdev_null0 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:46.544 [2024-07-14 10:46:30.190473] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@18 -- # local sub_id=1 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:46.544 bdev_null1 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:46.544 10:46:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:46.544 { 00:37:46.544 "params": { 00:37:46.544 "name": "Nvme$subsystem", 00:37:46.545 "trtype": "$TEST_TRANSPORT", 00:37:46.545 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:46.545 "adrfam": "ipv4", 00:37:46.545 "trsvcid": "$NVMF_PORT", 00:37:46.545 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:46.545 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:46.545 "hdgst": ${hdgst:-false}, 00:37:46.545 "ddgst": ${ddgst:-false} 00:37:46.545 }, 00:37:46.545 "method": 
"bdev_nvme_attach_controller" 00:37:46.545 } 00:37:46.545 EOF 00:37:46.545 )") 00:37:46.545 10:46:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:37:46.545 10:46:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:46.545 10:46:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:46.545 10:46:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:46.545 10:46:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:37:46.545 10:46:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:46.545 10:46:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:37:46.545 10:46:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:37:46.545 10:46:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:46.545 10:46:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:37:46.545 10:46:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:46.545 10:46:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:46.545 10:46:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:46.545 10:46:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:46.545 10:46:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:37:46.545 10:46:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:46.545 10:46:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:46.545 10:46:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:46.545 { 00:37:46.545 "params": { 00:37:46.545 "name": "Nvme$subsystem", 00:37:46.545 "trtype": "$TEST_TRANSPORT", 00:37:46.545 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:46.545 "adrfam": "ipv4", 00:37:46.545 "trsvcid": "$NVMF_PORT", 00:37:46.545 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:46.545 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:46.545 "hdgst": ${hdgst:-false}, 00:37:46.545 "ddgst": ${ddgst:-false} 00:37:46.545 }, 00:37:46.545 "method": "bdev_nvme_attach_controller" 00:37:46.545 } 00:37:46.545 EOF 00:37:46.545 )") 00:37:46.545 10:46:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:46.545 10:46:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:46.545 10:46:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:37:46.545 10:46:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:37:46.545 10:46:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:37:46.545 10:46:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:37:46.545 "params": { 00:37:46.545 "name": "Nvme0", 00:37:46.545 "trtype": "tcp", 00:37:46.545 "traddr": "10.0.0.2", 00:37:46.545 "adrfam": "ipv4", 00:37:46.545 "trsvcid": "4420", 00:37:46.545 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:46.545 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:46.545 "hdgst": false, 00:37:46.545 "ddgst": false 00:37:46.545 }, 00:37:46.545 "method": "bdev_nvme_attach_controller" 00:37:46.545 },{ 00:37:46.545 "params": { 00:37:46.545 "name": "Nvme1", 00:37:46.545 "trtype": "tcp", 00:37:46.545 "traddr": "10.0.0.2", 00:37:46.545 "adrfam": "ipv4", 00:37:46.545 "trsvcid": "4420", 00:37:46.545 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:46.545 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:46.545 "hdgst": false, 00:37:46.545 "ddgst": false 00:37:46.545 }, 00:37:46.545 "method": "bdev_nvme_attach_controller" 00:37:46.545 }' 00:37:46.545 10:46:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:37:46.545 10:46:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:37:46.545 10:46:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:46.545 10:46:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:46.545 10:46:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:37:46.545 10:46:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:46.545 10:46:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:37:46.545 10:46:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:37:46.545 10:46:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:46.545 10:46:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:46.545 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:37:46.545 ... 00:37:46.545 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:37:46.545 ... 
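The JSON printed above is the bdev configuration that the spdk_bdev fio plugin reads over /dev/fd/62: one bdev_nvme_attach_controller entry per subsystem, assembled by gen_nvmf_target_json. Outside the harness the same thing can be written to an ordinary file; a minimal sketch, assuming the standard SPDK JSON config wrapper and the same two subsystems on 10.0.0.2:4420 (file and plugin paths illustrative):

  cat > /tmp/nvmf_bdev.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                        "adrfam": "ipv4", "trsvcid": "4420",
                        "subnqn": "nqn.2016-06.io.spdk:cnode0",
                        "hostnqn": "nqn.2016-06.io.spdk:host0",
                        "hdgst": false, "ddgst": false }
          },
          {
            "method": "bdev_nvme_attach_controller",
            "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                        "adrfam": "ipv4", "trsvcid": "4420",
                        "subnqn": "nqn.2016-06.io.spdk:cnode1",
                        "hostnqn": "nqn.2016-06.io.spdk:host1",
                        "hdgst": false, "ddgst": false }
          }
        ]
      }
    ]
  }
  EOF
  # Same invocation the trace uses, with the file in place of /dev/fd/62;
  # the fio job file goes where /dev/fd/61 appears above.
  LD_PRELOAD=./spdk/build/fio/spdk_bdev /usr/src/fio/fio \
      --ioengine=spdk_bdev --spdk_json_conf /tmp/nvmf_bdev.json jobfile.fio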
00:37:46.545 fio-3.35 00:37:46.545 Starting 4 threads 00:37:46.545 EAL: No free 2048 kB hugepages reported on node 1 00:37:51.814 00:37:51.814 filename0: (groupid=0, jobs=1): err= 0: pid=2651922: Sun Jul 14 10:46:36 2024 00:37:51.814 read: IOPS=2625, BW=20.5MiB/s (21.5MB/s)(103MiB/5002msec) 00:37:51.814 slat (nsec): min=6199, max=64560, avg=15402.59, stdev=10686.72 00:37:51.814 clat (usec): min=1047, max=5547, avg=3002.20, stdev=463.22 00:37:51.814 lat (usec): min=1059, max=5560, avg=3017.60, stdev=463.98 00:37:51.814 clat percentiles (usec): 00:37:51.814 | 1.00th=[ 1926], 5.00th=[ 2311], 10.00th=[ 2474], 20.00th=[ 2671], 00:37:51.814 | 30.00th=[ 2802], 40.00th=[ 2900], 50.00th=[ 2966], 60.00th=[ 3064], 00:37:51.814 | 70.00th=[ 3130], 80.00th=[ 3294], 90.00th=[ 3556], 95.00th=[ 3785], 00:37:51.814 | 99.00th=[ 4490], 99.50th=[ 4752], 99.90th=[ 5276], 99.95th=[ 5342], 00:37:51.814 | 99.99th=[ 5538] 00:37:51.814 bw ( KiB/s): min=20176, max=23984, per=24.73%, avg=21001.00, stdev=1151.51, samples=10 00:37:51.814 iops : min= 2522, max= 2998, avg=2625.10, stdev=143.94, samples=10 00:37:51.814 lat (msec) : 2=1.48%, 4=95.26%, 10=3.26% 00:37:51.814 cpu : usr=97.24%, sys=2.34%, ctx=51, majf=0, minf=9 00:37:51.814 IO depths : 1=0.6%, 2=3.8%, 4=67.5%, 8=28.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:51.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:51.814 complete : 0=0.0%, 4=93.1%, 8=6.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:51.814 issued rwts: total=13131,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:51.814 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:51.814 filename0: (groupid=0, jobs=1): err= 0: pid=2651923: Sun Jul 14 10:46:36 2024 00:37:51.814 read: IOPS=2553, BW=19.9MiB/s (20.9MB/s)(99.8MiB/5001msec) 00:37:51.814 slat (nsec): min=6141, max=69527, avg=12029.96, stdev=7527.78 00:37:51.814 clat (usec): min=563, max=5433, avg=3097.60, stdev=495.74 00:37:51.814 lat (usec): min=575, max=5446, avg=3109.63, stdev=495.13 00:37:51.814 clat percentiles (usec): 00:37:51.814 | 1.00th=[ 1991], 5.00th=[ 2442], 10.00th=[ 2638], 20.00th=[ 2802], 00:37:51.814 | 30.00th=[ 2900], 40.00th=[ 2966], 50.00th=[ 3032], 60.00th=[ 3097], 00:37:51.814 | 70.00th=[ 3228], 80.00th=[ 3359], 90.00th=[ 3654], 95.00th=[ 4080], 00:37:51.814 | 99.00th=[ 4817], 99.50th=[ 5080], 99.90th=[ 5211], 99.95th=[ 5342], 00:37:51.814 | 99.99th=[ 5407] 00:37:51.814 bw ( KiB/s): min=19744, max=21072, per=24.08%, avg=20451.56, stdev=465.16, samples=9 00:37:51.814 iops : min= 2468, max= 2634, avg=2556.44, stdev=58.14, samples=9 00:37:51.814 lat (usec) : 750=0.03%, 1000=0.06% 00:37:51.814 lat (msec) : 2=0.94%, 4=93.55%, 10=5.41% 00:37:51.814 cpu : usr=97.46%, sys=2.18%, ctx=6, majf=0, minf=9 00:37:51.814 IO depths : 1=0.1%, 2=3.7%, 4=68.7%, 8=27.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:51.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:51.814 complete : 0=0.0%, 4=92.2%, 8=7.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:51.814 issued rwts: total=12769,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:51.814 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:51.814 filename1: (groupid=0, jobs=1): err= 0: pid=2651924: Sun Jul 14 10:46:36 2024 00:37:51.814 read: IOPS=2566, BW=20.0MiB/s (21.0MB/s)(100MiB/5001msec) 00:37:51.814 slat (nsec): min=6161, max=69593, avg=11778.05, stdev=7107.70 00:37:51.814 clat (usec): min=1004, max=5865, avg=3083.20, stdev=509.08 00:37:51.814 lat (usec): min=1010, max=5872, avg=3094.98, stdev=508.54 00:37:51.814 clat percentiles (usec): 
00:37:51.814 | 1.00th=[ 2024], 5.00th=[ 2409], 10.00th=[ 2573], 20.00th=[ 2769], 00:37:51.814 | 30.00th=[ 2868], 40.00th=[ 2966], 50.00th=[ 2999], 60.00th=[ 3064], 00:37:51.814 | 70.00th=[ 3195], 80.00th=[ 3326], 90.00th=[ 3654], 95.00th=[ 4113], 00:37:51.814 | 99.00th=[ 4948], 99.50th=[ 5145], 99.90th=[ 5407], 99.95th=[ 5604], 00:37:51.814 | 99.99th=[ 5866] 00:37:51.814 bw ( KiB/s): min=19728, max=21216, per=24.22%, avg=20570.67, stdev=511.37, samples=9 00:37:51.814 iops : min= 2466, max= 2652, avg=2571.33, stdev=63.92, samples=9 00:37:51.814 lat (msec) : 2=0.94%, 4=93.44%, 10=5.62% 00:37:51.815 cpu : usr=97.44%, sys=2.22%, ctx=7, majf=0, minf=9 00:37:51.815 IO depths : 1=0.1%, 2=4.1%, 4=67.6%, 8=28.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:51.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:51.815 complete : 0=0.0%, 4=92.9%, 8=7.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:51.815 issued rwts: total=12833,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:51.815 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:51.815 filename1: (groupid=0, jobs=1): err= 0: pid=2651925: Sun Jul 14 10:46:36 2024 00:37:51.815 read: IOPS=2870, BW=22.4MiB/s (23.5MB/s)(112MiB/5002msec) 00:37:51.815 slat (usec): min=6, max=222, avg=11.58, stdev= 6.96 00:37:51.815 clat (usec): min=895, max=5276, avg=2752.36, stdev=452.58 00:37:51.815 lat (usec): min=907, max=5289, avg=2763.95, stdev=453.04 00:37:51.815 clat percentiles (usec): 00:37:51.815 | 1.00th=[ 1745], 5.00th=[ 2114], 10.00th=[ 2212], 20.00th=[ 2376], 00:37:51.815 | 30.00th=[ 2507], 40.00th=[ 2638], 50.00th=[ 2769], 60.00th=[ 2868], 00:37:51.815 | 70.00th=[ 2966], 80.00th=[ 3064], 90.00th=[ 3228], 95.00th=[ 3425], 00:37:51.815 | 99.00th=[ 4228], 99.50th=[ 4555], 99.90th=[ 5014], 99.95th=[ 5080], 00:37:51.815 | 99.99th=[ 5276] 00:37:51.815 bw ( KiB/s): min=21360, max=24944, per=27.05%, avg=22966.40, stdev=1047.73, samples=10 00:37:51.815 iops : min= 2670, max= 3118, avg=2870.80, stdev=130.97, samples=10 00:37:51.815 lat (usec) : 1000=0.01% 00:37:51.815 lat (msec) : 2=3.31%, 4=95.25%, 10=1.43% 00:37:51.815 cpu : usr=97.18%, sys=2.44%, ctx=19, majf=0, minf=9 00:37:51.815 IO depths : 1=0.3%, 2=8.6%, 4=61.6%, 8=29.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:51.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:51.815 complete : 0=0.0%, 4=93.8%, 8=6.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:51.815 issued rwts: total=14359,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:51.815 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:51.815 00:37:51.815 Run status group 0 (all jobs): 00:37:51.815 READ: bw=82.9MiB/s (87.0MB/s), 19.9MiB/s-22.4MiB/s (20.9MB/s-23.5MB/s), io=415MiB (435MB), run=5001-5002msec 00:37:51.815 10:46:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:37:51.815 10:46:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:51.815 10:46:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:51.815 10:46:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:51.815 10:46:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:51.815 10:46:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:51.815 10:46:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:51.815 10:46:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:51.815 
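Teardown mirrors setup: destroy_subsystems removes the NVMe-oF subsystem first and then the backing null bdev, over the same RPC channel that created them. Run by hand against a target that already has the TCP transport created, the pairing looks roughly like this (scripts/rpc.py path relative to the SPDK tree; arguments copied from the trace):

  # Setup for subsystem 0: null bdev with 16-byte metadata and DIF type 1,
  # then the subsystem, its namespace, and a TCP listener on 10.0.0.2:4420.
  ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
      --serial-number 53313233-0 --allow-any-host
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
      -t tcp -a 10.0.0.2 -s 4420

  # Teardown in the order used here: subsystem first, then the bdev.
  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  ./scripts/rpc.py bdev_null_delete bdev_null0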
10:46:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:51.815 10:46:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:51.815 10:46:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:51.815 10:46:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:51.815 10:46:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:51.815 10:46:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:51.815 10:46:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:51.815 10:46:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:37:51.815 10:46:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:51.815 10:46:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:51.815 10:46:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:51.815 10:46:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:51.815 10:46:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:51.815 10:46:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:51.815 10:46:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:51.815 10:46:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:51.815 00:37:51.815 real 0m24.280s 00:37:51.815 user 4m55.742s 00:37:51.815 sys 0m3.716s 00:37:51.815 10:46:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:51.815 10:46:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:51.815 ************************************ 00:37:51.815 END TEST fio_dif_rand_params 00:37:51.815 ************************************ 00:37:51.815 10:46:36 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:37:51.815 10:46:36 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:37:51.815 10:46:36 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:37:51.815 10:46:36 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:51.815 10:46:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:51.815 ************************************ 00:37:51.815 START TEST fio_dif_digest 00:37:51.815 ************************************ 00:37:51.815 10:46:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:37:51.815 10:46:36 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:37:51.815 10:46:36 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:37:51.815 10:46:36 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:37:51.815 10:46:36 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:37:51.815 10:46:36 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:37:51.815 10:46:36 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:37:51.815 10:46:36 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:37:51.815 10:46:36 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:37:51.815 10:46:36 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:37:51.815 10:46:36 
nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:37:51.815 10:46:36 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:37:51.815 10:46:36 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:37:51.815 10:46:36 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:37:51.815 10:46:36 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:37:51.815 10:46:36 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:37:51.815 10:46:36 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:37:51.815 10:46:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:51.815 10:46:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:51.815 bdev_null0 00:37:51.815 10:46:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:51.815 10:46:36 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:51.815 10:46:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:51.815 10:46:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:51.815 10:46:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:51.815 10:46:36 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:51.815 10:46:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:51.815 10:46:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:51.815 10:46:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:51.815 10:46:36 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:51.815 10:46:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:51.815 10:46:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:51.815 [2024-07-14 10:46:36.631128] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:51.815 10:46:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:51.815 10:46:36 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:37:51.815 10:46:36 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:37:51.815 10:46:36 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:51.815 10:46:36 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:37:51.815 10:46:36 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:51.815 10:46:36 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:37:51.815 10:46:36 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:51.815 10:46:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:51.815 10:46:36 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:37:51.815 10:46:36 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:51.815 { 00:37:51.815 "params": { 
00:37:51.815 "name": "Nvme$subsystem", 00:37:51.815 "trtype": "$TEST_TRANSPORT", 00:37:51.815 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:51.815 "adrfam": "ipv4", 00:37:51.815 "trsvcid": "$NVMF_PORT", 00:37:51.815 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:51.815 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:51.815 "hdgst": ${hdgst:-false}, 00:37:51.815 "ddgst": ${ddgst:-false} 00:37:51.815 }, 00:37:51.815 "method": "bdev_nvme_attach_controller" 00:37:51.815 } 00:37:51.815 EOF 00:37:51.815 )") 00:37:51.815 10:46:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:37:51.815 10:46:36 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:37:51.815 10:46:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:51.815 10:46:36 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:37:51.815 10:46:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:37:51.815 10:46:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:51.815 10:46:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:37:51.815 10:46:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:37:51.815 10:46:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:51.815 10:46:36 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:37:51.815 10:46:36 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:37:51.815 10:46:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:51.815 10:46:36 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:37:51.815 10:46:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:37:51.815 10:46:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:51.815 10:46:36 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:37:51.815 10:46:36 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:37:51.815 10:46:36 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:37:51.815 "params": { 00:37:51.816 "name": "Nvme0", 00:37:51.816 "trtype": "tcp", 00:37:51.816 "traddr": "10.0.0.2", 00:37:51.816 "adrfam": "ipv4", 00:37:51.816 "trsvcid": "4420", 00:37:51.816 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:51.816 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:51.816 "hdgst": true, 00:37:51.816 "ddgst": true 00:37:51.816 }, 00:37:51.816 "method": "bdev_nvme_attach_controller" 00:37:51.816 }' 00:37:51.816 10:46:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:37:51.816 10:46:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:37:51.816 10:46:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:51.816 10:46:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:51.816 10:46:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:37:51.816 10:46:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:51.816 10:46:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:37:51.816 10:46:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:37:51.816 10:46:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:51.816 10:46:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:52.074 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:37:52.074 ... 
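The job banner above comes from the fio config that gen_fio_conf streams over /dev/fd/61. Reconstructed by hand for the digest case it would look roughly like the following sketch; only rw, bs, iodepth, numjobs and runtime are taken from the trace (target/dif.sh@127), while the remaining options and the filename are assumptions:

  cat > /tmp/dif_digest.fio <<'EOF'
  ; sketch only: thread, time_based and filename are assumed, not shown in the trace
  [global]
  ioengine=spdk_bdev
  thread=1
  time_based=1
  runtime=10
  rw=randread
  bs=128k
  iodepth=3
  numjobs=3

  [filename0]
  filename=Nvme0n1
  EOF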
00:37:52.074 fio-3.35 00:37:52.074 Starting 3 threads 00:37:52.074 EAL: No free 2048 kB hugepages reported on node 1 00:38:04.314 00:38:04.314 filename0: (groupid=0, jobs=1): err= 0: pid=2653084: Sun Jul 14 10:46:47 2024 00:38:04.314 read: IOPS=281, BW=35.2MiB/s (36.9MB/s)(354MiB/10049msec) 00:38:04.314 slat (nsec): min=6744, max=48304, avg=19078.48, stdev=7972.96 00:38:04.314 clat (usec): min=7981, max=51952, avg=10608.59, stdev=1278.99 00:38:04.314 lat (usec): min=8016, max=51967, avg=10627.66, stdev=1278.80 00:38:04.314 clat percentiles (usec): 00:38:04.314 | 1.00th=[ 8848], 5.00th=[ 9503], 10.00th=[ 9765], 20.00th=[10028], 00:38:04.314 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10552], 60.00th=[10683], 00:38:04.314 | 70.00th=[10945], 80.00th=[11076], 90.00th=[11469], 95.00th=[11731], 00:38:04.314 | 99.00th=[12256], 99.50th=[12518], 99.90th=[13304], 99.95th=[49546], 00:38:04.314 | 99.99th=[52167] 00:38:04.314 bw ( KiB/s): min=35072, max=37888, per=33.71%, avg=36211.20, stdev=711.95, samples=20 00:38:04.314 iops : min= 274, max= 296, avg=282.90, stdev= 5.56, samples=20 00:38:04.314 lat (msec) : 10=19.39%, 20=80.54%, 50=0.04%, 100=0.04% 00:38:04.314 cpu : usr=93.54%, sys=4.57%, ctx=874, majf=0, minf=170 00:38:04.314 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:04.314 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:04.314 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:04.314 issued rwts: total=2832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:04.314 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:04.314 filename0: (groupid=0, jobs=1): err= 0: pid=2653085: Sun Jul 14 10:46:47 2024 00:38:04.314 read: IOPS=284, BW=35.5MiB/s (37.3MB/s)(357MiB/10043msec) 00:38:04.314 slat (nsec): min=6465, max=69971, avg=17093.33, stdev=6495.33 00:38:04.314 clat (usec): min=7914, max=48384, avg=10516.06, stdev=1215.99 00:38:04.314 lat (usec): min=7939, max=48409, avg=10533.16, stdev=1216.30 00:38:04.314 clat percentiles (usec): 00:38:04.314 | 1.00th=[ 8717], 5.00th=[ 9372], 10.00th=[ 9503], 20.00th=[ 9896], 00:38:04.314 | 30.00th=[10159], 40.00th=[10290], 50.00th=[10421], 60.00th=[10683], 00:38:04.314 | 70.00th=[10814], 80.00th=[11076], 90.00th=[11469], 95.00th=[11731], 00:38:04.314 | 99.00th=[12387], 99.50th=[12518], 99.90th=[13304], 99.95th=[46400], 00:38:04.314 | 99.99th=[48497] 00:38:04.314 bw ( KiB/s): min=35072, max=37632, per=34.01%, avg=36531.20, stdev=680.36, samples=20 00:38:04.314 iops : min= 274, max= 294, avg=285.40, stdev= 5.32, samples=20 00:38:04.314 lat (msec) : 10=24.68%, 20=75.25%, 50=0.07% 00:38:04.314 cpu : usr=96.30%, sys=3.08%, ctx=387, majf=0, minf=92 00:38:04.314 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:04.314 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:04.314 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:04.314 issued rwts: total=2856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:04.314 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:04.314 filename0: (groupid=0, jobs=1): err= 0: pid=2653086: Sun Jul 14 10:46:47 2024 00:38:04.314 read: IOPS=273, BW=34.2MiB/s (35.8MB/s)(343MiB/10044msec) 00:38:04.314 slat (nsec): min=6506, max=44572, avg=19354.72, stdev=7196.31 00:38:04.314 clat (usec): min=8301, max=48864, avg=10940.61, stdev=1251.38 00:38:04.314 lat (usec): min=8329, max=48886, avg=10959.97, stdev=1251.63 00:38:04.314 clat percentiles (usec): 00:38:04.314 | 
1.00th=[ 9241], 5.00th=[ 9765], 10.00th=[10028], 20.00th=[10290], 00:38:04.314 | 30.00th=[10552], 40.00th=[10683], 50.00th=[10814], 60.00th=[11076], 00:38:04.314 | 70.00th=[11207], 80.00th=[11469], 90.00th=[11863], 95.00th=[12256], 00:38:04.314 | 99.00th=[12911], 99.50th=[13042], 99.90th=[14484], 99.95th=[46924], 00:38:04.314 | 99.99th=[49021] 00:38:04.314 bw ( KiB/s): min=33792, max=35840, per=32.69%, avg=35110.40, stdev=500.24, samples=20 00:38:04.314 iops : min= 264, max= 280, avg=274.30, stdev= 3.91, samples=20 00:38:04.314 lat (msec) : 10=10.53%, 20=89.40%, 50=0.07% 00:38:04.314 cpu : usr=95.96%, sys=3.72%, ctx=28, majf=0, minf=165 00:38:04.314 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:04.314 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:04.314 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:04.314 issued rwts: total=2745,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:04.314 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:04.314 00:38:04.314 Run status group 0 (all jobs): 00:38:04.314 READ: bw=105MiB/s (110MB/s), 34.2MiB/s-35.5MiB/s (35.8MB/s-37.3MB/s), io=1054MiB (1105MB), run=10043-10049msec 00:38:04.314 10:46:47 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:38:04.314 10:46:47 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:38:04.314 10:46:47 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:38:04.314 10:46:47 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:04.314 10:46:47 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:38:04.314 10:46:47 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:04.314 10:46:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:04.314 10:46:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:04.314 10:46:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:04.314 10:46:47 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:04.314 10:46:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:04.314 10:46:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:04.314 10:46:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:04.314 00:38:04.314 real 0m11.205s 00:38:04.314 user 0m35.627s 00:38:04.314 sys 0m1.471s 00:38:04.314 10:46:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:04.314 10:46:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:04.314 ************************************ 00:38:04.314 END TEST fio_dif_digest 00:38:04.314 ************************************ 00:38:04.314 10:46:47 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:38:04.314 10:46:47 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:38:04.314 10:46:47 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:38:04.314 10:46:47 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:38:04.314 10:46:47 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:38:04.314 10:46:47 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:38:04.314 10:46:47 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:38:04.314 10:46:47 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:38:04.314 10:46:47 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r 
nvme-tcp 00:38:04.314 rmmod nvme_tcp 00:38:04.314 rmmod nvme_fabrics 00:38:04.314 rmmod nvme_keyring 00:38:04.314 10:46:47 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:38:04.314 10:46:47 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:38:04.314 10:46:47 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:38:04.314 10:46:47 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 2644593 ']' 00:38:04.314 10:46:47 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 2644593 00:38:04.314 10:46:47 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 2644593 ']' 00:38:04.314 10:46:47 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 2644593 00:38:04.314 10:46:47 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:38:04.314 10:46:47 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:04.314 10:46:47 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2644593 00:38:04.314 10:46:47 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:38:04.314 10:46:47 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:38:04.314 10:46:47 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2644593' 00:38:04.314 killing process with pid 2644593 00:38:04.314 10:46:47 nvmf_dif -- common/autotest_common.sh@967 -- # kill 2644593 00:38:04.314 10:46:47 nvmf_dif -- common/autotest_common.sh@972 -- # wait 2644593 00:38:04.314 10:46:48 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:38:04.314 10:46:48 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:06.221 Waiting for block devices as requested 00:38:06.221 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:38:06.221 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:38:06.221 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:38:06.221 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:38:06.221 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:38:06.479 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:38:06.479 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:38:06.479 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:38:06.479 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:38:06.738 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:38:06.738 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:38:06.738 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:38:06.996 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:38:06.996 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:38:06.996 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:38:06.996 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:38:07.255 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:38:07.255 10:46:52 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:38:07.255 10:46:52 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:38:07.255 10:46:52 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:38:07.255 10:46:52 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:38:07.255 10:46:52 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:07.255 10:46:52 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:07.255 10:46:52 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:09.789 10:46:54 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:38:09.789 00:38:09.789 real 1m13.995s 00:38:09.789 user 7m13.929s 00:38:09.789 sys 0m17.830s 00:38:09.789 10:46:54 nvmf_dif -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:38:09.789 10:46:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:09.789 ************************************ 00:38:09.789 END TEST nvmf_dif 00:38:09.789 ************************************ 00:38:09.789 10:46:54 -- common/autotest_common.sh@1142 -- # return 0 00:38:09.789 10:46:54 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:38:09.789 10:46:54 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:38:09.789 10:46:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:09.789 10:46:54 -- common/autotest_common.sh@10 -- # set +x 00:38:09.789 ************************************ 00:38:09.789 START TEST nvmf_abort_qd_sizes 00:38:09.789 ************************************ 00:38:09.789 10:46:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:38:09.789 * Looking for test storage... 00:38:09.789 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:09.789 10:46:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:09.789 10:46:54 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:38:09.789 10:46:54 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:09.789 10:46:54 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:09.789 10:46:54 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:09.789 10:46:54 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:09.789 10:46:54 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:09.789 10:46:54 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:09.789 10:46:54 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:09.789 10:46:54 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:09.789 10:46:54 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:09.789 10:46:54 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:09.789 10:46:54 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:38:09.789 10:46:54 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:38:09.789 10:46:54 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:09.789 10:46:54 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:09.789 10:46:54 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:09.789 10:46:54 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:09.789 10:46:54 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:09.789 10:46:54 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:09.789 10:46:54 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:09.789 10:46:54 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:09.789 10:46:54 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:09.789 10:46:54 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:09.789 10:46:54 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:09.789 10:46:54 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:38:09.789 10:46:54 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:09.789 10:46:54 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:38:09.789 10:46:54 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:09.789 10:46:54 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:09.789 10:46:54 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:09.789 10:46:54 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:09.789 10:46:54 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:09.789 10:46:54 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:09.789 10:46:54 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:09.789 10:46:54 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:09.789 10:46:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:38:09.789 10:46:54 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:38:09.789 10:46:54 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:09.789 10:46:54 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:38:09.789 10:46:54 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:38:09.789 10:46:54 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:38:09.789 10:46:54 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:09.789 10:46:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:09.789 10:46:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:09.789 10:46:54 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:38:09.789 10:46:54 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:38:09.789 10:46:54 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:38:09.789 10:46:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:38:15.063 Found 0000:86:00.0 (0x8086 - 0x159b) 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:38:15.063 Found 0000:86:00.1 (0x8086 - 0x159b) 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:38:15.063 Found net devices under 0000:86:00.0: cvl_0_0 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:38:15.063 Found net devices under 0000:86:00.1: cvl_0_1 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
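NIC discovery here is plain sysfs: for each candidate PCI function the script globs /sys/bus/pci/devices/<bdf>/net/ and keeps the interfaces that report as up (the [[ up == up ]] checks in the trace), which is how 0000:86:00.0 and 0000:86:00.1 resolve to cvl_0_0 and cvl_0_1. A standalone sketch of the same lookup, using the two BDFs from this run:

  for pci in 0000:86:00.0 0000:86:00.1; do
      # every entry under .../net/ is a kernel netdev bound to that PCI function
      for dev in /sys/bus/pci/devices/"$pci"/net/*; do
          [ -e "$dev" ] || continue
          printf '%s -> %s (%s)\n' "$pci" "$(basename "$dev")" "$(cat "$dev"/operstate)"
      done
  done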
00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:38:15.063 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:15.063 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.140 ms 00:38:15.063 00:38:15.063 --- 10.0.0.2 ping statistics --- 00:38:15.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:15.063 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:38:15.063 10:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:15.063 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:15.063 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:38:15.063 00:38:15.063 --- 10.0.0.1 ping statistics --- 00:38:15.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:15.063 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:38:15.063 10:47:00 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:15.063 10:47:00 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:38:15.063 10:47:00 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:38:15.064 10:47:00 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:18.352 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:38:18.352 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:38:18.352 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:38:18.352 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:38:18.352 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:38:18.352 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:38:18.352 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:38:18.352 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:38:18.352 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:38:18.352 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:38:18.352 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:38:18.352 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:38:18.352 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:38:18.352 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:38:18.352 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:38:18.352 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:38:18.918 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:38:18.918 10:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:18.918 10:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:38:18.918 10:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:38:18.918 10:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:18.918 10:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:38:18.918 10:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:38:18.918 10:47:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:38:18.918 10:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:38:18.918 10:47:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:38:18.918 10:47:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:18.918 10:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=2660908 00:38:18.918 10:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 2660908 00:38:18.918 10:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:38:18.918 10:47:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 2660908 ']' 00:38:18.918 10:47:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:18.918 10:47:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:18.918 10:47:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:38:18.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:18.918 10:47:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:18.918 10:47:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:18.918 [2024-07-14 10:47:03.882161] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:38:18.918 [2024-07-14 10:47:03.882203] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:19.176 EAL: No free 2048 kB hugepages reported on node 1 00:38:19.176 [2024-07-14 10:47:03.954810] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:19.176 [2024-07-14 10:47:03.998018] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:19.176 [2024-07-14 10:47:03.998057] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:19.176 [2024-07-14 10:47:03.998064] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:19.176 [2024-07-14 10:47:03.998071] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:19.176 [2024-07-14 10:47:03.998076] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:19.176 [2024-07-14 10:47:03.998135] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:38:19.176 [2024-07-14 10:47:03.998259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:38:19.176 [2024-07-14 10:47:03.998314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:19.176 [2024-07-14 10:47:03.998314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:38:19.744 10:47:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:19.744 10:47:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:38:19.744 10:47:04 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:38:19.744 10:47:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:38:19.744 10:47:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:19.744 10:47:04 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:19.744 10:47:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:38:19.744 10:47:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:38:20.004 10:47:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:38:20.004 10:47:04 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:38:20.004 10:47:04 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:38:20.004 10:47:04 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:5e:00.0 ]] 00:38:20.004 10:47:04 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:38:20.004 10:47:04 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:38:20.004 10:47:04 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 00:38:20.004 10:47:04 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:38:20.004 10:47:04 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:38:20.004 10:47:04 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:38:20.004 10:47:04 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:38:20.004 10:47:04 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:5e:00.0 00:38:20.004 10:47:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:38:20.004 10:47:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:38:20.004 10:47:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:38:20.004 10:47:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:38:20.004 10:47:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:20.004 10:47:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:20.004 ************************************ 00:38:20.004 START TEST spdk_target_abort 00:38:20.004 ************************************ 00:38:20.004 10:47:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:38:20.004 10:47:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:38:20.004 10:47:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:38:20.004 10:47:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:20.004 10:47:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:23.295 spdk_targetn1 00:38:23.295 10:47:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:23.295 10:47:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:23.295 10:47:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:23.295 10:47:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:23.295 [2024-07-14 10:47:07.596258] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:23.295 10:47:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:23.295 10:47:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:38:23.295 10:47:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:23.295 10:47:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:23.295 10:47:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:23.295 10:47:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:38:23.295 10:47:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:23.295 10:47:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:23.295 10:47:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:23.295 10:47:07 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:38:23.295 10:47:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:23.295 10:47:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:23.295 [2024-07-14 10:47:07.625270] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:23.295 10:47:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:23.295 10:47:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:38:23.295 10:47:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:38:23.295 10:47:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:38:23.295 10:47:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:38:23.295 10:47:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:38:23.295 10:47:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:38:23.295 10:47:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:38:23.295 10:47:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:38:23.295 10:47:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:38:23.295 10:47:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:23.295 10:47:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:38:23.295 10:47:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:23.295 10:47:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:38:23.295 10:47:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:23.295 10:47:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:38:23.295 10:47:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:23.295 10:47:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:23.295 10:47:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:23.295 10:47:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:23.295 10:47:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:23.295 10:47:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:23.295 EAL: No free 2048 kB hugepages 
reported on node 1 00:38:25.834 Initializing NVMe Controllers 00:38:25.834 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:38:25.834 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:25.834 Initialization complete. Launching workers. 00:38:25.834 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 16122, failed: 0 00:38:25.834 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1408, failed to submit 14714 00:38:25.834 success 736, unsuccess 672, failed 0 00:38:25.834 10:47:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:25.834 10:47:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:25.834 EAL: No free 2048 kB hugepages reported on node 1 00:38:29.127 Initializing NVMe Controllers 00:38:29.127 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:38:29.127 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:29.127 Initialization complete. Launching workers. 00:38:29.127 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8687, failed: 0 00:38:29.127 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1242, failed to submit 7445 00:38:29.127 success 325, unsuccess 917, failed 0 00:38:29.127 10:47:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:29.127 10:47:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:29.127 EAL: No free 2048 kB hugepages reported on node 1 00:38:32.414 Initializing NVMe Controllers 00:38:32.414 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:38:32.415 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:32.415 Initialization complete. Launching workers. 
00:38:32.415 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38167, failed: 0 00:38:32.415 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2794, failed to submit 35373 00:38:32.415 success 589, unsuccess 2205, failed 0 00:38:32.415 10:47:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:38:32.415 10:47:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:32.415 10:47:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:32.415 10:47:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:32.415 10:47:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:38:32.415 10:47:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:32.415 10:47:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:33.795 10:47:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:33.795 10:47:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2660908 00:38:33.795 10:47:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 2660908 ']' 00:38:33.795 10:47:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 2660908 00:38:33.795 10:47:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:38:33.795 10:47:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:33.795 10:47:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2660908 00:38:33.795 10:47:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:38:33.795 10:47:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:38:33.795 10:47:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2660908' 00:38:33.795 killing process with pid 2660908 00:38:33.795 10:47:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 2660908 00:38:33.795 10:47:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 2660908 00:38:34.055 00:38:34.055 real 0m14.071s 00:38:34.055 user 0m56.235s 00:38:34.055 sys 0m2.235s 00:38:34.055 10:47:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:34.055 10:47:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:34.055 ************************************ 00:38:34.055 END TEST spdk_target_abort 00:38:34.055 ************************************ 00:38:34.055 10:47:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:38:34.055 10:47:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:38:34.055 10:47:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:38:34.055 10:47:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:34.055 10:47:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:34.055 
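The spdk_target_abort case above drives a freshly started nvmf_tgt entirely over JSON-RPC before firing the abort example at queue depths 4, 24 and 64. A minimal hand-run sketch of that sequence, condensed from the xtrace output (rpc.py here stands for scripts/rpc.py against the default /var/tmp/spdk.sock the target announced above; the PCIe address, NQN and serial are the values used in this run):

  # claim the local NVMe disk as an SPDK bdev and export it over NVMe/TCP
  rpc.py bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420

  # run the abort workload once per queue depth, as the rabort loop in abort_qd_sizes.sh does
  for qd in 4 24 64; do
    ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
  done

Reading the per-run summaries: "abort submitted" plus "failed to submit" equals "I/O completed" (16122 = 1408 + 14714 at qd=4), and "success" plus "unsuccess" equals the aborts submitted (1408 = 736 + 672); the same relation holds for the qd=24 and qd=64 runs.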
************************************ 00:38:34.055 START TEST kernel_target_abort 00:38:34.055 ************************************ 00:38:34.055 10:47:18 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:38:34.055 10:47:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:38:34.055 10:47:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:38:34.055 10:47:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:34.055 10:47:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:34.055 10:47:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:34.055 10:47:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:34.055 10:47:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:34.055 10:47:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:34.055 10:47:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:34.055 10:47:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:34.055 10:47:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:34.055 10:47:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:38:34.055 10:47:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:38:34.055 10:47:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:38:34.055 10:47:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:34.055 10:47:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:34.055 10:47:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:38:34.055 10:47:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:38:34.055 10:47:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:38:34.055 10:47:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:38:34.055 10:47:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:38:34.055 10:47:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:36.590 Waiting for block devices as requested 00:38:36.849 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:38:36.849 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:38:36.849 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:38:37.107 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:38:37.107 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:38:37.107 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:38:37.366 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:38:37.366 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:38:37.366 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:38:37.366 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:38:37.624 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:38:37.624 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:38:37.624 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:38:37.624 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:38:37.883 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:38:37.883 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:38:37.883 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:38:38.142 10:47:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:38:38.142 10:47:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:38:38.142 10:47:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:38:38.142 10:47:22 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:38:38.142 10:47:22 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:38:38.142 10:47:22 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:38:38.142 10:47:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:38:38.142 10:47:22 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:38:38.142 10:47:22 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:38:38.142 No valid GPT data, bailing 00:38:38.142 10:47:22 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:38:38.142 10:47:22 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:38:38.142 10:47:22 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:38:38.142 10:47:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:38:38.142 10:47:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:38:38.142 10:47:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:38.142 10:47:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:38.142 10:47:22 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:38:38.142 10:47:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:38:38.142 10:47:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:38:38.142 10:47:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:38:38.142 10:47:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:38:38.142 10:47:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:38:38.142 10:47:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:38:38.142 10:47:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:38:38.142 10:47:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:38:38.143 10:47:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:38:38.143 10:47:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:38:38.143 00:38:38.143 Discovery Log Number of Records 2, Generation counter 2 00:38:38.143 =====Discovery Log Entry 0====== 00:38:38.143 trtype: tcp 00:38:38.143 adrfam: ipv4 00:38:38.143 subtype: current discovery subsystem 00:38:38.143 treq: not specified, sq flow control disable supported 00:38:38.143 portid: 1 00:38:38.143 trsvcid: 4420 00:38:38.143 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:38:38.143 traddr: 10.0.0.1 00:38:38.143 eflags: none 00:38:38.143 sectype: none 00:38:38.143 =====Discovery Log Entry 1====== 00:38:38.143 trtype: tcp 00:38:38.143 adrfam: ipv4 00:38:38.143 subtype: nvme subsystem 00:38:38.143 treq: not specified, sq flow control disable supported 00:38:38.143 portid: 1 00:38:38.143 trsvcid: 4420 00:38:38.143 subnqn: nqn.2016-06.io.spdk:testnqn 00:38:38.143 traddr: 10.0.0.1 00:38:38.143 eflags: none 00:38:38.143 sectype: none 00:38:38.143 10:47:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:38:38.143 10:47:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:38:38.143 10:47:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:38:38.143 10:47:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:38:38.143 10:47:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:38:38.143 10:47:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:38:38.143 10:47:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:38:38.143 10:47:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:38:38.143 10:47:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:38:38.143 10:47:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:38.143 10:47:23 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:38:38.143 10:47:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:38.143 10:47:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:38:38.143 10:47:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:38.143 10:47:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:38:38.143 10:47:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:38.143 10:47:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:38:38.143 10:47:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:38.143 10:47:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:38.143 10:47:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:38.143 10:47:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:38.143 EAL: No free 2048 kB hugepages reported on node 1 00:38:41.431 Initializing NVMe Controllers 00:38:41.431 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:41.431 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:41.431 Initialization complete. Launching workers. 00:38:41.431 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 90308, failed: 0 00:38:41.431 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 90308, failed to submit 0 00:38:41.431 success 0, unsuccess 90308, failed 0 00:38:41.431 10:47:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:41.431 10:47:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:41.431 EAL: No free 2048 kB hugepages reported on node 1 00:38:44.721 Initializing NVMe Controllers 00:38:44.721 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:44.721 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:44.721 Initialization complete. Launching workers. 
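The kernel_target_abort leg needs no SPDK target at all: configure_kernel_target rebuilds the same testnqn subsystem out of nvmet configfs entries, backed by the raw /dev/nvme0n1 that setup.sh reset handed back to the kernel. The xtrace above shows the echo values but not their redirection targets, so the attribute paths in the sketch below are the standard nvmet configfs names rather than lines from the log (the SPDK-nqn… serial string write is left out because its target attribute is not visible in the trace); the IP, port and NQN are the ones used in this run:

  modprobe nvmet
  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  mkdir "$subsys"
  mkdir "$subsys/namespaces/1"
  mkdir "$nvmet/ports/1"
  echo 1             > "$subsys/attr_allow_any_host"
  echo /dev/nvme0n1  > "$subsys/namespaces/1/device_path"
  echo 1             > "$subsys/namespaces/1/enable"
  echo 10.0.0.1      > "$nvmet/ports/1/addr_traddr"
  echo tcp           > "$nvmet/ports/1/addr_trtype"
  echo 4420          > "$nvmet/ports/1/addr_trsvcid"
  echo ipv4          > "$nvmet/ports/1/addr_adrfam"
  ln -s "$subsys" "$nvmet/ports/1/subsystems/"

  # sanity check, as the suite does: the discovery log should show two entries,
  # the discovery subsystem itself and nqn.2016-06.io.spdk:testnqn on 10.0.0.1:4420
  nvme discover -t tcp -a 10.0.0.1 -s 4420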
00:38:44.721 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 145825, failed: 0 00:38:44.721 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36626, failed to submit 109199 00:38:44.721 success 0, unsuccess 36626, failed 0 00:38:44.721 10:47:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:44.721 10:47:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:44.721 EAL: No free 2048 kB hugepages reported on node 1 00:38:48.008 Initializing NVMe Controllers 00:38:48.008 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:48.008 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:48.008 Initialization complete. Launching workers. 00:38:48.008 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 138111, failed: 0 00:38:48.008 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 34586, failed to submit 103525 00:38:48.008 success 0, unsuccess 34586, failed 0 00:38:48.008 10:47:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:38:48.008 10:47:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:38:48.008 10:47:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:38:48.008 10:47:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:48.008 10:47:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:48.008 10:47:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:38:48.008 10:47:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:48.008 10:47:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:38:48.008 10:47:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:38:48.008 10:47:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:50.581 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:38:50.581 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:38:50.581 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:38:50.581 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:38:50.581 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:38:50.581 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:38:50.581 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:38:50.581 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:38:50.581 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:38:50.581 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:38:50.581 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:38:50.581 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:38:50.581 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:38:50.581 0000:80:04.2 (8086 2021): ioatdma 
-> vfio-pci 00:38:50.581 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:38:50.581 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:38:51.149 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:38:51.407 00:38:51.407 real 0m17.312s 00:38:51.407 user 0m8.848s 00:38:51.407 sys 0m4.976s 00:38:51.407 10:47:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:51.407 10:47:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:51.407 ************************************ 00:38:51.407 END TEST kernel_target_abort 00:38:51.407 ************************************ 00:38:51.407 10:47:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:38:51.407 10:47:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:38:51.407 10:47:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:38:51.407 10:47:36 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:38:51.407 10:47:36 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:38:51.407 10:47:36 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:38:51.407 10:47:36 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:38:51.407 10:47:36 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:38:51.407 10:47:36 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:38:51.407 rmmod nvme_tcp 00:38:51.407 rmmod nvme_fabrics 00:38:51.407 rmmod nvme_keyring 00:38:51.407 10:47:36 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:38:51.407 10:47:36 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:38:51.407 10:47:36 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:38:51.407 10:47:36 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 2660908 ']' 00:38:51.407 10:47:36 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 2660908 00:38:51.407 10:47:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 2660908 ']' 00:38:51.407 10:47:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 2660908 00:38:51.407 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2660908) - No such process 00:38:51.407 10:47:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 2660908 is not found' 00:38:51.407 Process with pid 2660908 is not found 00:38:51.407 10:47:36 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:38:51.407 10:47:36 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:53.936 Waiting for block devices as requested 00:38:54.194 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:38:54.194 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:38:54.194 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:38:54.450 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:38:54.450 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:38:54.450 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:38:54.707 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:38:54.707 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:38:54.707 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:38:54.966 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:38:54.966 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:38:54.966 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:38:54.966 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:38:55.225 0000:80:04.3 (8086 2021): vfio-pci -> 
ioatdma 00:38:55.225 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:38:55.225 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:38:55.484 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:38:55.484 10:47:40 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:38:55.484 10:47:40 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:38:55.484 10:47:40 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:38:55.484 10:47:40 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:38:55.484 10:47:40 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:55.484 10:47:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:55.484 10:47:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:58.014 10:47:42 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:38:58.014 00:38:58.014 real 0m48.125s 00:38:58.014 user 1m9.349s 00:38:58.014 sys 0m15.641s 00:38:58.014 10:47:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:58.014 10:47:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:58.014 ************************************ 00:38:58.014 END TEST nvmf_abort_qd_sizes 00:38:58.014 ************************************ 00:38:58.014 10:47:42 -- common/autotest_common.sh@1142 -- # return 0 00:38:58.014 10:47:42 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:38:58.014 10:47:42 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:38:58.014 10:47:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:58.014 10:47:42 -- common/autotest_common.sh@10 -- # set +x 00:38:58.014 ************************************ 00:38:58.014 START TEST keyring_file 00:38:58.014 ************************************ 00:38:58.014 10:47:42 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:38:58.014 * Looking for test storage... 
00:38:58.014 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:38:58.014 10:47:42 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:38:58.014 10:47:42 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:58.014 10:47:42 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:38:58.014 10:47:42 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:58.014 10:47:42 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:58.014 10:47:42 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:58.014 10:47:42 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:58.014 10:47:42 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:58.014 10:47:42 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:58.014 10:47:42 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:58.014 10:47:42 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:58.014 10:47:42 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:58.014 10:47:42 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:58.014 10:47:42 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:38:58.014 10:47:42 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:38:58.014 10:47:42 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:58.014 10:47:42 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:58.014 10:47:42 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:58.014 10:47:42 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:58.014 10:47:42 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:58.014 10:47:42 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:58.014 10:47:42 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:58.014 10:47:42 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:58.014 10:47:42 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:58.014 10:47:42 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:58.014 10:47:42 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:58.014 10:47:42 keyring_file -- paths/export.sh@5 -- # export PATH 00:38:58.014 10:47:42 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:58.014 10:47:42 keyring_file -- nvmf/common.sh@47 -- # : 0 00:38:58.014 10:47:42 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:58.014 10:47:42 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:58.014 10:47:42 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:58.014 10:47:42 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:58.014 10:47:42 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:58.014 10:47:42 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:58.014 10:47:42 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:58.014 10:47:42 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:58.014 10:47:42 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:38:58.014 10:47:42 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:38:58.014 10:47:42 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:38:58.014 10:47:42 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:38:58.014 10:47:42 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:38:58.014 10:47:42 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:38:58.014 10:47:42 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:38:58.014 10:47:42 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:38:58.014 10:47:42 keyring_file -- keyring/common.sh@17 -- # name=key0 00:38:58.014 10:47:42 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:58.014 10:47:42 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:58.014 10:47:42 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:58.014 10:47:42 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.LxwQXZOssL 00:38:58.014 10:47:42 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:58.014 10:47:42 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:58.014 10:47:42 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:38:58.014 10:47:42 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:38:58.014 10:47:42 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:38:58.014 10:47:42 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:38:58.014 10:47:42 keyring_file -- nvmf/common.sh@705 -- # python - 00:38:58.014 10:47:42 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.LxwQXZOssL 00:38:58.014 10:47:42 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.LxwQXZOssL 00:38:58.014 10:47:42 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.LxwQXZOssL 00:38:58.014 10:47:42 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:38:58.014 10:47:42 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:38:58.014 10:47:42 keyring_file -- keyring/common.sh@17 -- # name=key1 00:38:58.014 10:47:42 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:38:58.014 10:47:42 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:58.014 10:47:42 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:58.014 10:47:42 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.s1UZedJF6l 00:38:58.014 10:47:42 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:38:58.014 10:47:42 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:38:58.014 10:47:42 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:38:58.014 10:47:42 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:38:58.014 10:47:42 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:38:58.014 10:47:42 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:38:58.014 10:47:42 keyring_file -- nvmf/common.sh@705 -- # python - 00:38:58.014 10:47:42 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.s1UZedJF6l 00:38:58.014 10:47:42 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.s1UZedJF6l 00:38:58.014 10:47:42 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.s1UZedJF6l 00:38:58.014 10:47:42 keyring_file -- keyring/file.sh@30 -- # tgtpid=2669731 00:38:58.014 10:47:42 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2669731 00:38:58.014 10:47:42 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:38:58.014 10:47:42 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 2669731 ']' 00:38:58.014 10:47:42 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:58.014 10:47:42 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:58.014 10:47:42 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:58.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:58.015 10:47:42 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:58.015 10:47:42 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:58.015 [2024-07-14 10:47:42.712616] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:38:58.015 [2024-07-14 10:47:42.712669] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2669731 ] 00:38:58.015 EAL: No free 2048 kB hugepages reported on node 1 00:38:58.015 [2024-07-14 10:47:42.780098] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:58.015 [2024-07-14 10:47:42.821162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:58.273 10:47:43 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:58.273 10:47:43 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:38:58.273 10:47:43 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:38:58.273 10:47:43 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:58.273 10:47:43 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:58.273 [2024-07-14 10:47:43.009921] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:58.273 null0 00:38:58.273 [2024-07-14 10:47:43.041967] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:38:58.273 [2024-07-14 10:47:43.042289] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:38:58.273 [2024-07-14 10:47:43.049983] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:38:58.273 10:47:43 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:58.273 10:47:43 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:58.273 10:47:43 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:38:58.273 10:47:43 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:58.273 10:47:43 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:38:58.273 10:47:43 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:58.273 10:47:43 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:38:58.273 10:47:43 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:58.273 10:47:43 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:58.273 10:47:43 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:58.273 10:47:43 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:58.273 [2024-07-14 10:47:43.062013] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:38:58.273 request: 00:38:58.273 { 00:38:58.273 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:38:58.273 "secure_channel": false, 00:38:58.273 "listen_address": { 00:38:58.273 "trtype": "tcp", 00:38:58.273 "traddr": "127.0.0.1", 00:38:58.273 "trsvcid": "4420" 00:38:58.273 }, 00:38:58.273 "method": "nvmf_subsystem_add_listener", 00:38:58.273 "req_id": 1 00:38:58.273 } 00:38:58.273 Got JSON-RPC error response 00:38:58.273 response: 00:38:58.273 { 00:38:58.273 "code": -32602, 00:38:58.273 "message": "Invalid parameters" 00:38:58.273 } 00:38:58.273 10:47:43 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:38:58.273 10:47:43 keyring_file -- common/autotest_common.sh@651 -- # es=1 
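Before any of the keyring cases run, prep_key turns the two hex strings above into file-backed NVMe/TCP TLS PSKs: a mktemp file per key, the hex piped through the inline Python helper behind format_interchange_psk (not expanded in the trace) to produce the NVMeTLSkey-1 interchange form, and a chmod to 0600 so that a later case can loosen it to 0660 and expect keyring_file_add_key to refuse it. A hand-run equivalent against the bdevperf instance started just below on /var/tmp/bperf.sock (key name, hex value and socket path are the ones from this run; the file contents come from the helper and are not reproduced here):

  key0=00112233445566778899aabbccddeeff
  key0path=$(mktemp)                                  # /tmp/tmp.LxwQXZOssL in this run
  # format_interchange_psk "$key0" 0 > "$key0path"    # inline python in nvmf/common.sh
  chmod 0600 "$key0path"
  # register the file-backed key with the bdevperf app over its RPC socket
  scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 "$key0path"
  scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys   # .path of key0 should equal $key0path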
00:38:58.273 10:47:43 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:38:58.273 10:47:43 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:38:58.273 10:47:43 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:38:58.273 10:47:43 keyring_file -- keyring/file.sh@46 -- # bperfpid=2669735 00:38:58.273 10:47:43 keyring_file -- keyring/file.sh@48 -- # waitforlisten 2669735 /var/tmp/bperf.sock 00:38:58.273 10:47:43 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:38:58.273 10:47:43 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 2669735 ']' 00:38:58.273 10:47:43 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:58.273 10:47:43 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:58.273 10:47:43 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:58.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:58.273 10:47:43 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:58.273 10:47:43 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:58.273 [2024-07-14 10:47:43.115482] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:38:58.273 [2024-07-14 10:47:43.115524] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2669735 ] 00:38:58.273 EAL: No free 2048 kB hugepages reported on node 1 00:38:58.273 [2024-07-14 10:47:43.164422] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:58.273 [2024-07-14 10:47:43.203762] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:38:58.531 10:47:43 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:58.531 10:47:43 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:38:58.531 10:47:43 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.LxwQXZOssL 00:38:58.531 10:47:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.LxwQXZOssL 00:38:58.531 10:47:43 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.s1UZedJF6l 00:38:58.531 10:47:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.s1UZedJF6l 00:38:58.789 10:47:43 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:38:58.789 10:47:43 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:38:58.789 10:47:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:58.789 10:47:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:58.789 10:47:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:59.047 10:47:43 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.LxwQXZOssL == \/\t\m\p\/\t\m\p\.\L\x\w\Q\X\Z\O\s\s\L ]] 00:38:59.047 10:47:43 keyring_file -- 
keyring/file.sh@52 -- # jq -r .path 00:38:59.047 10:47:43 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:38:59.047 10:47:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:59.047 10:47:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:59.047 10:47:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:59.047 10:47:44 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.s1UZedJF6l == \/\t\m\p\/\t\m\p\.\s\1\U\Z\e\d\J\F\6\l ]] 00:38:59.047 10:47:44 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:38:59.047 10:47:44 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:59.047 10:47:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:59.047 10:47:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:59.047 10:47:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:59.047 10:47:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:59.306 10:47:44 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:38:59.306 10:47:44 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:38:59.306 10:47:44 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:59.306 10:47:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:59.306 10:47:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:59.306 10:47:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:59.306 10:47:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:59.565 10:47:44 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:38:59.565 10:47:44 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:59.565 10:47:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:59.565 [2024-07-14 10:47:44.522861] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:59.824 nvme0n1 00:38:59.824 10:47:44 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:38:59.824 10:47:44 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:59.824 10:47:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:59.824 10:47:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:59.824 10:47:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:59.824 10:47:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:59.824 10:47:44 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:38:59.824 10:47:44 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:38:59.824 10:47:44 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:59.824 10:47:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:59.824 10:47:44 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:59.824 10:47:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:59.825 10:47:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:00.084 10:47:44 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:39:00.084 10:47:44 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:39:00.084 Running I/O for 1 seconds... 00:39:01.459 00:39:01.459 Latency(us) 00:39:01.459 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:01.459 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:39:01.459 nvme0n1 : 1.00 17446.39 68.15 0.00 0.00 7318.66 2820.90 11055.64 00:39:01.459 =================================================================================================================== 00:39:01.459 Total : 17446.39 68.15 0.00 0.00 7318.66 2820.90 11055.64 00:39:01.459 0 00:39:01.459 10:47:46 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:39:01.459 10:47:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:39:01.459 10:47:46 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:39:01.459 10:47:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:01.459 10:47:46 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:01.459 10:47:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:01.459 10:47:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:01.459 10:47:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:01.717 10:47:46 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:39:01.717 10:47:46 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:39:01.717 10:47:46 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:01.717 10:47:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:01.717 10:47:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:01.717 10:47:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:01.717 10:47:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:01.717 10:47:46 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:39:01.717 10:47:46 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:01.717 10:47:46 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:39:01.717 10:47:46 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:01.717 10:47:46 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:39:01.717 10:47:46 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:01.717 10:47:46 keyring_file -- 
common/autotest_common.sh@640 -- # type -t bperf_cmd 00:39:01.717 10:47:46 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:01.717 10:47:46 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:01.717 10:47:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:01.974 [2024-07-14 10:47:46.793796] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:39:01.974 [2024-07-14 10:47:46.794008] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14bdcd0 (107): Transport endpoint is not connected 00:39:01.974 [2024-07-14 10:47:46.795004] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14bdcd0 (9): Bad file descriptor 00:39:01.974 [2024-07-14 10:47:46.796005] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:39:01.974 [2024-07-14 10:47:46.796016] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:39:01.974 [2024-07-14 10:47:46.796023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:39:01.974 request: 00:39:01.974 { 00:39:01.974 "name": "nvme0", 00:39:01.974 "trtype": "tcp", 00:39:01.974 "traddr": "127.0.0.1", 00:39:01.974 "adrfam": "ipv4", 00:39:01.974 "trsvcid": "4420", 00:39:01.974 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:01.974 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:01.974 "prchk_reftag": false, 00:39:01.974 "prchk_guard": false, 00:39:01.974 "hdgst": false, 00:39:01.974 "ddgst": false, 00:39:01.974 "psk": "key1", 00:39:01.974 "method": "bdev_nvme_attach_controller", 00:39:01.974 "req_id": 1 00:39:01.974 } 00:39:01.974 Got JSON-RPC error response 00:39:01.974 response: 00:39:01.974 { 00:39:01.974 "code": -5, 00:39:01.974 "message": "Input/output error" 00:39:01.974 } 00:39:01.974 10:47:46 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:39:01.974 10:47:46 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:01.974 10:47:46 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:39:01.974 10:47:46 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:01.974 10:47:46 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:39:01.974 10:47:46 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:01.974 10:47:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:01.974 10:47:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:01.974 10:47:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:01.974 10:47:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:02.232 10:47:46 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:39:02.232 10:47:46 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:39:02.232 10:47:47 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:02.232 10:47:47 
keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:02.232 10:47:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:02.232 10:47:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:02.232 10:47:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:02.232 10:47:47 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:39:02.232 10:47:47 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:39:02.232 10:47:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:39:02.490 10:47:47 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:39:02.490 10:47:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:39:02.749 10:47:47 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:39:02.749 10:47:47 keyring_file -- keyring/file.sh@77 -- # jq length 00:39:02.749 10:47:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:02.749 10:47:47 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:39:02.749 10:47:47 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.LxwQXZOssL 00:39:02.749 10:47:47 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.LxwQXZOssL 00:39:02.749 10:47:47 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:39:02.750 10:47:47 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.LxwQXZOssL 00:39:02.750 10:47:47 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:39:02.750 10:47:47 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:02.750 10:47:47 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:39:02.750 10:47:47 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:02.750 10:47:47 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.LxwQXZOssL 00:39:02.750 10:47:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.LxwQXZOssL 00:39:03.008 [2024-07-14 10:47:47.854125] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.LxwQXZOssL': 0100660 00:39:03.008 [2024-07-14 10:47:47.854149] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:39:03.009 request: 00:39:03.009 { 00:39:03.009 "name": "key0", 00:39:03.009 "path": "/tmp/tmp.LxwQXZOssL", 00:39:03.009 "method": "keyring_file_add_key", 00:39:03.009 "req_id": 1 00:39:03.009 } 00:39:03.009 Got JSON-RPC error response 00:39:03.009 response: 00:39:03.009 { 00:39:03.009 "code": -1, 00:39:03.009 "message": "Operation not permitted" 00:39:03.009 } 00:39:03.009 10:47:47 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:39:03.009 10:47:47 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:03.009 10:47:47 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:39:03.009 10:47:47 keyring_file -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:03.009 10:47:47 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.LxwQXZOssL 00:39:03.009 10:47:47 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.LxwQXZOssL 00:39:03.009 10:47:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.LxwQXZOssL 00:39:03.268 10:47:48 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.LxwQXZOssL 00:39:03.268 10:47:48 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:39:03.268 10:47:48 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:03.268 10:47:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:03.268 10:47:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:03.268 10:47:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:03.268 10:47:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:03.527 10:47:48 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:39:03.527 10:47:48 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:03.527 10:47:48 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:39:03.527 10:47:48 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:03.527 10:47:48 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:39:03.527 10:47:48 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:03.527 10:47:48 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:39:03.527 10:47:48 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:03.527 10:47:48 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:03.527 10:47:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:03.527 [2024-07-14 10:47:48.427655] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.LxwQXZOssL': No such file or directory 00:39:03.527 [2024-07-14 10:47:48.427679] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:39:03.527 [2024-07-14 10:47:48.427699] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:39:03.527 [2024-07-14 10:47:48.427705] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:39:03.527 [2024-07-14 10:47:48.427711] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:39:03.527 request: 00:39:03.527 { 00:39:03.527 "name": "nvme0", 00:39:03.527 "trtype": "tcp", 00:39:03.527 "traddr": "127.0.0.1", 00:39:03.527 "adrfam": "ipv4", 00:39:03.527 
"trsvcid": "4420", 00:39:03.527 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:03.527 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:03.527 "prchk_reftag": false, 00:39:03.527 "prchk_guard": false, 00:39:03.527 "hdgst": false, 00:39:03.527 "ddgst": false, 00:39:03.527 "psk": "key0", 00:39:03.527 "method": "bdev_nvme_attach_controller", 00:39:03.527 "req_id": 1 00:39:03.527 } 00:39:03.527 Got JSON-RPC error response 00:39:03.527 response: 00:39:03.527 { 00:39:03.527 "code": -19, 00:39:03.527 "message": "No such device" 00:39:03.527 } 00:39:03.527 10:47:48 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:39:03.527 10:47:48 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:03.527 10:47:48 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:39:03.527 10:47:48 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:03.527 10:47:48 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:39:03.527 10:47:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:39:03.787 10:47:48 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:39:03.787 10:47:48 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:39:03.787 10:47:48 keyring_file -- keyring/common.sh@17 -- # name=key0 00:39:03.787 10:47:48 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:39:03.787 10:47:48 keyring_file -- keyring/common.sh@17 -- # digest=0 00:39:03.787 10:47:48 keyring_file -- keyring/common.sh@18 -- # mktemp 00:39:03.787 10:47:48 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.lsCrB5OhCi 00:39:03.787 10:47:48 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:39:03.787 10:47:48 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:39:03.787 10:47:48 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:39:03.787 10:47:48 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:39:03.787 10:47:48 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:39:03.787 10:47:48 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:39:03.787 10:47:48 keyring_file -- nvmf/common.sh@705 -- # python - 00:39:03.787 10:47:48 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.lsCrB5OhCi 00:39:03.787 10:47:48 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.lsCrB5OhCi 00:39:03.787 10:47:48 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.lsCrB5OhCi 00:39:03.787 10:47:48 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.lsCrB5OhCi 00:39:03.787 10:47:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.lsCrB5OhCi 00:39:04.046 10:47:48 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:04.046 10:47:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:04.305 nvme0n1 00:39:04.305 
10:47:49 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:39:04.305 10:47:49 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:04.305 10:47:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:04.305 10:47:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:04.305 10:47:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:04.305 10:47:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:04.305 10:47:49 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:39:04.305 10:47:49 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:39:04.305 10:47:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:39:04.564 10:47:49 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:39:04.564 10:47:49 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:39:04.564 10:47:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:04.564 10:47:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:04.564 10:47:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:04.822 10:47:49 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:39:04.822 10:47:49 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:39:04.822 10:47:49 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:04.822 10:47:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:04.822 10:47:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:04.822 10:47:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:04.822 10:47:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:04.822 10:47:49 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:39:04.822 10:47:49 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:39:04.822 10:47:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:39:05.081 10:47:49 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:39:05.081 10:47:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:05.081 10:47:49 keyring_file -- keyring/file.sh@104 -- # jq length 00:39:05.344 10:47:50 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:39:05.344 10:47:50 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.lsCrB5OhCi 00:39:05.344 10:47:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.lsCrB5OhCi 00:39:05.641 10:47:50 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.s1UZedJF6l 00:39:05.641 10:47:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.s1UZedJF6l 00:39:05.641 10:47:50 
keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:05.641 10:47:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:05.900 nvme0n1 00:39:05.900 10:47:50 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:39:05.900 10:47:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:39:06.160 10:47:50 keyring_file -- keyring/file.sh@112 -- # config='{ 00:39:06.160 "subsystems": [ 00:39:06.160 { 00:39:06.160 "subsystem": "keyring", 00:39:06.160 "config": [ 00:39:06.160 { 00:39:06.160 "method": "keyring_file_add_key", 00:39:06.160 "params": { 00:39:06.160 "name": "key0", 00:39:06.160 "path": "/tmp/tmp.lsCrB5OhCi" 00:39:06.160 } 00:39:06.160 }, 00:39:06.160 { 00:39:06.160 "method": "keyring_file_add_key", 00:39:06.160 "params": { 00:39:06.160 "name": "key1", 00:39:06.160 "path": "/tmp/tmp.s1UZedJF6l" 00:39:06.160 } 00:39:06.160 } 00:39:06.160 ] 00:39:06.160 }, 00:39:06.160 { 00:39:06.160 "subsystem": "iobuf", 00:39:06.160 "config": [ 00:39:06.160 { 00:39:06.160 "method": "iobuf_set_options", 00:39:06.160 "params": { 00:39:06.160 "small_pool_count": 8192, 00:39:06.160 "large_pool_count": 1024, 00:39:06.160 "small_bufsize": 8192, 00:39:06.160 "large_bufsize": 135168 00:39:06.160 } 00:39:06.160 } 00:39:06.160 ] 00:39:06.160 }, 00:39:06.160 { 00:39:06.160 "subsystem": "sock", 00:39:06.160 "config": [ 00:39:06.160 { 00:39:06.160 "method": "sock_set_default_impl", 00:39:06.160 "params": { 00:39:06.160 "impl_name": "posix" 00:39:06.160 } 00:39:06.160 }, 00:39:06.160 { 00:39:06.160 "method": "sock_impl_set_options", 00:39:06.160 "params": { 00:39:06.160 "impl_name": "ssl", 00:39:06.160 "recv_buf_size": 4096, 00:39:06.160 "send_buf_size": 4096, 00:39:06.160 "enable_recv_pipe": true, 00:39:06.160 "enable_quickack": false, 00:39:06.160 "enable_placement_id": 0, 00:39:06.160 "enable_zerocopy_send_server": true, 00:39:06.160 "enable_zerocopy_send_client": false, 00:39:06.160 "zerocopy_threshold": 0, 00:39:06.160 "tls_version": 0, 00:39:06.160 "enable_ktls": false 00:39:06.160 } 00:39:06.160 }, 00:39:06.160 { 00:39:06.160 "method": "sock_impl_set_options", 00:39:06.160 "params": { 00:39:06.160 "impl_name": "posix", 00:39:06.160 "recv_buf_size": 2097152, 00:39:06.160 "send_buf_size": 2097152, 00:39:06.160 "enable_recv_pipe": true, 00:39:06.160 "enable_quickack": false, 00:39:06.160 "enable_placement_id": 0, 00:39:06.160 "enable_zerocopy_send_server": true, 00:39:06.160 "enable_zerocopy_send_client": false, 00:39:06.160 "zerocopy_threshold": 0, 00:39:06.160 "tls_version": 0, 00:39:06.160 "enable_ktls": false 00:39:06.160 } 00:39:06.160 } 00:39:06.160 ] 00:39:06.160 }, 00:39:06.160 { 00:39:06.160 "subsystem": "vmd", 00:39:06.160 "config": [] 00:39:06.160 }, 00:39:06.160 { 00:39:06.160 "subsystem": "accel", 00:39:06.160 "config": [ 00:39:06.160 { 00:39:06.160 "method": "accel_set_options", 00:39:06.160 "params": { 00:39:06.160 "small_cache_size": 128, 00:39:06.160 "large_cache_size": 16, 00:39:06.160 "task_count": 2048, 00:39:06.160 "sequence_count": 2048, 00:39:06.160 "buf_count": 2048 00:39:06.160 } 00:39:06.160 } 00:39:06.160 ] 00:39:06.160 
}, 00:39:06.160 { 00:39:06.160 "subsystem": "bdev", 00:39:06.160 "config": [ 00:39:06.160 { 00:39:06.160 "method": "bdev_set_options", 00:39:06.160 "params": { 00:39:06.160 "bdev_io_pool_size": 65535, 00:39:06.160 "bdev_io_cache_size": 256, 00:39:06.160 "bdev_auto_examine": true, 00:39:06.160 "iobuf_small_cache_size": 128, 00:39:06.161 "iobuf_large_cache_size": 16 00:39:06.161 } 00:39:06.161 }, 00:39:06.161 { 00:39:06.161 "method": "bdev_raid_set_options", 00:39:06.161 "params": { 00:39:06.161 "process_window_size_kb": 1024 00:39:06.161 } 00:39:06.161 }, 00:39:06.161 { 00:39:06.161 "method": "bdev_iscsi_set_options", 00:39:06.161 "params": { 00:39:06.161 "timeout_sec": 30 00:39:06.161 } 00:39:06.161 }, 00:39:06.161 { 00:39:06.161 "method": "bdev_nvme_set_options", 00:39:06.161 "params": { 00:39:06.161 "action_on_timeout": "none", 00:39:06.161 "timeout_us": 0, 00:39:06.161 "timeout_admin_us": 0, 00:39:06.161 "keep_alive_timeout_ms": 10000, 00:39:06.161 "arbitration_burst": 0, 00:39:06.161 "low_priority_weight": 0, 00:39:06.161 "medium_priority_weight": 0, 00:39:06.161 "high_priority_weight": 0, 00:39:06.161 "nvme_adminq_poll_period_us": 10000, 00:39:06.161 "nvme_ioq_poll_period_us": 0, 00:39:06.161 "io_queue_requests": 512, 00:39:06.161 "delay_cmd_submit": true, 00:39:06.161 "transport_retry_count": 4, 00:39:06.161 "bdev_retry_count": 3, 00:39:06.161 "transport_ack_timeout": 0, 00:39:06.161 "ctrlr_loss_timeout_sec": 0, 00:39:06.161 "reconnect_delay_sec": 0, 00:39:06.161 "fast_io_fail_timeout_sec": 0, 00:39:06.161 "disable_auto_failback": false, 00:39:06.161 "generate_uuids": false, 00:39:06.161 "transport_tos": 0, 00:39:06.161 "nvme_error_stat": false, 00:39:06.161 "rdma_srq_size": 0, 00:39:06.161 "io_path_stat": false, 00:39:06.161 "allow_accel_sequence": false, 00:39:06.161 "rdma_max_cq_size": 0, 00:39:06.161 "rdma_cm_event_timeout_ms": 0, 00:39:06.161 "dhchap_digests": [ 00:39:06.161 "sha256", 00:39:06.161 "sha384", 00:39:06.161 "sha512" 00:39:06.161 ], 00:39:06.161 "dhchap_dhgroups": [ 00:39:06.161 "null", 00:39:06.161 "ffdhe2048", 00:39:06.161 "ffdhe3072", 00:39:06.161 "ffdhe4096", 00:39:06.161 "ffdhe6144", 00:39:06.161 "ffdhe8192" 00:39:06.161 ] 00:39:06.161 } 00:39:06.161 }, 00:39:06.161 { 00:39:06.161 "method": "bdev_nvme_attach_controller", 00:39:06.161 "params": { 00:39:06.161 "name": "nvme0", 00:39:06.161 "trtype": "TCP", 00:39:06.161 "adrfam": "IPv4", 00:39:06.161 "traddr": "127.0.0.1", 00:39:06.161 "trsvcid": "4420", 00:39:06.161 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:06.161 "prchk_reftag": false, 00:39:06.161 "prchk_guard": false, 00:39:06.161 "ctrlr_loss_timeout_sec": 0, 00:39:06.161 "reconnect_delay_sec": 0, 00:39:06.161 "fast_io_fail_timeout_sec": 0, 00:39:06.161 "psk": "key0", 00:39:06.161 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:06.161 "hdgst": false, 00:39:06.161 "ddgst": false 00:39:06.161 } 00:39:06.161 }, 00:39:06.161 { 00:39:06.161 "method": "bdev_nvme_set_hotplug", 00:39:06.161 "params": { 00:39:06.161 "period_us": 100000, 00:39:06.161 "enable": false 00:39:06.161 } 00:39:06.161 }, 00:39:06.161 { 00:39:06.161 "method": "bdev_wait_for_examine" 00:39:06.161 } 00:39:06.161 ] 00:39:06.161 }, 00:39:06.161 { 00:39:06.161 "subsystem": "nbd", 00:39:06.161 "config": [] 00:39:06.161 } 00:39:06.161 ] 00:39:06.161 }' 00:39:06.161 10:47:50 keyring_file -- keyring/file.sh@114 -- # killprocess 2669735 00:39:06.161 10:47:50 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 2669735 ']' 00:39:06.161 10:47:50 keyring_file -- common/autotest_common.sh@952 -- # kill 
-0 2669735 00:39:06.161 10:47:50 keyring_file -- common/autotest_common.sh@953 -- # uname 00:39:06.161 10:47:50 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:39:06.161 10:47:50 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2669735 00:39:06.161 10:47:51 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:39:06.161 10:47:51 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:39:06.161 10:47:51 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2669735' 00:39:06.161 killing process with pid 2669735 00:39:06.161 10:47:51 keyring_file -- common/autotest_common.sh@967 -- # kill 2669735 00:39:06.161 Received shutdown signal, test time was about 1.000000 seconds 00:39:06.161 00:39:06.161 Latency(us) 00:39:06.161 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:06.161 =================================================================================================================== 00:39:06.161 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:06.161 10:47:51 keyring_file -- common/autotest_common.sh@972 -- # wait 2669735 00:39:06.421 10:47:51 keyring_file -- keyring/file.sh@117 -- # bperfpid=2671165 00:39:06.421 10:47:51 keyring_file -- keyring/file.sh@119 -- # waitforlisten 2671165 /var/tmp/bperf.sock 00:39:06.421 10:47:51 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 2671165 ']' 00:39:06.421 10:47:51 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:06.421 10:47:51 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:39:06.421 10:47:51 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:39:06.421 10:47:51 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:06.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
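
The bdevperf instance restarted here is handed its configuration over a file descriptor rather than a config file on disk: keyring/file.sh first captures the live configuration with save_config (the JSON echoed below) and then launches a fresh bdevperf with -c /dev/fd/63, i.e. a bash process substitution. A minimal sketch of that pattern, assuming the workspace paths and bperf socket used throughout this log; the exact redirection is inferred from the /dev/fd/63 argument, not shown verbatim in the trace:

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
bperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf

# Capture the current keyring/sock/bdev configuration as JSON.
config=$("$rpc_py" -s /var/tmp/bperf.sock save_config)

# Re-launch bdevperf with the same workload flags echoed above; <(...) expands
# to /dev/fd/63, which is exactly what the traced command line shows.
"$bperf" -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
    -r /var/tmp/bperf.sock -z -c <(echo "$config") &
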
00:39:06.421 10:47:51 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:39:06.421 "subsystems": [ 00:39:06.421 { 00:39:06.421 "subsystem": "keyring", 00:39:06.421 "config": [ 00:39:06.421 { 00:39:06.421 "method": "keyring_file_add_key", 00:39:06.421 "params": { 00:39:06.421 "name": "key0", 00:39:06.421 "path": "/tmp/tmp.lsCrB5OhCi" 00:39:06.421 } 00:39:06.421 }, 00:39:06.421 { 00:39:06.421 "method": "keyring_file_add_key", 00:39:06.421 "params": { 00:39:06.421 "name": "key1", 00:39:06.421 "path": "/tmp/tmp.s1UZedJF6l" 00:39:06.421 } 00:39:06.421 } 00:39:06.421 ] 00:39:06.421 }, 00:39:06.421 { 00:39:06.421 "subsystem": "iobuf", 00:39:06.421 "config": [ 00:39:06.421 { 00:39:06.421 "method": "iobuf_set_options", 00:39:06.421 "params": { 00:39:06.421 "small_pool_count": 8192, 00:39:06.421 "large_pool_count": 1024, 00:39:06.421 "small_bufsize": 8192, 00:39:06.421 "large_bufsize": 135168 00:39:06.421 } 00:39:06.421 } 00:39:06.421 ] 00:39:06.421 }, 00:39:06.421 { 00:39:06.421 "subsystem": "sock", 00:39:06.421 "config": [ 00:39:06.421 { 00:39:06.421 "method": "sock_set_default_impl", 00:39:06.421 "params": { 00:39:06.421 "impl_name": "posix" 00:39:06.421 } 00:39:06.421 }, 00:39:06.421 { 00:39:06.421 "method": "sock_impl_set_options", 00:39:06.421 "params": { 00:39:06.421 "impl_name": "ssl", 00:39:06.421 "recv_buf_size": 4096, 00:39:06.421 "send_buf_size": 4096, 00:39:06.421 "enable_recv_pipe": true, 00:39:06.421 "enable_quickack": false, 00:39:06.421 "enable_placement_id": 0, 00:39:06.421 "enable_zerocopy_send_server": true, 00:39:06.421 "enable_zerocopy_send_client": false, 00:39:06.421 "zerocopy_threshold": 0, 00:39:06.421 "tls_version": 0, 00:39:06.421 "enable_ktls": false 00:39:06.421 } 00:39:06.421 }, 00:39:06.421 { 00:39:06.421 "method": "sock_impl_set_options", 00:39:06.421 "params": { 00:39:06.421 "impl_name": "posix", 00:39:06.421 "recv_buf_size": 2097152, 00:39:06.421 "send_buf_size": 2097152, 00:39:06.422 "enable_recv_pipe": true, 00:39:06.422 "enable_quickack": false, 00:39:06.422 "enable_placement_id": 0, 00:39:06.422 "enable_zerocopy_send_server": true, 00:39:06.422 "enable_zerocopy_send_client": false, 00:39:06.422 "zerocopy_threshold": 0, 00:39:06.422 "tls_version": 0, 00:39:06.422 "enable_ktls": false 00:39:06.422 } 00:39:06.422 } 00:39:06.422 ] 00:39:06.422 }, 00:39:06.422 { 00:39:06.422 "subsystem": "vmd", 00:39:06.422 "config": [] 00:39:06.422 }, 00:39:06.422 { 00:39:06.422 "subsystem": "accel", 00:39:06.422 "config": [ 00:39:06.422 { 00:39:06.422 "method": "accel_set_options", 00:39:06.422 "params": { 00:39:06.422 "small_cache_size": 128, 00:39:06.422 "large_cache_size": 16, 00:39:06.422 "task_count": 2048, 00:39:06.422 "sequence_count": 2048, 00:39:06.422 "buf_count": 2048 00:39:06.422 } 00:39:06.422 } 00:39:06.422 ] 00:39:06.422 }, 00:39:06.422 { 00:39:06.422 "subsystem": "bdev", 00:39:06.422 "config": [ 00:39:06.422 { 00:39:06.422 "method": "bdev_set_options", 00:39:06.422 "params": { 00:39:06.422 "bdev_io_pool_size": 65535, 00:39:06.422 "bdev_io_cache_size": 256, 00:39:06.422 "bdev_auto_examine": true, 00:39:06.422 "iobuf_small_cache_size": 128, 00:39:06.422 "iobuf_large_cache_size": 16 00:39:06.422 } 00:39:06.422 }, 00:39:06.422 { 00:39:06.422 "method": "bdev_raid_set_options", 00:39:06.422 "params": { 00:39:06.422 "process_window_size_kb": 1024 00:39:06.422 } 00:39:06.422 }, 00:39:06.422 { 00:39:06.422 "method": "bdev_iscsi_set_options", 00:39:06.422 "params": { 00:39:06.422 "timeout_sec": 30 00:39:06.422 } 00:39:06.422 }, 00:39:06.422 { 00:39:06.422 "method": 
"bdev_nvme_set_options", 00:39:06.422 "params": { 00:39:06.422 "action_on_timeout": "none", 00:39:06.422 "timeout_us": 0, 00:39:06.422 "timeout_admin_us": 0, 00:39:06.422 "keep_alive_timeout_ms": 10000, 00:39:06.422 "arbitration_burst": 0, 00:39:06.422 "low_priority_weight": 0, 00:39:06.422 "medium_priority_weight": 0, 00:39:06.422 "high_priority_weight": 0, 00:39:06.422 "nvme_adminq_poll_period_us": 10000, 00:39:06.422 "nvme_ioq_poll_period_us": 0, 00:39:06.422 "io_queue_requests": 512, 00:39:06.422 "delay_cmd_submit": true, 00:39:06.422 "transport_retry_count": 4, 00:39:06.422 "bdev_retry_count": 3, 00:39:06.422 "transport_ack_timeout": 0, 00:39:06.422 "ctrlr_loss_timeout_sec": 0, 00:39:06.422 "reconnect_delay_sec": 0, 00:39:06.422 "fast_io_fail_timeout_sec": 0, 00:39:06.422 "disable_auto_failback": false, 00:39:06.422 "generate_uuids": false, 00:39:06.422 "transport_tos": 0, 00:39:06.422 "nvme_error_stat": false, 00:39:06.422 "rdma_srq_size": 0, 00:39:06.422 "io_path_stat": false, 00:39:06.422 "allow_accel_sequence": false, 00:39:06.422 "rdma_max_cq_size": 0, 00:39:06.422 "rdma_cm_event_timeout_ms": 0, 00:39:06.422 "dhchap_digests": [ 00:39:06.422 "sha256", 00:39:06.422 "sha384", 00:39:06.422 "sha512" 00:39:06.422 ], 00:39:06.422 "dhchap_dhgroups": [ 00:39:06.422 "null", 00:39:06.422 "ffdhe2048", 00:39:06.422 "ffdhe3072", 00:39:06.422 "ffdhe4096", 00:39:06.422 "ffdhe6144", 00:39:06.422 "ffdhe8192" 00:39:06.422 ] 00:39:06.422 } 00:39:06.422 }, 00:39:06.422 { 00:39:06.422 "method": "bdev_nvme_attach_controller", 00:39:06.422 "params": { 00:39:06.422 "name": "nvme0", 00:39:06.422 "trtype": "TCP", 00:39:06.422 "adrfam": "IPv4", 00:39:06.422 "traddr": "127.0.0.1", 00:39:06.422 "trsvcid": "4420", 00:39:06.422 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:06.422 "prchk_reftag": false, 00:39:06.422 "prchk_guard": false, 00:39:06.422 "ctrlr_loss_timeout_sec": 0, 00:39:06.422 "reconnect_delay_sec": 0, 00:39:06.422 "fast_io_fail_timeout_sec": 0, 00:39:06.422 "psk": "key0", 00:39:06.422 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:06.422 "hdgst": false, 00:39:06.422 "ddgst": false 00:39:06.422 } 00:39:06.422 }, 00:39:06.422 { 00:39:06.422 "method": "bdev_nvme_set_hotplug", 00:39:06.422 "params": { 00:39:06.422 "period_us": 100000, 00:39:06.422 "enable": false 00:39:06.422 } 00:39:06.422 }, 00:39:06.422 { 00:39:06.422 "method": "bdev_wait_for_examine" 00:39:06.422 } 00:39:06.422 ] 00:39:06.422 }, 00:39:06.422 { 00:39:06.422 "subsystem": "nbd", 00:39:06.422 "config": [] 00:39:06.422 } 00:39:06.422 ] 00:39:06.422 }' 00:39:06.422 10:47:51 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:39:06.422 10:47:51 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:06.422 [2024-07-14 10:47:51.256815] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:39:06.422 [2024-07-14 10:47:51.256868] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2671165 ] 00:39:06.422 EAL: No free 2048 kB hugepages reported on node 1 00:39:06.422 [2024-07-14 10:47:51.322742] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:06.422 [2024-07-14 10:47:51.360961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:39:06.681 [2024-07-14 10:47:51.514147] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:39:07.250 10:47:52 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:39:07.250 10:47:52 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:39:07.250 10:47:52 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:39:07.250 10:47:52 keyring_file -- keyring/file.sh@120 -- # jq length 00:39:07.250 10:47:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:07.509 10:47:52 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:39:07.509 10:47:52 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:39:07.509 10:47:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:07.509 10:47:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:07.509 10:47:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:07.509 10:47:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:07.509 10:47:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:07.509 10:47:52 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:39:07.509 10:47:52 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:39:07.509 10:47:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:07.509 10:47:52 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:07.509 10:47:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:07.509 10:47:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:07.509 10:47:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:07.768 10:47:52 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:39:07.768 10:47:52 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:39:07.768 10:47:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:39:07.768 10:47:52 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:39:08.028 10:47:52 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:39:08.028 10:47:52 keyring_file -- keyring/file.sh@1 -- # cleanup 00:39:08.028 10:47:52 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.lsCrB5OhCi /tmp/tmp.s1UZedJF6l 00:39:08.028 10:47:52 keyring_file -- keyring/file.sh@20 -- # killprocess 2671165 00:39:08.028 10:47:52 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 2671165 ']' 00:39:08.028 10:47:52 keyring_file -- common/autotest_common.sh@952 -- # kill -0 2671165 00:39:08.028 10:47:52 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:39:08.028 10:47:52 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:39:08.028 10:47:52 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2671165 00:39:08.028 10:47:52 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:39:08.028 10:47:52 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:39:08.028 10:47:52 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2671165' 00:39:08.028 killing process with pid 2671165 00:39:08.028 10:47:52 keyring_file -- common/autotest_common.sh@967 -- # kill 2671165 00:39:08.028 Received shutdown signal, test time was about 1.000000 seconds 00:39:08.028 00:39:08.028 Latency(us) 00:39:08.028 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:08.028 =================================================================================================================== 00:39:08.028 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:39:08.028 10:47:52 keyring_file -- common/autotest_common.sh@972 -- # wait 2671165 00:39:08.287 10:47:53 keyring_file -- keyring/file.sh@21 -- # killprocess 2669731 00:39:08.287 10:47:53 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 2669731 ']' 00:39:08.287 10:47:53 keyring_file -- common/autotest_common.sh@952 -- # kill -0 2669731 00:39:08.287 10:47:53 keyring_file -- common/autotest_common.sh@953 -- # uname 00:39:08.287 10:47:53 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:39:08.287 10:47:53 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2669731 00:39:08.288 10:47:53 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:39:08.288 10:47:53 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:39:08.288 10:47:53 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2669731' 00:39:08.288 killing process with pid 2669731 00:39:08.288 10:47:53 keyring_file -- common/autotest_common.sh@967 -- # kill 2669731 00:39:08.288 [2024-07-14 10:47:53.085303] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:39:08.288 10:47:53 keyring_file -- common/autotest_common.sh@972 -- # wait 2669731 00:39:08.546 00:39:08.546 real 0m10.941s 00:39:08.546 user 0m26.982s 00:39:08.546 sys 0m2.664s 00:39:08.546 10:47:53 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:08.546 10:47:53 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:08.546 ************************************ 00:39:08.546 END TEST keyring_file 00:39:08.546 ************************************ 00:39:08.546 10:47:53 -- common/autotest_common.sh@1142 -- # return 0 00:39:08.546 10:47:53 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:39:08.547 10:47:53 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:39:08.547 10:47:53 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:08.547 10:47:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:08.547 10:47:53 -- common/autotest_common.sh@10 -- # set +x 00:39:08.547 ************************************ 00:39:08.547 START TEST keyring_linux 00:39:08.547 ************************************ 00:39:08.547 10:47:53 keyring_linux -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:39:08.804 * Looking for test storage... 00:39:08.804 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:39:08.804 10:47:53 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:39:08.804 10:47:53 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:08.804 10:47:53 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:39:08.804 10:47:53 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:08.804 10:47:53 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:08.804 10:47:53 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:08.804 10:47:53 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:08.804 10:47:53 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:08.804 10:47:53 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:08.804 10:47:53 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:08.804 10:47:53 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:08.804 10:47:53 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:08.804 10:47:53 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:08.804 10:47:53 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:39:08.804 10:47:53 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:39:08.804 10:47:53 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:08.804 10:47:53 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:08.804 10:47:53 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:08.804 10:47:53 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:08.804 10:47:53 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:08.804 10:47:53 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:08.804 10:47:53 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:08.804 10:47:53 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:08.805 10:47:53 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:08.805 10:47:53 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:08.805 10:47:53 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:08.805 10:47:53 keyring_linux -- paths/export.sh@5 -- # export PATH 00:39:08.805 10:47:53 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:08.805 10:47:53 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:39:08.805 10:47:53 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:39:08.805 10:47:53 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:39:08.805 10:47:53 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:08.805 10:47:53 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:08.805 10:47:53 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:08.805 10:47:53 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:39:08.805 10:47:53 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:39:08.805 10:47:53 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:39:08.805 10:47:53 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:39:08.805 10:47:53 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:39:08.805 10:47:53 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:39:08.805 10:47:53 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:39:08.805 10:47:53 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:39:08.805 10:47:53 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:39:08.805 10:47:53 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:39:08.805 10:47:53 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:39:08.805 10:47:53 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:39:08.805 10:47:53 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:39:08.805 10:47:53 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:39:08.805 10:47:53 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:39:08.805 10:47:53 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:39:08.805 10:47:53 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:39:08.805 10:47:53 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:39:08.805 10:47:53 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:39:08.805 10:47:53 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:39:08.805 10:47:53 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:39:08.805 10:47:53 keyring_linux -- nvmf/common.sh@705 -- # python - 00:39:08.805 10:47:53 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:39:08.805 10:47:53 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:39:08.805 /tmp/:spdk-test:key0 00:39:08.805 10:47:53 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:39:08.805 10:47:53 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:39:08.805 10:47:53 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:39:08.805 10:47:53 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:39:08.805 10:47:53 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:39:08.805 10:47:53 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:39:08.805 10:47:53 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:39:08.805 10:47:53 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:39:08.805 10:47:53 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:39:08.805 10:47:53 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:39:08.805 10:47:53 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:39:08.805 10:47:53 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:39:08.805 10:47:53 keyring_linux -- nvmf/common.sh@705 -- # python - 00:39:08.805 10:47:53 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:39:08.805 10:47:53 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:39:08.805 /tmp/:spdk-test:key1 00:39:08.805 10:47:53 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2671581 00:39:08.805 10:47:53 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 2671581 00:39:08.805 10:47:53 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:39:08.805 10:47:53 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 2671581 ']' 00:39:08.805 10:47:53 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:08.805 10:47:53 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:39:08.805 10:47:53 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:08.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:08.805 10:47:53 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:39:08.805 10:47:53 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:08.805 [2024-07-14 10:47:53.714714] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
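
The /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1 files prepared here come from the prep_key helper in keyring/common.sh, whose steps are visible in the trace: format the raw hex key into the NVMe TLS interchange form (the NVMeTLSkey-1:00:...: strings that appear below) and restrict the file to mode 0600, which the keyring module requires (the earlier keyring_file negative test shows a 0660 file being rejected). A rough reconstruction from the traced steps; the redirect of format_interchange_psk into $path is an assumption, since only the call itself appears in the trace:

prep_key() {
    local name key digest path
    name=$1       # key name, kept to mirror the trace
    key=$2        # raw hex key material, e.g. 00112233445566778899aabbccddeeff
    digest=$3     # digest selector; 0 here, matching the :00: field in the keys below
    # keyring_file used mktemp for the path; keyring_linux passes it as $4.
    path=${4:-$(mktemp)}
    # Assumed redirect: write the NVMeTLSkey-1 interchange string into the file.
    format_interchange_psk "$key" "$digest" > "$path"
    chmod 0600 "$path"
    echo "$path"
}
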
00:39:08.805 [2024-07-14 10:47:53.714767] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2671581 ] 00:39:08.805 EAL: No free 2048 kB hugepages reported on node 1 00:39:08.805 [2024-07-14 10:47:53.778957] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:09.063 [2024-07-14 10:47:53.819674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:09.063 10:47:54 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:39:09.063 10:47:54 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:39:09.063 10:47:54 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:39:09.063 10:47:54 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:09.063 10:47:54 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:09.063 [2024-07-14 10:47:54.013573] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:09.063 null0 00:39:09.321 [2024-07-14 10:47:54.045625] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:39:09.321 [2024-07-14 10:47:54.045945] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:39:09.321 10:47:54 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:09.321 10:47:54 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:39:09.321 923649046 00:39:09.321 10:47:54 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:39:09.321 1018811000 00:39:09.321 10:47:54 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2671683 00:39:09.321 10:47:54 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2671683 /var/tmp/bperf.sock 00:39:09.321 10:47:54 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:39:09.321 10:47:54 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 2671683 ']' 00:39:09.321 10:47:54 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:09.321 10:47:54 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:39:09.321 10:47:54 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:09.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:09.321 10:47:54 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:39:09.321 10:47:54 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:09.321 [2024-07-14 10:47:54.114785] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
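
Where keyring_file registered key files with the keyring_file_add_key RPC, keyring_linux loads the formatted PSKs into the kernel session keyring with keyctl and then refers to them by name (:spdk-test:key0) when attaching the controller; the serial numbers printed below (923649046 and 1018811000) are what keyctl search later resolves and keyctl print dumps for comparison against the expected NVMeTLSkey string. A minimal sketch of that flow, assuming keyutils is installed and the key file was produced by prep_key as above; reading the payload back out of the file is an assumption, as the trace passes the literal string:

# Load the interchange-format PSK into the session keyring (@s);
# keyctl add prints the new key's serial number.
keyctl add user :spdk-test:key0 "$(cat /tmp/:spdk-test:key0)" @s

# Later: resolve the serial by name and read the payload back.
sn=$(keyctl search @s user :spdk-test:key0)
keyctl print "$sn"

The bdevperf side then enables the Linux keyring backend with the keyring_linux_set_options --enable RPC and attaches the controller with --psk :spdk-test:key0, as the trace below shows.
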
00:39:09.321 [2024-07-14 10:47:54.114828] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2671683 ] 00:39:09.321 EAL: No free 2048 kB hugepages reported on node 1 00:39:09.321 [2024-07-14 10:47:54.181953] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:09.321 [2024-07-14 10:47:54.222880] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:39:09.321 10:47:54 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:39:09.321 10:47:54 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:39:09.321 10:47:54 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:39:09.321 10:47:54 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:39:09.578 10:47:54 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:39:09.578 10:47:54 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:39:09.835 10:47:54 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:39:09.835 10:47:54 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:39:10.092 [2024-07-14 10:47:54.815696] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:39:10.092 nvme0n1 00:39:10.092 10:47:54 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:39:10.092 10:47:54 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:39:10.092 10:47:54 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:39:10.092 10:47:54 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:39:10.092 10:47:54 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:39:10.092 10:47:54 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:10.350 10:47:55 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:39:10.350 10:47:55 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:39:10.350 10:47:55 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:39:10.350 10:47:55 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:39:10.350 10:47:55 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:10.350 10:47:55 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:10.350 10:47:55 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:39:10.350 10:47:55 keyring_linux -- keyring/linux.sh@25 -- # sn=923649046 00:39:10.350 10:47:55 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:39:10.350 10:47:55 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:39:10.350 10:47:55 keyring_linux -- keyring/linux.sh@26 -- # [[ 923649046 == \9\2\3\6\4\9\0\4\6 ]] 00:39:10.350 10:47:55 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 923649046 00:39:10.350 10:47:55 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:39:10.350 10:47:55 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:39:10.608 Running I/O for 1 seconds... 00:39:11.544 00:39:11.544 Latency(us) 00:39:11.544 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:11.544 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:39:11.544 nvme0n1 : 1.01 18239.51 71.25 0.00 0.00 6988.89 4331.07 10314.80 00:39:11.544 =================================================================================================================== 00:39:11.544 Total : 18239.51 71.25 0.00 0.00 6988.89 4331.07 10314.80 00:39:11.544 0 00:39:11.544 10:47:56 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:39:11.544 10:47:56 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:39:11.803 10:47:56 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:39:11.803 10:47:56 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:39:11.803 10:47:56 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:39:11.803 10:47:56 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:39:11.803 10:47:56 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:39:11.803 10:47:56 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:11.803 10:47:56 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:39:11.803 10:47:56 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:39:11.803 10:47:56 keyring_linux -- keyring/linux.sh@23 -- # return 00:39:11.803 10:47:56 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:39:11.803 10:47:56 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:39:11.803 10:47:56 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:39:11.803 10:47:56 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:39:11.803 10:47:56 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:11.803 10:47:56 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:39:11.803 10:47:56 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:11.804 10:47:56 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:39:11.804 10:47:56 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:39:12.063 [2024-07-14 10:47:56.934305] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:39:12.063 [2024-07-14 10:47:56.934536] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1855bf0 (107): Transport endpoint is not connected 00:39:12.063 [2024-07-14 10:47:56.935531] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1855bf0 (9): Bad file descriptor 00:39:12.063 [2024-07-14 10:47:56.936533] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:39:12.063 [2024-07-14 10:47:56.936543] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:39:12.063 [2024-07-14 10:47:56.936551] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:39:12.063 request: 00:39:12.063 { 00:39:12.063 "name": "nvme0", 00:39:12.063 "trtype": "tcp", 00:39:12.063 "traddr": "127.0.0.1", 00:39:12.063 "adrfam": "ipv4", 00:39:12.063 "trsvcid": "4420", 00:39:12.063 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:12.063 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:12.063 "prchk_reftag": false, 00:39:12.063 "prchk_guard": false, 00:39:12.063 "hdgst": false, 00:39:12.063 "ddgst": false, 00:39:12.063 "psk": ":spdk-test:key1", 00:39:12.063 "method": "bdev_nvme_attach_controller", 00:39:12.063 "req_id": 1 00:39:12.063 } 00:39:12.063 Got JSON-RPC error response 00:39:12.063 response: 00:39:12.063 { 00:39:12.063 "code": -5, 00:39:12.063 "message": "Input/output error" 00:39:12.063 } 00:39:12.063 10:47:56 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:39:12.063 10:47:56 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:12.063 10:47:56 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:39:12.063 10:47:56 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:12.063 10:47:56 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:39:12.063 10:47:56 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:39:12.063 10:47:56 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:39:12.063 10:47:56 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:39:12.063 10:47:56 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:39:12.063 10:47:56 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:39:12.063 10:47:56 keyring_linux -- keyring/linux.sh@33 -- # sn=923649046 00:39:12.063 10:47:56 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 923649046 00:39:12.063 1 links removed 00:39:12.063 10:47:56 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:39:12.063 10:47:56 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:39:12.063 10:47:56 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:39:12.063 10:47:56 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:39:12.063 10:47:56 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:39:12.063 10:47:56 keyring_linux -- keyring/linux.sh@33 -- # sn=1018811000 00:39:12.063 
10:47:56 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 1018811000 00:39:12.063 1 links removed 00:39:12.063 10:47:56 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2671683 00:39:12.063 10:47:56 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 2671683 ']' 00:39:12.063 10:47:56 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 2671683 00:39:12.063 10:47:56 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:39:12.063 10:47:56 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:39:12.063 10:47:56 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2671683 00:39:12.063 10:47:57 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:39:12.063 10:47:57 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:39:12.063 10:47:57 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2671683' 00:39:12.063 killing process with pid 2671683 00:39:12.063 10:47:57 keyring_linux -- common/autotest_common.sh@967 -- # kill 2671683 00:39:12.063 Received shutdown signal, test time was about 1.000000 seconds 00:39:12.063 00:39:12.063 Latency(us) 00:39:12.063 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:12.063 =================================================================================================================== 00:39:12.063 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:12.063 10:47:57 keyring_linux -- common/autotest_common.sh@972 -- # wait 2671683 00:39:12.323 10:47:57 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2671581 00:39:12.323 10:47:57 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 2671581 ']' 00:39:12.323 10:47:57 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 2671581 00:39:12.323 10:47:57 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:39:12.323 10:47:57 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:39:12.323 10:47:57 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2671581 00:39:12.323 10:47:57 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:39:12.323 10:47:57 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:39:12.323 10:47:57 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2671581' 00:39:12.323 killing process with pid 2671581 00:39:12.323 10:47:57 keyring_linux -- common/autotest_common.sh@967 -- # kill 2671581 00:39:12.323 10:47:57 keyring_linux -- common/autotest_common.sh@972 -- # wait 2671581 00:39:12.582 00:39:12.582 real 0m4.071s 00:39:12.582 user 0m7.501s 00:39:12.582 sys 0m1.413s 00:39:12.582 10:47:57 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:12.582 10:47:57 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:12.582 ************************************ 00:39:12.582 END TEST keyring_linux 00:39:12.582 ************************************ 00:39:12.582 10:47:57 -- common/autotest_common.sh@1142 -- # return 0 00:39:12.582 10:47:57 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:39:12.583 10:47:57 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:39:12.583 10:47:57 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:39:12.583 10:47:57 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:39:12.583 10:47:57 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:39:12.843 10:47:57 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:39:12.843 10:47:57 -- 
spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:39:12.843 10:47:57 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:39:12.843 10:47:57 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:39:12.843 10:47:57 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:39:12.843 10:47:57 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:39:12.843 10:47:57 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:39:12.843 10:47:57 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:39:12.843 10:47:57 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:39:12.843 10:47:57 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:39:12.843 10:47:57 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:39:12.843 10:47:57 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:39:12.843 10:47:57 -- common/autotest_common.sh@722 -- # xtrace_disable 00:39:12.843 10:47:57 -- common/autotest_common.sh@10 -- # set +x 00:39:12.843 10:47:57 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:39:12.843 10:47:57 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:39:12.843 10:47:57 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:39:12.843 10:47:57 -- common/autotest_common.sh@10 -- # set +x 00:39:18.122 INFO: APP EXITING 00:39:18.122 INFO: killing all VMs 00:39:18.122 INFO: killing vhost app 00:39:18.122 INFO: EXIT DONE 00:39:20.029 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:39:20.029 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:39:20.288 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:39:20.288 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:39:20.288 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:39:20.288 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:39:20.288 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:39:20.288 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:39:20.288 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:39:20.288 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:39:20.288 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:39:20.288 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:39:20.288 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:39:20.288 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:39:20.546 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:39:20.546 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:39:20.546 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:39:23.837 Cleaning 00:39:23.837 Removing: /var/run/dpdk/spdk0/config 00:39:23.837 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:39:23.837 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:39:23.837 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:39:23.837 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:39:23.837 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:39:23.837 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:39:23.837 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:39:23.837 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:39:23.837 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:39:23.837 Removing: /var/run/dpdk/spdk0/hugepage_info 00:39:23.837 Removing: /var/run/dpdk/spdk1/config 00:39:23.837 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:39:23.837 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:39:23.837 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:39:23.837 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:39:23.837 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:39:23.837 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:39:23.837 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:39:23.837 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:39:23.837 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:39:23.837 Removing: /var/run/dpdk/spdk1/hugepage_info 00:39:23.837 Removing: /var/run/dpdk/spdk1/mp_socket 00:39:23.837 Removing: /var/run/dpdk/spdk2/config 00:39:23.837 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:39:23.837 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:39:23.837 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:39:23.837 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:39:23.837 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:39:23.837 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:39:23.837 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:39:23.837 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:39:23.837 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:39:23.837 Removing: /var/run/dpdk/spdk2/hugepage_info 00:39:23.837 Removing: /var/run/dpdk/spdk3/config 00:39:23.837 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:39:23.837 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:39:23.837 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:39:23.837 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:39:23.837 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:39:23.837 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:39:23.837 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:39:23.837 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:39:23.837 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:39:23.837 Removing: /var/run/dpdk/spdk3/hugepage_info 00:39:23.837 Removing: /var/run/dpdk/spdk4/config 00:39:23.837 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:39:23.837 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:39:23.837 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:39:23.837 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:39:23.837 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:39:23.837 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:39:23.837 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:39:23.837 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:39:23.837 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:39:23.837 Removing: /var/run/dpdk/spdk4/hugepage_info 00:39:23.837 Removing: /dev/shm/bdev_svc_trace.1 00:39:23.837 Removing: /dev/shm/nvmf_trace.0 00:39:23.837 Removing: /dev/shm/spdk_tgt_trace.pid2204567 00:39:23.837 Removing: /var/run/dpdk/spdk0 00:39:23.837 Removing: /var/run/dpdk/spdk1 00:39:23.837 Removing: /var/run/dpdk/spdk2 00:39:23.837 Removing: /var/run/dpdk/spdk3 00:39:23.837 Removing: /var/run/dpdk/spdk4 00:39:23.837 Removing: /var/run/dpdk/spdk_pid2020732 00:39:23.837 Removing: /var/run/dpdk/spdk_pid2202432 00:39:23.837 Removing: /var/run/dpdk/spdk_pid2203402 00:39:23.837 Removing: /var/run/dpdk/spdk_pid2204567 00:39:23.837 Removing: /var/run/dpdk/spdk_pid2204979 00:39:23.837 Removing: /var/run/dpdk/spdk_pid2205923 00:39:23.837 Removing: /var/run/dpdk/spdk_pid2206155 00:39:23.837 Removing: /var/run/dpdk/spdk_pid2207130 00:39:23.837 Removing: /var/run/dpdk/spdk_pid2207141 00:39:23.837 Removing: /var/run/dpdk/spdk_pid2207473 00:39:23.837 Removing: 
/var/run/dpdk/spdk_pid2208985 00:39:23.837 Removing: /var/run/dpdk/spdk_pid2210263 00:39:23.837 Removing: /var/run/dpdk/spdk_pid2210544 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2210824 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2211120 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2211284 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2211471 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2211695 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2211969 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2212704 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2216207 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2216465 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2216694 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2216733 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2217219 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2217240 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2217725 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2217737 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2217998 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2218100 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2218259 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2218413 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2218821 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2219071 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2219364 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2219620 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2219649 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2219755 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2220003 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2220243 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2220484 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2220738 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2220982 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2221234 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2221481 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2221728 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2221979 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2222224 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2222469 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2222721 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2222965 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2223208 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2223461 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2223713 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2223964 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2224225 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2224472 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2224724 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2224991 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2225186 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2228949 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2309749 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2314032 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2324017 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2329199 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2333194 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2333874 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2340393 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2346238 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2346321 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2347092 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2348017 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2348933 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2349399 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2349448 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2349734 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2349866 00:39:23.838 Removing: 
/var/run/dpdk/spdk_pid2349868 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2350779 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2351689 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2352441 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2353076 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2353081 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2353312 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2354322 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2355293 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2363478 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2363865 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2368106 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2373735 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2376320 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2387008 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2395671 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2397419 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2398328 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2414904 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2418545 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2443916 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2448322 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2450002 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2451611 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2451845 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2451863 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2452018 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2452371 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2454193 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2454738 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2455223 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2457333 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2458015 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2458540 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2462760 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2468683 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2473474 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2509488 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2513388 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2519551 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2520895 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2522229 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2526434 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2530356 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2537652 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2537654 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2542143 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2542370 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2542596 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2543047 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2543057 00:39:23.838 Removing: /var/run/dpdk/spdk_pid2544449 00:39:24.097 Removing: /var/run/dpdk/spdk_pid2546152 00:39:24.097 Removing: /var/run/dpdk/spdk_pid2548162 00:39:24.097 Removing: /var/run/dpdk/spdk_pid2549764 00:39:24.097 Removing: /var/run/dpdk/spdk_pid2551377 00:39:24.097 Removing: /var/run/dpdk/spdk_pid2553192 00:39:24.097 Removing: /var/run/dpdk/spdk_pid2559041 00:39:24.097 Removing: /var/run/dpdk/spdk_pid2559604 00:39:24.097 Removing: /var/run/dpdk/spdk_pid2561351 00:39:24.097 Removing: /var/run/dpdk/spdk_pid2562175 00:39:24.097 Removing: /var/run/dpdk/spdk_pid2567875 00:39:24.097 Removing: /var/run/dpdk/spdk_pid2570429 00:39:24.097 Removing: /var/run/dpdk/spdk_pid2575776 00:39:24.097 Removing: /var/run/dpdk/spdk_pid2581092 00:39:24.097 Removing: /var/run/dpdk/spdk_pid2589942 00:39:24.097 Removing: /var/run/dpdk/spdk_pid2597098 00:39:24.097 Removing: 
/var/run/dpdk/spdk_pid2597148 00:39:24.097 Removing: /var/run/dpdk/spdk_pid2614993 00:39:24.097 Removing: /var/run/dpdk/spdk_pid2615507 00:39:24.097 Removing: /var/run/dpdk/spdk_pid2616149 00:39:24.097 Removing: /var/run/dpdk/spdk_pid2616622 00:39:24.097 Removing: /var/run/dpdk/spdk_pid2617349 00:39:24.097 Removing: /var/run/dpdk/spdk_pid2617832 00:39:24.097 Removing: /var/run/dpdk/spdk_pid2618308 00:39:24.097 Removing: /var/run/dpdk/spdk_pid2618784 00:39:24.097 Removing: /var/run/dpdk/spdk_pid2623024 00:39:24.097 Removing: /var/run/dpdk/spdk_pid2623257 00:39:24.097 Removing: /var/run/dpdk/spdk_pid2629128 00:39:24.097 Removing: /var/run/dpdk/spdk_pid2629386 00:39:24.097 Removing: /var/run/dpdk/spdk_pid2631729 00:39:24.097 Removing: /var/run/dpdk/spdk_pid2639739 00:39:24.097 Removing: /var/run/dpdk/spdk_pid2639799 00:39:24.097 Removing: /var/run/dpdk/spdk_pid2644864 00:39:24.097 Removing: /var/run/dpdk/spdk_pid2646738 00:39:24.097 Removing: /var/run/dpdk/spdk_pid2648577 00:39:24.097 Removing: /var/run/dpdk/spdk_pid2649757 00:39:24.097 Removing: /var/run/dpdk/spdk_pid2651763 00:39:24.097 Removing: /var/run/dpdk/spdk_pid2652870 00:39:24.097 Removing: /var/run/dpdk/spdk_pid2661589 00:39:24.097 Removing: /var/run/dpdk/spdk_pid2662049 00:39:24.097 Removing: /var/run/dpdk/spdk_pid2662532 00:39:24.097 Removing: /var/run/dpdk/spdk_pid2664896 00:39:24.097 Removing: /var/run/dpdk/spdk_pid2665436 00:39:24.097 Removing: /var/run/dpdk/spdk_pid2665922 00:39:24.097 Removing: /var/run/dpdk/spdk_pid2669731 00:39:24.097 Removing: /var/run/dpdk/spdk_pid2669735 00:39:24.097 Removing: /var/run/dpdk/spdk_pid2671165 00:39:24.097 Removing: /var/run/dpdk/spdk_pid2671581 00:39:24.097 Removing: /var/run/dpdk/spdk_pid2671683 00:39:24.097 Clean 00:39:24.097 10:48:09 -- common/autotest_common.sh@1451 -- # return 0 00:39:24.097 10:48:09 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:39:24.097 10:48:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:39:24.097 10:48:09 -- common/autotest_common.sh@10 -- # set +x 00:39:24.356 10:48:09 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:39:24.356 10:48:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:39:24.356 10:48:09 -- common/autotest_common.sh@10 -- # set +x 00:39:24.356 10:48:09 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:39:24.356 10:48:09 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:39:24.356 10:48:09 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:39:24.356 10:48:09 -- spdk/autotest.sh@391 -- # hash lcov 00:39:24.356 10:48:09 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:39:24.356 10:48:09 -- spdk/autotest.sh@393 -- # hostname 00:39:24.356 10:48:09 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:39:24.356 geninfo: WARNING: invalid characters removed from testname! 
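The coverage stage that the capture above belongs to, together with the merge and filter passes that follow, amounts to a handful of lcov invocations. Condensed from the commands in this log (the extra --rc geninfo/genhtml switches used above are folded into one variable, and workspace paths are shortened to a local checkout), a rough equivalent is:

    RC='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'

    # Capture post-test counters for the SPDK tree, tagged with the host name.
    lcov $RC --no-external -q -c -d ./spdk -t spdk-wfp-08 -o cov_test.info

    # Merge the pre-test baseline with the post-test capture.
    lcov $RC -q -a cov_base.info -a cov_test.info -o cov_total.info

    # Drop coverage that is not SPDK's own code: bundled DPDK, system headers,
    # and a few example/app directories -- the same patterns removed above.
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov $RC -q -r cov_total.info "$pat" -o cov_total.info
    done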
00:39:46.321 10:48:29 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:47.258 10:48:31 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:49.161 10:48:33 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:51.067 10:48:35 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:52.447 10:48:37 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:54.354 10:48:39 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:56.261 10:48:40 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:39:56.261 10:48:41 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:56.261 10:48:41 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:39:56.261 10:48:41 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:56.261 10:48:41 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:56.261 10:48:41 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:56.261 10:48:41 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:56.261 10:48:41 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:56.261 10:48:41 -- paths/export.sh@5 -- $ export PATH 00:39:56.261 10:48:41 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:56.261 10:48:41 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:39:56.261 10:48:41 -- common/autobuild_common.sh@444 -- $ date +%s 00:39:56.261 10:48:41 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720946921.XXXXXX 00:39:56.261 10:48:41 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720946921.VmW4Hv 00:39:56.261 10:48:41 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:39:56.261 10:48:41 -- common/autobuild_common.sh@450 -- $ '[' -n v23.11 ']' 00:39:56.261 10:48:41 -- common/autobuild_common.sh@451 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:39:56.261 10:48:41 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:39:56.261 10:48:41 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:39:56.261 10:48:41 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:39:56.261 10:48:41 -- common/autobuild_common.sh@460 -- $ get_config_params 00:39:56.261 10:48:41 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:39:56.261 10:48:41 -- common/autotest_common.sh@10 -- $ set +x 00:39:56.262 10:48:41 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:39:56.262 10:48:41 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:39:56.262 10:48:41 -- pm/common@17 -- $ local monitor 00:39:56.262 10:48:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:39:56.262 10:48:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:39:56.262 10:48:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:39:56.262 
10:48:41 -- pm/common@21 -- $ date +%s 00:39:56.262 10:48:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:39:56.262 10:48:41 -- pm/common@21 -- $ date +%s 00:39:56.262 10:48:41 -- pm/common@25 -- $ sleep 1 00:39:56.262 10:48:41 -- pm/common@21 -- $ date +%s 00:39:56.262 10:48:41 -- pm/common@21 -- $ date +%s 00:39:56.262 10:48:41 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720946921 00:39:56.262 10:48:41 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720946921 00:39:56.262 10:48:41 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720946921 00:39:56.262 10:48:41 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720946921 00:39:56.262 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720946921_collect-vmstat.pm.log 00:39:56.262 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720946921_collect-cpu-load.pm.log 00:39:56.262 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720946921_collect-cpu-temp.pm.log 00:39:56.262 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720946921_collect-bmc-pm.bmc.pm.log 00:39:57.201 10:48:42 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:39:57.201 10:48:42 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j96 00:39:57.201 10:48:42 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:39:57.201 10:48:42 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:39:57.201 10:48:42 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:39:57.201 10:48:42 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:39:57.201 10:48:42 -- spdk/autopackage.sh@19 -- $ timing_finish 00:39:57.201 10:48:42 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:39:57.201 10:48:42 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:39:57.201 10:48:42 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:39:57.201 10:48:42 -- spdk/autopackage.sh@20 -- $ exit 0 00:39:57.201 10:48:42 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:39:57.201 10:48:42 -- pm/common@29 -- $ signal_monitor_resources TERM 00:39:57.201 10:48:42 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:39:57.201 10:48:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:39:57.201 10:48:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:39:57.201 10:48:42 -- pm/common@44 -- $ pid=2683203 00:39:57.201 10:48:42 -- pm/common@50 -- $ kill -TERM 2683203 00:39:57.201 10:48:42 -- pm/common@42 -- $ for monitor in 
"${MONITOR_RESOURCES[@]}" 00:39:57.201 10:48:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:39:57.201 10:48:42 -- pm/common@44 -- $ pid=2683205 00:39:57.201 10:48:42 -- pm/common@50 -- $ kill -TERM 2683205 00:39:57.201 10:48:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:39:57.201 10:48:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:39:57.201 10:48:42 -- pm/common@44 -- $ pid=2683206 00:39:57.201 10:48:42 -- pm/common@50 -- $ kill -TERM 2683206 00:39:57.201 10:48:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:39:57.201 10:48:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:39:57.201 10:48:42 -- pm/common@44 -- $ pid=2683229 00:39:57.201 10:48:42 -- pm/common@50 -- $ sudo -E kill -TERM 2683229 00:39:57.461 + [[ -n 2083590 ]] 00:39:57.461 + sudo kill 2083590 00:39:57.471 [Pipeline] } 00:39:57.490 [Pipeline] // stage 00:39:57.497 [Pipeline] } 00:39:57.515 [Pipeline] // timeout 00:39:57.520 [Pipeline] } 00:39:57.537 [Pipeline] // catchError 00:39:57.544 [Pipeline] } 00:39:57.563 [Pipeline] // wrap 00:39:57.570 [Pipeline] } 00:39:57.586 [Pipeline] // catchError 00:39:57.596 [Pipeline] stage 00:39:57.598 [Pipeline] { (Epilogue) 00:39:57.614 [Pipeline] catchError 00:39:57.616 [Pipeline] { 00:39:57.631 [Pipeline] echo 00:39:57.633 Cleanup processes 00:39:57.640 [Pipeline] sh 00:39:57.928 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:39:57.928 2683324 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:39:57.928 2683603 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:39:57.942 [Pipeline] sh 00:39:58.257 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:39:58.257 ++ grep -v 'sudo pgrep' 00:39:58.257 ++ awk '{print $1}' 00:39:58.257 + sudo kill -9 2683324 00:39:58.270 [Pipeline] sh 00:39:58.571 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:40:08.562 [Pipeline] sh 00:40:08.844 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:40:08.844 Artifacts sizes are good 00:40:08.858 [Pipeline] archiveArtifacts 00:40:08.864 Archiving artifacts 00:40:09.112 [Pipeline] sh 00:40:09.396 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:40:09.412 [Pipeline] cleanWs 00:40:09.422 [WS-CLEANUP] Deleting project workspace... 00:40:09.422 [WS-CLEANUP] Deferred wipeout is used... 00:40:09.428 [WS-CLEANUP] done 00:40:09.429 [Pipeline] } 00:40:09.450 [Pipeline] // catchError 00:40:09.463 [Pipeline] sh 00:40:09.746 + logger -p user.info -t JENKINS-CI 00:40:09.755 [Pipeline] } 00:40:09.772 [Pipeline] // stage 00:40:09.777 [Pipeline] } 00:40:09.794 [Pipeline] // node 00:40:09.800 [Pipeline] End of Pipeline 00:40:09.836 Finished: SUCCESS